Made with Unity 2018.2.3f1 and ML-Agents Beta 0.5.0a. Check out the original Medium post by Abe Haskins that inspired this.
If you like this repo, make sure to follow me @seppischuchmann 👋.
I modified the original tf-jam project for use with ML-Agents. Now you can train everything with the ML-Agents toolkit via reinforcement learning, which makes it really easy to extend the scope of the agent's capabilities.
- First, download ML-Agents Beta 0.5.0a and import the ML-Agents folder (.../ml-agents-0.5.0a/UnitySDK/Assets/) into the project's Assets folder.
- Then install the TFSharpPlugin unitypackage.
- Make sure you have all the necessary Python dependencies installed. You can find more information in the ML-Agents installation docs.
If you need more guidance check out the basic guide.
After importing the packages, we have to make sure everything is set up correctly. Configure the agent with the following settings:
You can find those settings in the Hierarchy under PlayerCollection/Player/BallSpawner(Agent).
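To give you a rough idea of what that agent looks like under the hood, here is a minimal sketch of how an ML-Agents 0.5 agent talks to its brain. This is not the actual BallSpawnerController.cs; the observation and action values (hoop distance, shot angle and force) are just assumptions for illustration:

```csharp
using UnityEngine;
using MLAgents;

// Minimal sketch only, NOT the actual BallSpawnerController.cs.
// The observations and actions below are assumptions for illustration.
public class ExampleShooterAgent : Agent
{
    public Transform hoop;   // assumed reference, assigned in the Inspector

    public override void CollectObservations()
    {
        // Give the brain what it needs to decide on a shot,
        // e.g. the vector from the player to the hoop.
        AddVectorObs(hoop.position - transform.position);
    }

    public override void AgentAction(float[] vectorAction, string textAction)
    {
        // The brain's continuous outputs could drive the shot, e.g. angle and force.
        float angle = vectorAction[0];
        float force = vectorAction[1];
        // ... spawn a ball here and shoot it with (angle, force) ...

        // When the ball scores, reward the agent and end the episode:
        // SetReward(1f); Done();
    }

    public override void AgentReset()
    {
        // Reset anything needed between episodes (clean up balls, reposition, ...).
    }
}
```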
- I included a pretrained model in the Assets/ML-Model folder. I trained it for 7 hours on my MacBook Pro, and it pretty much hits the court every time. To try it out, just drag the "editor_Academy_tfhoop-execute4-0" file onto the "Graph Model" parameter of the brain.
- Just hit play! 🎮
- Enable every "Player" GameObject and add it to "PlayerCollection". This way we can have multiple agents training at once. They are all connected to the same brain, so the training speeds up. Depending on your computer, you can handle more or fewer agents at once. Just play around.
- Set the Brain Type to External.
- Then uncomment line 76 in BallSpawnerController.cs. Now the agents will move randomly to diversify the training data (see the sketch after this list).
- Finally, follow the instructions listed here.
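If you are curious what that randomization roughly does, here is a hypothetical sketch (not the actual contents of line 76): before each shot the player gets teleported somewhere else on the court, so the brain sees lots of different distances and angles.

```csharp
using UnityEngine;

// Hypothetical sketch of the kind of randomization that line 76 enables;
// the real BallSpawnerController.cs may use different names and ranges.
public class RandomMoveSketch : MonoBehaviour
{
    void MoveToRandomPosition()
    {
        // Assumed court bounds, purely illustrative.
        float x = Random.Range(-5f, 5f);
        float z = Random.Range(2f, 15f);

        // Teleport the player so each shot is attempted from a new distance and angle.
        transform.position = new Vector3(x, transform.position.y, z);
    }
}
```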
Again, if you need more guidance, check out the ML-Agents docs; they are great.