This is a multi-agent version of TORCS, for multi-agent reinforcement learning: multiple cars running simultaneously on a track can be controlled by different control algorithms (heuristic, reinforcement learning-based, etc.). It consists of two components:
- TORCS (the simulator)
- the Simulated Car Racing (SCR) modules (a patch which creates a server-client model to expose the higher-level game features to the learning agent)
It is assumed that you have TORCS installed from source on a machine running Ubuntu 14.04/16.04 LTS.

Install the scr-client as follows:
- Download the scr-patch from here.
- Unpack the package `scr-linux-patch.tgz` in your base TORCS directory. This will create a new directory called `scr-patch`.
- Apply the patch:

```
cd scr-patch
sh do_patch.sh
```

  (run `sh do_unpatch.sh` to revert the modifications)
- Move back to the parent TORCS directory:

```
cd ../
```
- Run the following commands:

```
./configure
make -j4
sudo make install -j4
sudo make datainstall -j4
```

Ten `scr_server` cars should now be available in the race configurations.
- Download the C++ client from here.
- Unpack the package `scr-client-cpp.tgz` in your base TORCS directory. This will create a new directory called `scr-client-cpp`.
- Build the client:

```
cd scr-client-cpp
make -j4
```
- At this point, multiple clients can join an instance of the TORCS game, each on its own port:

```
./client
./client port:3002
```

Typical port values are between 3001 and 3010 (3001 is the default).
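Under the hood, each client talks to one `scr_server` instance over UDP. A minimal Python sketch of the handshake, assuming the SCR protocol's init message format (the client sends `<id>(init a_1 ... a_19)` with 19 rangefinder angles, and the server replies `***identified***`):

```python
import socket

def build_init_msg(client_id="SCR", angles=None):
    """Build the UDP init string announcing one client to a scr_server."""
    if angles is None:
        # 19 rangefinder angles spread over [-90, 90] degrees
        angles = range(-90, 91, 10)
    return "{}(init {})".format(client_id, " ".join(str(a) for a in angles))

def connect(port, host="localhost", timeout=1.0):
    """Send the init message to one scr_server port; returns the socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_init_msg().encode(), (host, port))
    return sock
```

Each agent would call `connect()` with its own port, e.g. `connect(3001)`, `connect(3002)`, and so on; the C++ client above does exactly this handshake for you.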
- Start a 'Quick Race' in TORCS in one terminal console (with the n agents being `scr_*` cars):

```
torcs
```

  Close the TORCS window.
- From inside the multi-agent-torcs directory, in one console:

```
python playGame.py 3001
```

- From another console:

```
python playGame.py 3002
```

And so on...
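Opening one console per agent can also be scripted. A small illustrative sketch (not part of the repo) that spawns one `playGame.py` process per port:

```python
import subprocess

def agent_commands(n_agents, base_port=3001):
    """Build one `python playGame.py <port>` command per agent."""
    return [["python", "playGame.py", str(base_port + i)]
            for i in range(n_agents)]

def launch(n_agents):
    """Spawn each agent in its own process, one per consecutive port."""
    return [subprocess.Popen(cmd) for cmd in agent_commands(n_agents)]
```

Calling `launch(3)` would start agents on ports 3001, 3002, and 3003, mirroring the manual per-console commands above.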
In the game loop in `playGame.py`, the action `a_t` at every timestep can be supplied by any algorithm.
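As an illustration of plugging in an algorithm, here is a sketch of a simple heuristic policy. The sensor names `angle`, `trackPos`, and `speedX` come from the SCR protocol; the exact observation object and gains used in `playGame.py` may differ, so treat this as a template, not the repo's policy:

```python
def heuristic_action(obs, target_speed=0.3):
    """Map one observation to a_t = [steer, accel, brake].

    obs: dict with SCR-style fields
      angle    -- angle between car heading and track axis (radians)
      trackPos -- lateral position relative to the track centre
      speedX   -- longitudinal speed (assumed normalized here)
    """
    # Steer back toward the track axis, damped by the lateral offset.
    steer = obs["angle"] * 10.0 / 45.0 - obs["trackPos"] * 0.10
    steer = max(-1.0, min(1.0, steer))

    # Simple bang-bang speed control around target_speed.
    if obs["speedX"] < target_speed:
        accel, brake = 0.8, 0.0
    else:
        accel, brake = 0.0, 0.1
    return [steer, accel, brake]
```

Any learned policy (e.g. the DDPG actor below) slots into the loop the same way: observation in, `[steer, accel, brake]` out.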
- Start a 'Quick Race' in TORCS in one terminal console. Choose only one `scr` car and as many traffic cars as you want (preferably `chenyi*`¹ cars, since they are programmed to follow individual lanes at speeds low enough for the agent to learn to overtake).
- From inside the multi-agent-torcs directory, in one console:

```
python playGame_DDPG.py 3001
```

or any other port.
Sample results for a DDPG agent trained to drive in traffic are available here.
Do check out the wiki for this project for in-depth information about TORCS and getting Deep (Reinforcement) Learning to work on it.
¹ The `chenyi*` cars can be installed from Princeton's DeepDrive project, which also adds a few maps for training and testing the agents. The default cars in TORCS are all heuristically programmed racing agents, which do not serve as good stand-ins for 'traffic'. Hence, using chenyi's code is highly recommended.
The multi-agent learning simulator was developed by Abhishek Naik, extending ugo-nama-kun's gym-torcs and yanpanlau's project, under the guidance of Anirban Santara, Balaraman Ravindran, and Bharat Kaul, at Intel Labs.