
The multi-agent version of TORCS for developing control algorithms for fully autonomous driving in the cluttered, multi-agent settings of everyday life.


MADRaS - Multi-Agent DRiving Simulator


This is a multi-agent version of TORCS for multi-agent reinforcement learning: multiple cars running simultaneously on a track can each be controlled by a different algorithm - heuristic, reinforcement learning-based, etc.

Dependencies

  • TORCS (the simulator)
  • Simulated Car Racing modules (the patch which creates a server-client model to expose the higher-level game features to the learning agent)

Installation

It is assumed that you have built TORCS from source on a machine running Ubuntu 14.04/16.04 LTS.

scr-client

Install the scr-client as follows:

  1. Download the scr-patch from here.
  2. Unpack the package scr-linux-patch.tgz in your base TORCS directory.
  3. This will create a new directory called scr-patch.
    cd scr-patch
  4. sh do_patch.sh (do_unpatch.sh to revert the modifications)
  5. Move to the parent TORCS directory
    cd ../
  6. Run the following commands:
    ./configure    
    make -j4    
    sudo make install -j4    
    sudo make datainstall -j4    
    

Ten scr_server cars should now be available in the race configurations.

  1. Download the C++ client from here.
  2. Unpack the package scr-client-cpp.tgz in your base TORCS directory.
  3. This will create a new directory called scr-client-cpp.
    cd scr-client-cpp
  4. make -j4
  5. At this point, multiple clients can join an instance of the TORCS game by:
    ./client    
    ./client port:3002
    
Typical port values lie between 3001 and 3010 (3001 is the default).
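
The per-agent commands above follow a simple pattern. The helper below is illustrative (it is not part of the repository): it builds one `./client` launch command per agent, assuming the convention that the ten scr_server slots listen on consecutive ports starting at 3001.

```python
def client_commands(n_agents, base_port=3001, max_port=3010):
    """Build one scr-client launch command per agent.

    Ports 3001-3010 correspond to the ten scr_server slots that the
    patched TORCS exposes; 3001 is the default client port.
    """
    if not 1 <= n_agents <= max_port - base_port + 1:
        raise ValueError("the patched TORCS exposes at most 10 scr_server slots")
    return ["./client port:%d" % (base_port + i) for i in range(n_agents)]
```

For example, `client_commands(3)` yields the commands for ports 3001-3003, one per console.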

Usage

  1. Start a 'Quick Race' in TORCS in one terminal console, with the n agents chosen as scr_* cars
    torcs
    Close the TORCS window.
  2. From inside the multi-agent-torcs directory in one console:
    python playGame.py 3001
  3. From another console:
    python playGame.py 3002
    And so on...

In the game loop in playGame.py, the action a_t at every timestep can be supplied by any algorithm.
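
As a sketch of what "any algorithm" can look like, here is a simple hand-coded policy in the style of the gym-torcs/snakeoil examples. The observation field names (`angle`, `trackPos`) follow the SCR sensor model; the exact interface of playGame.py may differ, so treat this as illustrative.

```python
import math

def heuristic_action(obs):
    """Toy policy: steer toward the track axis.

    obs uses SCR-style sensor names: angle is the heading error in
    radians, trackPos is the lateral position in [-1, 1].
    """
    steer = obs["angle"] * 10.0 / math.pi - obs["trackPos"] * 0.10
    return {"steer": max(-1.0, min(1.0, steer)), "accel": 0.2, "brake": 0.0}

# Each timestep of the game loop then roughly does:
#   obs = observe()              # read sensors from the scr server
#   a_t = heuristic_action(obs)  # any algorithm can supply a_t here
#   act(a_t)                     # send the action back to the server
```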


For single-agent learning:

  1. Start a 'Quick Race' in TORCS in one terminal console. Choose only one scr car and as many traffic cars as you want (preferably chenyi*1 cars, since they are programmed to follow individual lanes at speeds low enough for the agent to learn to overtake)
  2. From inside the multi-agent-torcs directory in one console:
    python playGame_DDPG.py 3001
    or any other port.
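
DDPG implementations for TORCS (e.g. yanpanlau's project, which this one extends) commonly add Ornstein-Uhlenbeck noise to the actor's continuous output for exploration. The sketch below is illustrative, with made-up parameter values, not the exact code in playGame_DDPG.py.

```python
import random

class OUNoise:
    """Ornstein-Uhlenbeck process: mean-reverting noise often used to
    explore continuous actions such as steering in DDPG."""
    def __init__(self, mu=0.0, theta=0.15, sigma=0.2, seed=None):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = mu
        self.rng = random.Random(seed)

    def sample(self):
        # Drift back toward mu, plus a Gaussian perturbation.
        self.state += self.theta * (self.mu - self.state) \
            + self.sigma * self.rng.gauss(0.0, 1.0)
        return self.state

def noisy_steer(actor_steer, noise, low=-1.0, high=1.0):
    """Add exploration noise to the actor's steering output and clip
    it to the valid action range."""
    return max(low, min(high, actor_steer + noise.sample()))
```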

Sample results for a DDPG agent trained to drive in traffic are available here.


Do check out the project wiki for in-depth information about TORCS and getting deep (reinforcement) learning to work on it.


1 The chenyi* cars can be installed from Princeton's DeepDrive project, which also adds a few maps for training and testing the agents. The default cars in TORCS are all heuristically programmed racing agents, which do not serve as good stand-ins for 'traffic'. Hence, using chenyi's code is highly recommended.

Credits

The multi-agent learning simulator was developed by Abhishek Naik, extending ugo-nama-kun's gym-torcs, and yanpanlau's project under the guidance of Anirban Santara, Balaraman Ravindran, and Bharat Kaul, at Intel Labs.