Wolpertinger Training with DDPG (PyTorch, Multi-GPU/Single-GPU/CPU)

Overview

PyTorch implementation of Wolpertinger training with DDPG, following "Deep Reinforcement Learning in Large Discrete Action Spaces".
The code supports training on multiple GPUs, a single GPU, or the CPU.
It works with both continuous and discrete control environments from OpenAI gym.
In the continuous case, I discretize the action space so that the Wolpertinger-DDPG training algorithm can be applied; a minimal sketch of the resulting action-selection step follows.
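The snippet below is only a minimal sketch of the Wolpertinger selection step, not the repo's actual code: the actor proposes a proto-action, FLANN returns the k nearest discretized actions, and the critic picks the candidate with the highest Q-value. The actor, critic, dimensions, and k used here are hypothetical placeholders.

    import numpy as np
    import torch
    import torch.nn as nn
    from pyflann import FLANN

    state_dim, action_dim, max_actions, k = 3, 1, 200000, 10

    # Discretize a 1-D continuous action range (Pendulum-v0 uses [-2, 2]) into max_actions points.
    discrete_actions = np.linspace(-2.0, 2.0, max_actions, dtype=np.float32).reshape(-1, action_dim)

    # Build a FLANN index over the discrete action set for approximate nearest-neighbor lookup.
    flann = FLANN()
    flann.build_index(discrete_actions, algorithm="kdtree")

    # Hypothetical actor/critic networks; the real ones live in the repo's model code.
    actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim), nn.Tanh())
    critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def wolpertinger_action(state):
        """Actor proto-action -> k nearest discrete actions -> candidate with the highest Q-value."""
        with torch.no_grad():
            s = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)     # (1, state_dim)
            proto = actor(s)                                                 # (1, action_dim), in [-1, 1]
            # (scaling the proto-action to the env's action range is omitted for brevity)
            idx, _ = flann.nn_index(proto.numpy().astype(np.float32), num_neighbors=k)
            candidates = torch.as_tensor(discrete_actions[idx.reshape(-1)])  # (k, action_dim)
            q_values = critic(torch.cat([s.repeat(k, 1), candidates], dim=1)).squeeze(-1)
            return candidates[q_values.argmax()].numpy()

    print(wolpertinger_action(np.zeros(state_dim, dtype=np.float32)))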

Dependencies

  • python 3.6.8
  • torch 1.1.0
  • OpenAI gym
    • If you get a RuntimeError: NotImplementedError in ActionWrapper.step while training with gym, replace your gym/core.py file with the core.py provided in openai/gym.
  • pyflann
    • This is the FLANN library (Muja & Lowe, 2014), whose approximate nearest-neighbor methods allow lookup complexity that is logarithmic in the number of actions. However, the Python binding of FLANN (pyflann) is written for Python 2 and is no longer maintained. Please refer to pyflann for a version of the package compatible with Python 3; just download it and place it in your (virtual) environment. A quick smoke test of the install follows this list.
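
As a quick smoke test that the Python 3 build of pyflann works, you can build an index over some stand-in data and query it; this is only an illustration, not code from the repo.

    import numpy as np
    from pyflann import FLANN

    points = np.random.rand(10000, 4).astype(np.float32)     # stand-in data, not the repo's action set
    flann = FLANN()
    flann.build_index(points, algorithm="kdtree", trees=4)    # build the approximate-NN index once
    query = np.random.rand(1, 4).astype(np.float32)
    indices, dists = flann.nn_index(query, num_neighbors=5)   # fast lookup of the 5 nearest points
    print(indices, dists)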

Usage

  • In Pendulum-v0 (continuous control), discretize the continuous action space into a discrete action space with 200000 actions (see the discretization sketch after this list):
    $ python main.py --env 'Pendulum-v0' --max-actions 200000
    
  • To use CPU only:
    $ python main.py --gpu-ids -1
    
  • To use single-GPU only:
    $ python main.py --gpu-ids 0 --gpu-nums 1
    
  • To use multi-GPU (e.g., use GPU-0 and GPU-1):
    $ python main.py --gpu-ids 0 1 --gpu-nums 2
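
For reference, discretizing Pendulum-v0's action range into a grid of --max-actions points could look like the sketch below. This uses an evenly spaced grid and is only an assumption about the setup; the repo's actual discretization code may differ.

    import gym
    import numpy as np

    max_actions = 200000                                              # corresponds to --max-actions above
    env = gym.make("Pendulum-v0")
    low, high = env.action_space.low[0], env.action_space.high[0]     # -2.0 and 2.0 for Pendulum-v0
    discrete_actions = np.linspace(low, high, max_actions).reshape(-1, 1)
    print(discrete_actions.shape)                                     # (200000, 1) evenly spaced actions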
    

Result

  • Please refer to output for the trained policy and training log.

Project Reference