Under Development
This repo contains code for training RL agents with adversarial disturbance agents, as described in our work on Robust Adversarial Reinforcement Learning (RARL). We build heavily on the OpenAI rllab repo.
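For intuition, RARL alternates between improving the protagonist against a fixed adversary and improving the adversary against a fixed protagonist, with zero-sum rewards. Below is a minimal self-contained sketch of that alternation on a toy 1-D system; the dynamics, linear policies, and random-search updates are illustrative stand-ins for the rllab optimizers used here, not this repo's API:

```python
import numpy as np

# Toy 1-D system: the state is pushed by protagonist action u and
# adversary disturbance d. The protagonist wants the state near 0;
# the adversary wants the opposite (zero-sum).
def step(x, u, d):
    return x + 0.1 * (u + d) + 0.01 * np.random.randn()

def rollout(x0, protagonist, adversary, horizon=50):
    x, ret = x0, 0.0
    for _ in range(horizon):
        x = step(x, protagonist(x), adversary(x))
        ret += -x ** 2  # protagonist reward; adversary receives its negative
    return ret

# Linear policies, each defined by a single gain.
def make_policy(k):
    return lambda x: k * x

k_pro, k_adv = 0.0, 0.0
for it in range(20):
    # Phase 1: hold the adversary fixed, improve the protagonist
    # (random search as a stand-in for a policy-gradient step).
    best = rollout(1.0, make_policy(k_pro), make_policy(k_adv))
    for cand in k_pro + 0.1 * np.random.randn(10):
        r = rollout(1.0, make_policy(cand), make_policy(k_adv))
        if r > best:
            k_pro, best = cand, r
    # Phase 2: hold the protagonist fixed, improve the adversary
    # by maximizing the negative of the protagonist's return.
    best = -rollout(1.0, make_policy(k_pro), make_policy(k_adv))
    for cand in k_adv + 0.1 * np.random.randn(10):
        r = -rollout(1.0, make_policy(k_pro), make_policy(cand))
        if r > best:
            k_adv, best = cand, r
print("protagonist gain %.3f, adversary gain %.3f" % (k_pro, k_adv))
```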
Since we build upon the rllab package for the optimizers, the installation process is similar to rllab's
manual installation. Most of the packages are installed in the rllab3-adv anaconda virtual
environment.
- Dependencies for scipy:
sudo apt-get build-dep python-scipy
- Install python modules:
conda env create -f environment.yml
- Add rllab-adv to your PYTHONPATH.
export PYTHONPATH=<PATH_TO_RLLAB_ADV>:$PYTHONPATH
# Enter the anaconda virtual environment
source activate rllab3-adv
# Train on InvertedPendulum
python adversarial/scripts/train_adversary.py --env InvertedPendulumAdv-v1 --folder ~/rllab-adv/results
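The *Adv-v1 environments augment the standard MuJoCo tasks with a second action channel through which the adversary applies disturbance forces. As a rough illustration of the idea only (the class, method names, and scaling below are assumptions, not this repo's environment code), such a two-player environment can be sketched as a wrapper:

```python
import numpy as np

class AdversarialWrapper:
    """Illustrative two-player wrapper: the protagonist and adversary both
    act every step, and the adversary's action enters as a bounded
    disturbance. Names and scaling here are assumptions, not the actual
    rllab-adv implementation."""

    def __init__(self, env, adv_scale=0.25):
        self.env = env
        self.adv_scale = adv_scale  # cap on disturbance magnitude

    def step(self, pro_action, adv_action):
        # Bound the adversary so it can perturb but not dominate.
        disturbance = self.adv_scale * np.clip(adv_action, -1.0, 1.0)
        obs, reward, done, info = self.env.step(pro_action + disturbance)
        # Zero-sum: the adversary is rewarded for hurting the protagonist.
        return obs, reward, -reward, done, info
```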
For questions, contact Lerrel Pinto -- lerrelpATcsDOTcmuDOTedu.