Hierarchical-Actor-Critic-Pytorch

Hierarchical Actor-Critic in PyTorch

This repo reproduces the results for the continuous control domains presented in the paper "Learning Multi-Level Hierarchies with Hindsight" (ICLR 2019), in PyTorch. The original TensorFlow repo is at https://github.com/andrew-j-levy/Hierarchical-Actor-Critc-HAC-. This repo is inspired by https://github.com/nikhilbarhate99/Hierarchical-Actor-Critic-HAC-PyTorch; the difference is that this repo uses the domains from the original paper (Ant-Four-Rooms, Ant-Reacher, UR5-Reacher, Inverted-Pendulum), while the other repo uses two custom, simpler domains (one of which is included in this repo as well).

Setup

  • Install with pip3 install -e . (see the sketch below)
  • You will need MuJoCo to run the environments
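
A minimal setup sketch, assuming MuJoCo and mujoco-py are already installed and the repo has been cloned (the directory name is assumed from the repo title):

cd Hierarchical-Actor-Critic-Pytorch
pip3 install -e .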

Training (set --n_layers to 2 or 3)

python3 run_hac.py --n_layers 2 --env hac-inverted-pendulum-v0 --retrain --timesteps 2000000 --seed 0 --group 2-level
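
For example, a 3-level run on the same task (the 3-level group name is assumed to follow the same pattern as the 2-level one):

python3 run_hac.py --n_layers 3 --env hac-inverted-pendulum-v0 --retrain --timesteps 2000000 --seed 0 --group 3-level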

Visualizing the policy saved at results/logs/hac-inverted-pendulum-v0/2-levels/0

python3 run_hac.py --n_layers 2 --env hac-inverted-pendulum-v0 --test --show --timesteps 2000000 --seed 0 --group 2-level

  • Inverted-Pendulum (3 levels)
  • Mountain-Car (2 and 3 levels)
  • UR5-Reacher (2 and 3 levels)
  • Ant-Four-Rooms (2 and 3 levels)
  • Ant-Reacher (2 and 3 levels)

Learning Curves (logged by wandb)
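
The --group flag passed to run_hac.py presumably maps to the wandb run group, so the 2-level and 3-level runs for each domain can be compared side by side in the wandb dashboard.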

  • Inverted-Pendulum
  • Mountain-Car
  • UR5-Reacher
  • Ant-Four-Rooms
  • Ant-Reacher

Saved policies are stored in saved_policies/

To replay a saved policy: create a folder results/logs/hac/hac-ant-reacher-v0/2-levels and copy a saved policy into it, e.g., the whole 0/ folder from saved_policies/ant-reacher-2-levels. Then run the command shown after the sketch below; the --seed value must match the name of the copied folder.
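
A minimal sketch of the copy step in a POSIX shell, assuming the paths above:

mkdir -p results/logs/hac/hac-ant-reacher-v0/2-levels
cp -r saved_policies/ant-reacher-2-levels/0 results/logs/hac/hac-ant-reacher-v0/2-levels/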

python3 run_hac.py --n_layers 2 --env hac-ant-reacher-v0 --test --show --timesteps 2000000 --seed 0 --group 2-level