GAC

Code accompanying the NeurIPS 2019 paper "Distributional Policy Optimization: An Alternative Approach for Continuous Control".

Primary language: Python. License: GNU General Public License v3.0 (GPL-3.0).

This repo contains the code for the implementation of Distributional Policy Optimization: An Alternative Approach for Continuous Control (NeurIPS 2019). The theoretical framework is named DPO (Distributional Policy Optimization), whereas the Deep Learning approach to attaining it is named GAC (Generative Actor Critic).

How to run

An example of how to run the code is provided below. The exact hyper-parameters used for each domain are provided in the appendix of the paper.

python main.py --visualize --env-name Hopper-v2 --training_actor_samples 32 --noise normal --batch_size 128 --noise_scale 0.2 --print --num_steps 1000000 --target_policy exponential --train_frequency 2048 --replay_size 200000
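For reference, a minimal sketch of an argument parser covering the flags in the command above is shown next. The flag names and values are taken directly from that command; the defaults and per-flag descriptions are assumptions for illustration, not the repository's actual main.py.

import argparse

# Illustrative parser for the flags used in the example command above.
# Descriptions and defaults are assumptions, not the repository's actual ones.
parser = argparse.ArgumentParser(description="GAC training (illustrative sketch)")
parser.add_argument("--env-name", default="Hopper-v2", help="Gym MuJoCo environment id")
parser.add_argument("--training_actor_samples", type=int, default=32, help="action samples drawn per actor update")
parser.add_argument("--noise", default="normal", help="exploration noise type")
parser.add_argument("--noise_scale", type=float, default=0.2, help="exploration noise scale")
parser.add_argument("--batch_size", type=int, default=128, help="minibatch size sampled from the replay buffer")
parser.add_argument("--num_steps", type=int, default=1000000, help="total environment steps")
parser.add_argument("--target_policy", default="exponential", help="target (improving) action distribution")
parser.add_argument("--train_frequency", type=int, default=2048, help="environment steps between training phases")
parser.add_argument("--replay_size", type=int, default=200000, help="replay buffer capacity")
parser.add_argument("--visualize", action="store_true", help="plot training curves to a visdom server")
parser.add_argument("--print", action="store_true", help="print progress to stdout")
args = parser.parse_args()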

Visualizing

You may visualize the run by adding the flag --visualize and starting a visdom server as follows:

python3.6 -m visdom.server
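Once the server is running, the training curves can be viewed in a browser at http://localhost:8097 (visdom's default address).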

Requirements

At a minimum, the commands above assume Python 3.6, OpenAI Gym with the MuJoCo continuous-control environments (e.g. Hopper-v2), and visdom for visualization.

Performance

The graphs below are taken from the paper and compare the performance of our proposed method against various baselines; the best-performing variant is the autoregressive network.

[Figure: performance comparison graphs from the paper]