Source code for "Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning" (NeurIPS 2022).
The code is written in Python 3, using PyTorch for the deep networks and OpenAI Gym for the experiment domains.
To install the required dependencies, it is recommended to create a conda or virtual environment and then run the following command:
pip install -r requirements.txt
To conduct policy evaluation, we need to prepare a set of pretrained policies. You can skip this part if you already have the pretrained models in `policy_models/` and the corresponding policy values in `experiments/policy_info.py`.
Train the policy models using REINFORCE in different domains by running:
python policy/reinfoce.py --exp_name {exp_name}
where `{exp_name}` can be MultiBandit, GridWorld, CartPole, or CartPoleContinuous. The parameterized epsilon-greedy policies for MultiBandit and GridWorld can be obtained by running:
python policy/handmade_policy.py
For each policy model, the true policy value is estimated with
python experiments/policy_value.py --policy_name {policy_name} --seed {seed} --n 10e6
This will print the average steps, true policy value, and variance of returns. Make sure you copy these results into the file `experiments/policy_info.py`.
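The exact layout of `experiments/policy_info.py` is not reproduced here; as a rough, hypothetical sketch, each policy could map to the three printed quantities. Consult the existing file for the real structure and field names:

```python
# Hypothetical sketch only -- the real experiments/policy_info.py may use a
# different structure. The numbers below are placeholders for the values
# printed by experiments/policy_value.py.
policy_info = {
    "GridWorld_5000": {
        "avg_steps": 14.2,        # average steps per episode (placeholder)
        "true_value": 0.73,       # estimated true policy value (placeholder)
        "return_variance": 0.05,  # variance of returns (placeholder)
    },
}
```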
If you can use `qsub` or `sbatch`, you can also run `jobs/jobs_value.py` with different seeds in parallel and merge the per-seed estimates by running `experiments/merge_values.py` to get the final policy values.
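If no cluster scheduler is available, a minimal local alternative is to launch the per-seed runs yourself and then merge them. The sketch below assumes the `policy_value.py` command line shown above and that `experiments/merge_values.py` needs no extra arguments; adjust it to your setup.

```python
# Minimal local sketch: run experiments/policy_value.py for several seeds in
# parallel, then merge the per-seed estimates. Assumes the CLI shown above;
# merge_values.py may require additional arguments in practice.
import subprocess

policy_name = "GridWorld_5000"  # example policy model
seeds = range(10)               # number of per-seed estimates (assumption)

procs = [
    subprocess.Popen([
        "python", "experiments/policy_value.py",
        "--policy_name", policy_name,
        "--seed", str(seed),
        "--n", "10e6",
    ])
    for seed in seeds
]
for p in procs:
    p.wait()

# Merge the per-seed estimates into the final policy value.
subprocess.run(["python", "experiments/merge_values.py"], check=True)
```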
The main running script for policy evaluation is `experiments/evaluate.py`. The following command is an example of Monte Carlo estimation for Robust On-Policy Acting with `model_GridWorld_5000.pt`, using seeds 0 to 199:
python experiments/evaluate.py --policy_name GridWorld_5000 --ros_epsilon 1.0 --collectors RobustOnPolicyActing --estimators MonteCarlo --eval_steps "7,14,29,59,118,237,475,951,1902,3805,7610,15221,30443,60886" --seeds "0,199"
To conduct policy evaluation with off-policy data, you need to add the following arguments to the above running command:
--combined_trajectories 100 --combined_ops_epsilon 0.10
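For example, appending these arguments to the GridWorld command above gives:

python experiments/evaluate.py --policy_name GridWorld_5000 --ros_epsilon 1.0 --collectors RobustOnPolicyActing --estimators MonteCarlo --eval_steps "7,14,29,59,118,237,475,951,1902,3805,7610,15221,30443,60886" --seeds "0,199" --combined_trajectories 100 --combined_ops_epsilon 0.10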
If you can use `qsub` or `sbatch`, you may only need to run the script `jobs/jobs.py`, where all experiments in the paper are arranged. The logs will be saved in `log/` and the seed results will be saved in `results/seeds`. Note that we save the data collection cache in `results/data` and re-use it for different value estimations. To merge results of different seeds, run `experiments/merge_results.py`; the merged results will be saved in `results/`.
All experimental data used to generate the plots of the paper can be found in `data/` with the following structure:

- Subdirectories: `data/` contains six subdirectories, out of which four (`bandit`, `Gridworld`, `Cartpole` and `con_cartpole`) contain results for each of the four domains. The other two subfolders (`Gridworld_ms` and `bandit_ms`) contain results for further experiments used to generate figures such as Figures 4c and 4d in the paper.
- File names: the name of a file consists of four parts: domain, number of pre-trainings used to obtain the evaluation policy, sampling method, and estimation method. For example, the `data/bandit` folder contains a file named `MultiBandit_5000_BehaviorPolicyGradient1_OrdinaryImportanceSampling`. As for `data/Gridworld_ms` and `data/bandit_ms`, their file names contain `ms`, referring to mean and scale. A small parsing sketch is shown after this list.
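As an illustration of this naming convention (not a utility that ships with the repository), a file name can be split into its four parts as below, assuming the domain part itself contains no underscore; the `*_ms` files may follow a slightly different pattern.

```python
# Illustrative only: split a result file name into the four parts described
# above. Assumes the domain name contains no underscore.
def parse_result_name(name: str) -> dict:
    domain, n_pretrain, sampler, estimator = name.split("_", 3)
    return {
        "domain": domain,
        "pretrainings": int(n_pretrain),
        "sampling_method": sampler,
        "estimation_method": estimator,
    }

print(parse_result_name(
    "MultiBandit_5000_BehaviorPolicyGradient1_OrdinaryImportanceSampling"
))
# {'domain': 'MultiBandit', 'pretrainings': 5000,
#  'sampling_method': 'BehaviorPolicyGradient1',
#  'estimation_method': 'OrdinaryImportanceSampling'}
```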
Code is provided to reproduce all the figures included in the paper; see the Jupyter notebooks in `plotting/`.
If you use this repository in your work, please consider citing the paper:
@inproceedings{zhong2022robust,
title = {Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning},
author = {Rujie Zhong and Duohan Zhang and Lukas Sch{\"a}fer and Stefano V. Albrecht and Josiah P. Hanna},
booktitle = {Advances in Neural Information Processing Systems},
year = {2022}
}