This repository is the official implementation of our ICML 2023 paper Future-conditioned Unsupervised Pretraining for Decision Transformer. Here is the poster.
To install requirements, run:
conda env create -f env.yml
conda activate pdt
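To verify the installation (assuming the environment file pins gym, d4rl, and the MuJoCo bindings, as D4RL requires), a quick import check such as the following should succeed:

python -c "import gym, d4rl; print(gym.make('hopper-medium-replay-v2'))"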
The D4RL datasets can be downloaded via the following commands:
python data/download_d4rl_gym_datasets.py
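The script name suggests it fetches the Gym locomotion datasets through D4RL's standard interface; below is a minimal sketch of that interface (the exact environment list and output paths are determined by the script itself):

# Minimal sketch of how D4RL gym datasets are typically fetched;
# see data/download_d4rl_gym_datasets.py for the repo's actual logic.
import gym
import d4rl  # importing d4rl registers the offline-RL environments with gym

env = gym.make("hopper-medium-replay-v2")
dataset = env.get_dataset()  # dict of numpy arrays: observations, actions, rewards, terminals, ...
print({k: v.shape for k, v in dataset.items() if hasattr(v, "shape")})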
To pretrain a model, run this command:
python main.py \
    --data_dir /path/to/data \
    --max_pretrain_iters 50 \
    --num_updates_per_pretrain_iter 1000 \
    --max_online_iters 0 \
    --env hopper-medium-replay-v2 \
    --seed 0
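For orientation: the pretraining stage is reward-free, training the model to predict actions conditioned on an embedding of each trajectory's future. The following is a minimal PyTorch sketch of that objective with hypothetical module names (the repo's actual model and loss are defined in its source):

# Illustrative pretraining step (hypothetical names, not the repo's classes):
# encode the future segment into a latent z, then predict actions given z.
import torch

def pretrain_step(policy, future_encoder, optimizer, batch):
    states, actions = batch["states"], batch["actions"]  # (B, T, ...) tensors
    z = future_encoder(states, actions)                  # future embedding, (B, d)
    pred = policy(states, actions, z)                    # future-conditioned action predictions
    loss = torch.mean((pred - actions) ** 2)             # no rewards needed at this stage
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()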
To finetune a pretrained model, run:
python main.py \
    --data_dir /path/to/data \
    --model_path_prefix /path/to/model \
    --model_name model \
    --max_pretrain_iters 0 \
    --online_warmup_samples 10000 \
    --return_warmup_iters 5 \
    --max_online_iters 1500 \
    --num_updates_per_online_iter 300 \
    --env hopper-medium-replay-v2 \
    --seed 0
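Here --model_path_prefix and --model_name locate the pretrained checkpoint, while the warmup flags appear to control how many samples and iterations are spent adapting the return conditioning before online updates begin. Under one reading of the finetuning stage, actions are selected by sampling future embeddings associated with high returns; a hypothetical sketch of that interface (the names below are ours, not the repo's API):

# Hypothetical action-selection sketch: sample a future embedding z that a
# learned return-conditioned prior associates with high return, then act on it.
import torch

@torch.no_grad()
def select_action(policy, return_prior, states, actions, target_return):
    z = return_prior.sample(states, target_return)  # hypothetical prior API
    return policy(states, actions, z)[:, -1]        # action for the latest timestep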
Run the following command to evaluate a model:
python main.py \
    --eval_only \
    --eval_pretrained \
    --data_dir /path/to/data \
    --model_path_prefix /path/to/model \
    --model_name model \
    --env hopper-medium-replay-v2 \
    --seed 0
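Evaluation is reported in D4RL normalized scores; a minimal rollout sketch using D4RL's get_normalized_score (the sampled action below is a stand-in for the trained model's prediction):

# Minimal D4RL evaluation rollout; replace the sampled action with the
# model's predicted action in practice.
import gym
import d4rl

env = gym.make("hopper-medium-replay-v2")
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = env.action_space.sample()  # stand-in for the trained policy
    obs, reward, done, _ = env.step(action)
    total_reward += reward
print(env.get_normalized_score(total_reward) * 100)  # D4RL normalized score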
You can also monitor training with TensorBoard:
tensorboard --logdir /path/to/res
This repository is based on online-dt, which is licensed under CC-BY-NC. We modified the models, data processing, and training/evaluation scripts to fit our needs.
If you use our code or find our work valuable, please cite:
@inproceedings{xie2023future,
  title={Future-conditioned Unsupervised Pretraining for Decision Transformer},
  author={Xie, Zhihui and Lin, Zichuan and Ye, Deheng and Fu, Qiang and Wei, Yang and Li, Shuai},
  booktitle={International Conference on Machine Learning},
  pages={38187--38203},
  year={2023},
  organization={PMLR}
}