This repository contains the source code and dataset for "Lifelike Agility and Play in Quadrupedal Robots using Reinforcement Learning and Generative Pre-trained Models", published in Nature Machine Intelligence, 2024 (link, arXiv). Please refer to the official project page for a general introduction to this work.
To run the code, you will need Python 3.6 or 3.7 with TensorFlow 1.15.0, as well as the TLeague and TPolicies repositories, which are developed for distributed multi-agent reinforcement learning. For more details, please refer to the TLeague paper.
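For example, here is a minimal sketch of an environment setup, assuming you use conda (the environment name `lifelike` is arbitrary; any virtual-environment tool works):

```bash
# Hypothetical setup with conda; tensorflow 1.15.0 requires Python <= 3.7.
conda create -n lifelike python=3.7
conda activate lifelike
pip install tensorflow==1.15.0   # CPU build; use tensorflow-gpu==1.15.0 for GPU training
```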
Please follow these steps to set up the environment:
```bash
git clone https://github.com/tencent-ailab/TLeague.git
git clone https://github.com/tencent-ailab/TPolicies.git
cd TLeague
pip install -e .
cd ..
cd TPolicies
pip install -e .
cd ..
cd lifelike
pip install -e .
```
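To sanity-check the installation, you can try importing the packages. This is a quick check assuming the pip packages expose the module names `tleague`, `tpolicies`, and `lifelike`:

```bash
python -c "import tensorflow as tf; print(tf.__version__)"   # expect 1.15.0
python -c "import tleague, tpolicies, lifelike; print('ok')"
```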
To test the simulation scenarios, you can simply try the following scripts.
PMC for tracking tasks:
```bash
python test_scripts/primitive_level/test_primitive_level_env.py
```
EPMC for traversing tasks:
```bash
python test_scripts/environmental_level/test_environmental_level_env.py
```
SEPMC for the Chase Tag Game:
```bash
python test_scripts/strategic_level/test_strategic_level_env.py
```
The training scripts are provided as `.sh` files in the `train_scripts` folder. The TLeague training pipeline consists of four modules: `model_pool`, `league_mgr`, `learner`, and `actor`, each of which should be run in a separate terminal. The `model_pool` holds all trained and in-training models; the `league_mgr` manages the learner and actor tasks; the `learner` optimizes the current model; and the `actor` runs the agent-environment interaction and generates data. Please refer to the TLeague paper for more details on these modules.
To train PMC, first `cd train_scripts`, then:

1. Open Terminal 1 and run `bash example_pmc_train.sh model_pool`
2. Open Terminal 2 and run `bash example_pmc_train.sh league_mgr`
3. Open Terminal 3 and run `bash example_pmc_train.sh learner`
4. Open Terminal 4 and run `bash example_pmc_train.sh actor`
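If you prefer a single terminal, the four modules can also be launched in one tmux session. This is a hypothetical convenience script, not part of the repo; it assumes tmux is installed and that it is run from the `train_scripts` folder:

```bash
#!/usr/bin/env bash
# Launch the four TLeague modules in one tmux session (session name is arbitrary).
SESSION=pmc_train
tmux new-session -d -s "$SESSION" -n model_pool
tmux send-keys -t "$SESSION:model_pool" 'bash example_pmc_train.sh model_pool' C-m
for module in league_mgr learner actor; do
  tmux new-window -t "$SESSION" -n "$module"
  tmux send-keys -t "$SESSION:$module" "bash example_pmc_train.sh $module" C-m
done
tmux attach -t "$SESSION"
```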
To train EPMC and SEPMC, follow the same steps as for PMC, replacing `example_pmc_train.sh` with `example_epmc_train.sh` or `example_sepmc_train.sh`, respectively. Note that you can launch multiple actors in a distributed manner to speed up data generation.
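For instance, here is a minimal sketch for launching several actors on one host. It assumes `example_pmc_train.sh` reads the learner and model_pool endpoints from its own configuration, so that repeated `actor` invocations all connect to the same learner:

```bash
# Hypothetical: launch 4 actor processes in the background on one machine,
# each logging to its own file; repeat on other machines for more throughput.
for i in $(seq 1 4); do
  bash example_pmc_train.sh actor > "actor_$i.log" 2>&1 &
done
wait
```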
The motion capture data was obtained from a medium-sized Labrador Retriever. The motions include walking, running, jumping, playing, and sitting. The original data is located in `data/raw_mocap_data`. For tracking with a quadrupedal robot, we retargeted the data and generated a mirrored version; both are located in `data/mocap_data`.
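To get a quick overview of the dataset (paths as above; the file format inside may vary), you can list the contents:

```bash
ls data/raw_mocap_data   # original Labrador Retriever clips
ls data/mocap_data       # retargeted and mirrored clips used for tracking
```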
If you find the code and dataset in this repo useful for your research, please cite the paper:
```bibtex
@article{han2024lifelike,
  title={Lifelike Agility and Play in Quadrupedal Robots using Reinforcement Learning and Generative Pre-trained Models},
  author={Lei Han and Qingxu Zhu and Jiapeng Sheng and Chong Zhang and Tingguang Li and Yizheng Zhang and He Zhang and Yuzhen Liu and Cheng Zhou and Rui Zhao and Jie Li and Yufeng Zhang and Rui Wang and Wanchao Chi and Xiong Li and Yonghui Zhu and Lingzhu Xiang and Xiao Teng and Zhengyou Zhang},
  year={2024},
  journal={Nature Machine Intelligence},
  publisher={Nature Publishing Group UK London},
  volume={6},
  doi={10.1038/s42256-024-00861-3},
  url={https://www.nature.com/articles/s42256-024-00861-3},
}
```
This is not an officially supported Tencent product. The code and data in this repository are for research purposes only. No representation or warranty whatsoever, expressed or implied, is made as to their accuracy, reliability, or completeness. We assume no liability and are not responsible for any misuse or damage caused by the code and data. Your use of the code and data is subject to applicable laws and is at your own risk.