GR1-Training

A generalized policy for robotics manipulation

A variant of GR-1: "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation". It performs well on the CALVIN manipulation benchmark without using depth information. The original implementation is here, but it does not include a training script.

This variant achieves higher performance than the original implementation on the CALVIN benchmark (current SOTA on the ABC->D scenario; I may test ABCD->D later). The details, data, multi-GPU training, and evaluation code are fully open-source.

I will try to redo the pre-training with my version of the DROID dataset, which is only ~350 GB (after compression) and comes in an easy-to-use HuggingFace or LMDB format. I am also working on merging this method into the HuggingFace LeRobot library.

After being pre-trained on a diverse video generation dataset, it can easily be used as a generalized policy by the community:

  • Like Octo, the image query token and the action query token do not influence other tokens, so the policy can easily be adapted to a different action space or output (see the attention-mask sketch after this list).
  • Unlike Octo, this policy is in PyTorch. It is not a diffusion policy and the implementation is simpler. The policy is pretrained with video generation instead of action generation, so you can use much more Internet data!
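
The isolation of query tokens can be expressed as an attention mask. Below is a minimal sketch of the idea (my own illustration with made-up token counts, not the actual GR-1 code): the query tokens may attend to the observation tokens, but nothing attends to the queries, so adding or replacing query tokens leaves the other tokens' features unchanged.

import torch

# Hypothetical token layout for one timestep: n_obs observation tokens
# followed by an image query token and an action query token.
n_obs, n_query = 10, 2
n_tokens = n_obs + n_query

# attn_mask[i, j] = True means token i may attend to token j.
attn_mask = torch.zeros(n_tokens, n_tokens, dtype=torch.bool)
attn_mask[:, :n_obs] = True                                        # every token attends to the observation tokens
attn_mask[n_obs:, n_obs:] = torch.eye(n_query, dtype=torch.bool)   # each query token also attends to itself

# Observation tokens never attend to the query tokens, so the queries cannot
# change the observation features. torch.nn.MultiheadAttention uses the
# opposite convention (True = masked out), so invert the mask before use.
mha_attn_mask = ~attn_mask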

Please remember I build systems for you ヾ(^▽^*)). Feel free to ask me if you have any questions!

Please also star and cite this repository (and of course the original implementation) if you find it helpful!

News

[In Progress] Pre-training, finetuning the visual encoder, and modifying the architecture + loss while keeping compatibility with Bytedance's weights...

[2024.5.28] Released GR-Chunk, which has higher performance. Specifically, the following are updated:

  • The actions predicted by GR-Chunk have shape (sequence length, action dim), which improves the average length from 3.25 to 3.556 on CALVIN's ABC->D scenario (I uploaded the log to the evaluation_logs folder). See the method section. However, you can always load Bytedance's weights and use their settings by modifying configs.json.
  • This implementation can be used directly for multi-GPU training and evaluation. I ran it on 12*4090 GPUs, but it can easily be scaled if you have more computing resources.
  • Unlike the original implementation, GR-Chunk does not have old dependencies like transformers==4.5.1. The other dependencies mainly come from CALVIN, so you can discard them if you use other environments.
  • I use the same image shifting approach as the original implementation. The hyper-parameters (except for chunking) are as close as possible.
  • Added an independent evaluation script and modified some APIs.

[2024.5.16] Initial data, checkpoint, training & evaluation code released.

Installation

Set up a conda environment and install the CALVIN benchmark. Note that you can skip this step if you don't want to use the CALVIN simulation (in that case, just install my repository).

source activate
conda create -n gr python=3.8
conda activate gr
git clone --recurse-submodules https://github.com/mees/calvin.git
pip install setuptools==57.5.0
cd calvin
cd calvin_env; git checkout main
cd ../calvin_models
sed -i 's/pytorch-lightning==1.8.6/pytorch-lightning/g' requirements.txt
sed -i 's/torch==1.13.1/torch/g' requirements.txt
cd ..
sh ./install.sh
cd ..

Install this repository:

git clone https://github.com/EDiRobotics/GR1-Training
cd ./GR1-Training
pip install -r requirements.txt
apt-get install -y libegl1-mesa libegl1
apt-get install -y libgl1
apt-get install -y libosmesa6-dev
apt-get install -y patchelf

Prepare Dataset

You do not need to download any datasets if you just want to evaluate the checkpoint.

If you want to train it, the original CALVIN dataset is too large (~500GB for each task) and contains trajectory data without any language annotations. Here, we only use the trajectories with annotations for training (the same as GR-1 and 3D Diffuser Actor).

As an example, let's download the CALVIN debug dataset (1.3GB) and convert it to our LMDB format.

wget http://calvin.cs.uni-freiburg.de/dataset/calvin_debug_dataset.zip
unzip calvin_debug_dataset.zip
python calvin2lmdb.py --input_dir ./calvin_debug_dataset --output_dir ./calvin_lmdb

You can also download the processed LMDB dataset (ABC->D split) from Huggingface. The LMDB dataset only takes ~23GB, while the original ABC->D split takes 517GB. In this example, I use the hfd.sh tool from HF-Mirror. You can set the environment variable export HF_ENDPOINT=https://hf-mirror.com to avoid connection problems in some regions.

rm -rf calvin_lmdb
apt install git-lfs aria2
wget https://hf-mirror.com/hfd/hfd.sh
chmod a+x hfd.sh
./hfd.sh StarCycle/calvin_lmdb --dataset --tool aria2c -x 4
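
If you want to sanity-check the converted dataset, here is a minimal sketch using the lmdb package. The key names depend on calvin2lmdb.py, so the snippet only lists whatever keys it finds instead of assuming their format:

import lmdb

# Open the converted dataset read-only; readahead=False is friendlier to random access.
env = lmdb.open('./calvin_lmdb', readonly=True, lock=False, readahead=False)
with env.begin() as txn:
    print('number of entries:', txn.stat()['entries'])
    with txn.cursor() as cursor:
        for i, (key, value) in enumerate(cursor):
            print(key, len(value), 'bytes')  # inspect keys and raw value sizes
            if i >= 9:
                break
env.close()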

Configure HuggingFace Accelerate & Set Up CALVIN Simulation

To configure accelerate, run this command and follow its guidance:

accelerate config

To set up CALVIN, use

export CALVIN_ROOT=<path to calvin folder>

Prepare Weights

You need to download the weights of the ViT visual encoder. The ViT weights are here. The model will also automatically download the CLIP weights when it is loaded.

For the policy weights, you can use Bytedance's weights for CALVIN's ABC->D or my weights. My weights (chunk size==1 or 10) are within the HuggingFace dataset mentioned above.

If you choose to use Bytedance's weights, please set the following in configs.json. My implementation is always compatible with Bytedance's weights, no matter what chunk size you choose.

"bytedance_ckpt_path": "<path to the weight>",
"load_bytedance_ckpt": true,
"mae_ckpt": "<path to ViT weight>",

If you choose to use my weights, please set

"load_bytedance_ckpt": false,
"load_epoch": <the epoch of checkpoint you want>,
"save_path": "<the folder you save the checkpoints & log>",
"mae_ckpt": "<path to ViT weight>",

It loads GR1_<the epoch you select>.pth from ./Save/. Notice that my weights for different chunk_size values are incompatible with each other, but you can slightly modify state_dict['action_chunk_queries.weight'] to convert them, as sketched below.
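
For example, here is a minimal sketch of one way to adapt a checkpoint to a larger chunk_size. It assumes action_chunk_queries is an embedding whose weight has shape (chunk_size, hidden_dim); the epoch number and output filename are hypothetical, and you should check the actual checkpoint layout and shape first.

import torch

ckpt = torch.load('./Save/GR1_20.pth', map_location='cpu')  # hypothetical epoch 20
state_dict = ckpt.get('state_dict', ckpt)                    # the layout may differ; check your checkpoint

old = state_dict['action_chunk_queries.weight']              # assumed shape: (old_chunk_size, hidden_dim)
new_chunk_size = 10
# One simple option: repeat the mean of the old query embeddings along the chunk dimension.
state_dict['action_chunk_queries.weight'] = old.mean(dim=0, keepdim=True).repeat(new_chunk_size, 1)

torch.save(ckpt, './Save/GR1_20_chunk10.pth')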

Evaluation

Remember to set these in configs.json (you do not have to use my hyper-parameters):

"record_evaluation_video": "<true or false>",
"bytedance_ckpt_path": "<path to the Bytedance's weight>",
"load_bytedance_ckpt": <true or false>,
"load_epoch": <the epoch of checkpoint you want>,
"num_sequences": <how many episodes you want to simulate, I use 1000>,
"ep_len": <maximum step number of a task in an episode>,
"chunk_size": <action chunk size of the network>,
"test_chunk_size": <the action chunk size you actually execute>,

Then simply run

accelerate launch evaluate_calvin.py

It will run N simulations in parallel on N GPUs (depending on what you set in accelerate config). If you choose to record the evaluation videos, they will be saved in the eval_<GPU id> folder under the save_path you specify.
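
Under the hood, parallel evaluation just means each process handles its own slice of the episodes. A minimal sketch of the idea (not the actual code in evaluate_calvin.py):

from accelerate import Accelerator

accelerator = Accelerator()
num_sequences = 1000                 # total number of episodes, as set in configs.json
all_episodes = list(range(num_sequences))

# Each GPU/process evaluates every num_processes-th episode.
my_episodes = all_episodes[accelerator.process_index::accelerator.num_processes]
print(f'rank {accelerator.process_index} evaluates {len(my_episodes)} episodes')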

Training

After setting configs.json, you can simply launch training with

accelerate launch Main.py

If "evaluate_during_training": true in configs.json, then it will evaluate the policy after every epoch of training.

GR-Chunk Method

Let's assume sequence_len==4, chunk_size==3, test_chunk_size==2, and the current timestep is 4. The input and output of the original GR-1 are simply (s1, s2, s3, s4) and (a1, a2, a3, a4), respectively. In the environment, we take the predicted action a4.

By contrast, the output of GR-Chunk is ((a1, a2, a3), (a2, a3, a4), (a3, a4, a5), (a4, a5, a6)). In the environment, we take the predicted actions a4 and a5 in consecutive timesteps, and then run the policy again to predict future actions.
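
The rollout loop with chunking looks roughly like the sketch below. The policy.predict interface is hypothetical; it stands for one forward pass that returns the action chunk for the current timestep, with shape (chunk_size, action_dim).

def rollout(policy, env, max_steps, test_chunk_size=1):
    """Execute the first test_chunk_size actions of each predicted chunk, then re-plan."""
    obs = env.reset()
    info, step = {}, 0
    while step < max_steps:
        # policy.predict is a hypothetical interface returning an action chunk
        # of shape (chunk_size, action_dim) for the current timestep.
        action_chunk = policy.predict(obs)
        for action in action_chunk[:test_chunk_size]:
            obs, reward, done, info = env.step(action)
            step += 1
            if done or step >= max_steps:
                return info
    return info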

A similar approach is taken in Meta's recent 4-token prediction LLM, the ACT/ALOHA work of @tonyzhaozh, the Diffusion Policy of @cheng-chi, and Octo.

In my experiments, the temporal ensembling of ACT does not improve the success rate. Octo reaches a similar conclusion. As reported by @tonyzhaozh, temporal ensembling seems to work better with small datasets.
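
For reference, temporal ensembling averages all chunk predictions that target the current timestep with exponentially decaying weights. A minimal sketch of the idea (following the weighting convention described in the ACT paper; again, this did not help in my experiments):

import numpy as np

def temporal_ensemble(predictions_for_t, m=0.01):
    """Average all actions predicted for timestep t, ordered from oldest to newest chunk.
    With m > 0, older predictions get larger weights (the ACT convention)."""
    preds = np.stack(predictions_for_t)            # shape (num_chunks, action_dim)
    weights = np.exp(-m * np.arange(len(preds)))   # exponentially decaying weights
    weights /= weights.sum()
    return (weights[:, None] * preds).sum(axis=0)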

The best test_chunk_size seems to be 1 when chunk_size==10; see the following ablation:

Configuration                        Avg. Len on ABC->D
chunk_size=1,  test_chunk_size=1     3.257
chunk_size=10, test_chunk_size=1     3.556
chunk_size=10, test_chunk_size=2     2.145

Detailed Analysis of the Original Network and API

Please refer to this issue.

Training Curves

I first finetune the policy with chunk_size==1 and then keep finetuning it with chunk_size==10. Each phase takes 20 epochs. I guess you can directly train it with chunk_size==10.

I do not evaluate the policy during training because the server I use now does not have Nvidia EGL support.

First finetune it with chunk_size==1 (8*4090 GPU, batch size per GPU 22, gradient accumulation step 3):

[Training curves: static camera reconstruction loss, wrist camera reconstruction loss, arm action loss, gripper action loss, learning rate]

Then finetune it with chunk_size==10 (8*4090 GPU, batch size per GPU 19, gradient accumulation step 3):

[Training curves: static camera reconstruction loss, wrist camera reconstruction loss, arm action loss, gripper action loss, learning rate]

Notice

  • I have not trained it from scratch yet (according to the authors, the Ego4D pre-training takes 4 days on 32 V100 16GB GPUs, and the CALVIN finetuning takes 1 day on 32 V100 16GB GPUs). I just finetuned their checkpoint to adapt it to my training pipeline.
  • When I measured the success rate of Bytedance's weights, I got an average length of 3.25 instead of the 3.06 reported in their original paper. Perhaps they did not fix this issue in the CALVIN benchmark, while 3D Diffuser Actor fixed it.

Acknowledgement

Great thanks to @bdrhtw for making it open-source!

Feel Free to Contact Me!

Email: zhuohengli@foxmail.com

Find Zhuoheng Li in the HuggingFace LeRobot server: Discord (trying to merge this repo into LeRobot).

WeChat group:

Or feel free to open an issue here.