This is the code for "TapMo: Shape-aware Motion Generation of Skeleton-free Characters" by Jiaxu Zhang et al.
TapMo is a text-based animation pipeline for generating motion in a wide variety of skeleton-free characters.
- Inference code
- Training code
- Python >= 3.7
- PyTorch >= 1.4
- PyTorch Geometric (torch-geometric)
conda create python=3.8 --name tapmo
conda activate tapmo
Install PyTorch 2.1.0, then install the remaining packages listed in requirements.txt:
pip install -r requirements.txt
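If requirements.txt does not already pin PyTorch and PyTorch Geometric, they can be installed explicitly. The exact wheel depends on your CUDA setup, so the lines below are only a sketch; torch and torch-geometric are the standard PyPI package names, and the version pin mirrors the PyTorch 2.1.0 recommendation above:

pip install torch==2.1.0
pip install torch-geometric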
Download the processed datasets and the required files (pretrained weights and dependencies) from Google Drive, then extract them into the repository:
cd TapMo
unzip datasets.zip -d ./
unzip weights.zip -d ./
unzip deps.zip -d ./shape_diffusion
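After extraction, the working tree referenced by the demo commands looks roughly as follows. This layout is inferred from the unzip destinations and the paths used in the commands below, so the exact contents may differ:

TapMo/
├── datasets/                      # processed training data
├── weights/
│   ├── diffusion_model_latest.pt
│   └── handle_predictor_latest.pth
├── shape_diffusion/               # text-to-handle-motion diffusion model (deps.zip extracted here)
├── handle_predictor/              # handle prediction and motion-to-mesh retargeting
└── demo/
    ├── shape_features/001.npy
    ├── mesh/001.obj
    ├── motion/                    # generated handle motions (output of step 1)
    └── results/                   # retargeted meshes (output of step 2)

The demo is a two-step pipeline: the diffusion model first generates text-conditioned handle motion for the target character, and the handle predictor then maps that motion onto the character's mesh.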
cd shape_diffusion
python3 -m sample.generate_handle_motion --model_path ../weights/diffusion_model_latest.pt --arch trans_dec --emb_trans_dec False --dataset t6d_mixrig --char_feature_path ../demo/shape_features/001.npy --save_path ../demo/motion/motion_ --text_prompt "walk forward and turn right."
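This writes the generated handle motion to ../demo/motion/ following the --save_path prefix (the next step reads motion_0.npz from there). To quickly check which arrays a generated file contains, a NumPy one-liner run from inside shape_diffusion/ is enough (NumPy is installed via requirements.txt):

python -c "import numpy as np; print(np.load('../demo/motion/motion_0.npz').files)"

The second step starts from the repository root, so cd back out of shape_diffusion/ first.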
cd handle_predictor
python -m motion_to_mesh --ckpt_path ../weights/handle_predictor_latest.pth --motion_path ../demo/motion/motion_0.npz --tgt_mesh_path ../demo/mesh/001.obj --save_dir ../demo/results/001
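If you have several demo characters, both steps can be wrapped in a small shell loop run from the repository root. This is only a sketch: it assumes each character id has both demo/shape_features/&lt;id&gt;.npy and demo/mesh/&lt;id&gt;.obj, and that --save_path behaves as a prefix to which the sampler appends an index and .npz (as motion_ becomes motion_0.npz above):

for id in 001 002; do
  (cd shape_diffusion && python3 -m sample.generate_handle_motion --model_path ../weights/diffusion_model_latest.pt --arch trans_dec --emb_trans_dec False --dataset t6d_mixrig --char_feature_path ../demo/shape_features/${id}.npy --save_path ../demo/motion/${id}_motion_ --text_prompt "walk forward and turn right.")
  (cd handle_predictor && python -m motion_to_mesh --ckpt_path ../weights/handle_predictor_latest.pth --motion_path ../demo/motion/${id}_motion_0.npz --tgt_mesh_path ../demo/mesh/${id}.obj --save_dir ../demo/results/${id})
done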
Please cite our paper if you use this repository:
@inproceedings{zhang2024tapmo,
  title     = {TapMo: Shape-aware Motion Generation of Skeleton-free Characters},
  author    = {Zhang, Jiaxu and Huang, Shaoli and Tu, Zhigang and Chen, Xin and Zhan, Xiaohang and Yu, Gang and Shan, Ying},
  booktitle = {The Twelfth International Conference on Learning Representations ({ICLR})},
  year      = {2024},
}
We borrowed part of the code from the following projects: