Xuangeng Chu<sup>1,2</sup>, Yu Li<sup>2</sup>, Ailing Zeng<sup>2</sup>, Tianyu Yang<sup>2</sup>, Lijian Lin<sup>2</sup>, Yunfei Liu<sup>2</sup>, Tatsuya Harada<sup>1,3</sup>

<sup>1</sup>The University of Tokyo, <sup>2</sup>International Digital Economy Academy (IDEA), <sup>3</sup>RIKEN AIP
GPAvatar reconstructs controllable 3D head avatars from one or several images in a single forward pass.
More results can be found on our Project Page.
Install step by step
conda create -n track python=3.9
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
pip3 install mediapipe tqdm rich lmdb einops colored ninja av opencv-python scikit-image onnxruntime-gpu onnx transformers pykalman
pip3 install pytorch-lightning==2.1.3
<!-- pip3 install git+https://github.com/nerfstudio-project/nerfacc.git -->
pip3 install nerfacc==0.5.3 -f https://nerfacc-bucket.s3.us-west-2.amazonaws.com/whl/torch-2.0.0_cu118.html
Install with environment.yml (recommended)
conda env create -f environment.yml
conda activate GPAvatar
pip3 install nerfacc==0.5.3 -f https://nerfacc-bucket.s3.us-west-2.amazonaws.com/whl/torch-2.0.0_cu118.html
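Whichever install route you use, a quick import check is a handy way to confirm the environment is usable before moving on. This is an optional sketch, not part of the official setup; it only assumes the packages installed above.

```bash
# Optional sanity check: the core dependencies import and CUDA is visible.
# Use `conda activate track` instead if you installed step by step.
conda activate GPAvatar
python3 -c "import torch, pytorch3d, nerfacc; print('CUDA available:', torch.cuda.is_available())"
```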
Run with Dockerfile
If your environment has issues that you cannot resolve, use the Dockerfile from https://github.com/xg-chu/lightning_track as a last resort.
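For reference, a typical build-and-run pattern for that Dockerfile looks roughly like the sketch below. The image tag gpavatar and the mounted path are placeholders, not part of the official instructions, so adapt them to your setup.

```bash
# Hypothetical usage of the lightning_track Dockerfile; tag and mount path are placeholders.
docker build -t gpavatar -f Dockerfile .
docker run --gpus all --rm -it -v "$(pwd)":/workspace gpavatar bash
```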
Build resources with `bash ./build_resources.sh`.
Download the model checkpoint and put it at `checkpoints/one_model.ckpt`.
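Before running inference, it can help to confirm the file is where the commands below expect it; an optional check is:

```bash
# The inference commands below read the checkpoint from this path.
ls -lh ./checkpoints/one_model.ckpt
```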
Driven by images:
python inference.py -r ./checkpoints/one_model.ckpt --driver ./demos/drivers/pdriver --input ./demos/examples/real1
or driven by video:
python inference.py -r ./checkpoints/one_model.ckpt --driver ./demos/drivers/vdriver1 --input ./demos/examples/art1 -v
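If you want to drive several example identities with the same driver, the documented flags compose naturally into a shell loop. The sketch below is illustrative only and assumes your inputs live in per-identity folders under ./demos/examples, as in the commands above.

```bash
# Illustrative batch run: reuse one video driver for every example folder.
for input_dir in ./demos/examples/*/; do
    python inference.py -r ./checkpoints/one_model.ckpt \
        --driver ./demos/drivers/vdriver1 --input "${input_dir}" -v
done
```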
If you find our work useful in your research, please consider citing:
@inproceedings{chu2024gpavatar,
  title={{GPA}vatar: Generalizable and Precise Head Avatar from Image(s)},
  author={Xuangeng Chu and Yu Li and Ailing Zeng and Tianyu Yang and Lijian Lin and Yunfei Liu and Tatsuya Harada},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=hgehGq2bDv}
}
Parts of our work are built on FLAME, StyleMatte, EMOCA and MICA. The GPAvatar logo was designed by Caihong Ning. We thank the authors for sharing their wonderful code and work.
- FLAME: https://flame.is.tue.mpg.de
- StyleMatte: https://github.com/chroneus/stylematte
- EMOCA: https://github.com/radekd91/emoca
- MICA: https://github.com/Zielon/MICA