GPAvatar

[ICLR 2024] Generalizable and Precise Head Avatar from Image(s)


Xuangeng Chu1,2, Yu Li2, Ailing Zeng2, Tianyu Yang2, Lijian Lin2, Yunfei Liu2, Tatsuya Harada1,3
1The University of Tokyo, 2International Digital Economy Academy (IDEA), 3RIKEN AIP

🤩ICLR 2024🤩

GPAvatar reconstructs controllable 3D head avatars from one or several images in a single forward pass.
More results can be found on our Project Page.

Installation

Install step by step
conda create -n track python=3.9
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
pip3 install mediapipe tqdm rich lmdb einops colored ninja av opencv-python scikit-image onnxruntime-gpu onnx transformers pykalman
pip3 install pytorch-lightning==2.1.3
<!-- pip3 install git+https://github.com/nerfstudio-project/nerfacc.git -->
pip3 install nerfacc==0.5.3 -f https://nerfacc-bucket.s3.us-west-2.amazonaws.com/whl/torch-2.0.0_cu118.html
Install with environment.yml (recommended)
conda env create -f environment.yml
conda activate GPAvatar
pip3 install nerfacc==0.5.3 -f https://nerfacc-bucket.s3.us-west-2.amazonaws.com/whl/torch-2.0.0_cu118.html
Run with Dockerfile
If your environment has unknown or unresolvable issues, please use the Dockerfile in https://github.com/xg-chu/lightning_track as a fallback solution.
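
Whichever installation route you take, a quick sanity check can confirm that the CUDA build of PyTorch, pytorch3d and nerfacc are importable (a minimal sketch, not part of the repo):

# Quick post-install sanity check (sketch only, not part of the repo).
import torch
import pytorch3d
import nerfacc
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__, "| nerfacc:", nerfacc.__version__)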

Preparation

Build resources with bash ./build_resources.sh.

Download the model checkpoint and put it at checkpoints/one_model.ckpt.

Quick Start

Driven by images:

python inference.py -r ./checkpoints/one_model.ckpt --driver ./demos/drivers/pdriver --input ./demos/examples/real1

or driven by video:

python inference.py -r ./checkpoints/one_model.ckpt --driver ./demos/drivers/vdriver1 --input ./demos/examples/art1 -v

Fast Inference

Please refer to inference_ready2go.py for quick inference tools. It provides scripts for easy inference given only expression features, or given input and target images.

How to build the dataset for training

Build dataset

In lightning_track, there is a track_lmdb.py script that can easily track expressions across a large number of discontinuous images. (We recommend sampling discontinuous frames from each video when building the dataset, to avoid drawing adjacent frames with overly similar expressions during training.)
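
For example, one simple way to pre-sample discontinuous frames from a video before tracking (a minimal sketch using OpenCV; the stride, paths and naming below are placeholders, not part of the repo):

# Sample every STRIDE-th frame from a video into an image folder (placeholder paths).
import os
import cv2

VIDEO_PATH = "clip_007914.mp4"   # placeholder input video
STRIDE = 30                      # roughly one frame per second for a 30 fps video
os.makedirs("frames", exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
frame_id, kept = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_id % STRIDE == 0:
        cv2.imwrite(f"frames/img_007914_{kept}.jpg", frame)
        kept += 1
    frame_id += 1
cap.release()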

Building the dataset used for training requires img_lmdb, dataset.pkl and camera.json.

img_lmdb:
'img_007914_79' : image # refer to lmdb_utils.py, which also provides an API to build an lmdb. 007914 is the video id (used when sampling), 79 is the frame id.
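
As a reference, writing JPEG-encoded frames into such an LMDB could look like this (a rough sketch with the lmdb package; the actual API in lmdb_utils.py may differ):

# Write one frame into img_lmdb under the key img_<video id>_<frame id> (sketch only).
import cv2
import lmdb

env = lmdb.open("img_lmdb", map_size=1 << 40)  # generous map size, adjust as needed
with env.begin(write=True) as txn:
    image = cv2.imread("frames/img_007914_79.jpg")
    ok, encoded = cv2.imencode(".jpg", image)
    assert ok
    txn.put(b"img_007914_79", encoded.tobytes())
env.close()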

dataset.pkl:
- dict_keys(['records', 'meta_info'])
    - "records": dict_keys(['img_007914_79', 'img_007914_99', ...])
        - "img_007914_79": dict_keys(['bbox', 'kps', 'mica_shape', 'emoca_expression', 'emoca_pose', 'transform_matrix']) # can be obtained from the lightning.pkl generated by lightning_track
    - "meta_info": dict_keys(['train', 'val', 'test'])
        - "train": list(['img_004997_80', 'img_000630_159', ...])
        - "val": ...
        - "test": ...
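
A hedged sketch of assembling dataset.pkl from per-frame tracking results (the structure of lightning.pkl and the train/val/test split sizes below are assumptions, not the repo's exact format):

# Build dataset.pkl from lightning_track output (assumed layout, placeholder split sizes).
import pickle

with open("lightning.pkl", "rb") as f:
    tracked = pickle.load(f)  # assumed: {frame_key: {per-frame tracking fields}}

fields = ["bbox", "kps", "mica_shape", "emoca_expression", "emoca_pose", "transform_matrix"]
records = {key: {field: item[field] for field in fields} for key, item in tracked.items()}

keys = sorted(records.keys())
dataset = {
    "records": records,
    "meta_info": {"train": keys[:-200], "val": keys[-200:-100], "test": keys[-100:]},
}
with open("dataset.pkl", "wb") as f:
    pickle.dump(dataset, f)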

camera.json:
{"flame_scale": 5.0, "focal_length": 12.0, "principal_point": [0.0, 0.0]} # should match lightning_track; other values may also work.
Training
python train.py --config one --dataset vfhq

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{
    chu2024gpavatar,
    title={{GPA}vatar: Generalizable and Precise Head Avatar from Image(s)},
    author={Xuangeng Chu and Yu Li and Ailing Zeng and Tianyu Yang and Lijian Lin and Yunfei Liu and Tatsuya Harada},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=hgehGq2bDv}
}

Acknowledgements

Part of our work is built on FLAME, StyleMatte, EMOCA and MICA. The GPAvatar logo was designed by Caihong Ning. We thank the authors for sharing their wonderful code and work.