Paper | Project Page | arXiv
This repository is the official implementation of our work FaceG2E.
Text-Guided 3D Face Synthesis -- From Generation to Editing
Yunjie Wu, Yapeng Meng, Zhipeng Hu, Lincheng Li, Haoqian Wu, Kun Zhou, Weiwei Xu, Xin Yu
In CVPR 2024
FUXI AILab, Netease, Hangzhou, China
conda env create -f faceg2e.yaml
conda activate faceg2e
bash install_extra_lib.sh
This implementation has only been tested in the following environment:
- System: Ubuntu 18.04
- GPU: A30
- CUDA Version: 12.0
- CUDA Driver Version: 525.78.01
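After installation, a quick sanity check can confirm that the GPU is visible. This is a minimal sketch, assuming the faceg2e environment provides PyTorch (which the rendering and SDS code depend on):

```bash
# Quick environment sanity check (assumes PyTorch is installed in the faceg2e env).
conda activate faceg2e
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
nvidia-smi   # compare the reported driver/CUDA versions with the tested setup above
```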
Download our pretrained texture diffusion checkpoints and put them in the ./ckpts directory.
Download the HIFI3D++ 3DMM files (AI-NExT-Albedo-Global.mat and HIFI3D++.mat) and put them in the ./HIFI3D directory.
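After downloading, the repository root should look roughly like the layout below (the checkpoint filenames are placeholders; keep whatever names the download provides):

```
FaceG2E/
├── ckpts/            # pretrained texture diffusion checkpoints go here
│   └── ...
└── HIFI3D/           # HIFI3D++ 3DMM files go here
    ├── AI-NExT-Albedo-Global.mat
    └── HIFI3D++.mat
```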
bash demo_geometry_generation.sh
bash demo_texture_generation.sh
bash demo_editing.sh
The results are saved in exp/demo.
- During editing, you need to input a token index that indicates which token determines the consistency-preservation mask. If your edit is a global effect on the face, you can input 0 as the index.
- The weighting parameters in demo_editing.sh control the editing effect, and you can adjust them yourself: a higher edit_prompt_cfg makes the edit more obvious, and a higher w_reg_diffuse keeps unrelated regions more consistent (see the sketch after this list).
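As an illustration of where these knobs typically live, the sketch below mimics the kind of variables you would adjust in demo_editing.sh. The parameter names edit_prompt_cfg and w_reg_diffuse come from this README; the exact variable names, values, and syntax in the script may differ:

```bash
# Hypothetical excerpt in the style of demo_editing.sh -- check the script for the real names.
edit_prompt_cfg=100   # higher => the editing effect becomes more obvious
w_reg_diffuse=0.5     # higher => unrelated regions stay more consistent with the original
token_indice=0        # 0 => treat the edit as a global effect on the whole face
```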
Generation of Scarlett Johansson, Cate Blanchett, and Tom Cruise.
Generation of Neteyam in Avatar, Thanos, and Kratos.
Editing of "Make his eye mask blue", "Make him chubby", and "Turn his eyemask golden".
If you have any questions, please contact Yunjie Wu (jiejiangwu@outlook.com).
If you use our work in your research, please cite our publication:
@inproceedings{wu2024text,
title={Text-Guided 3D Face Synthesis-From Generation to Editing},
author={Wu, Yunjie and Meng, Yapeng and Hu, Zhipeng and Li, Lincheng and Wu, Haoqian and Zhou, Kun and Xu, Weiwei and Yu, Xin},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={1260--1269},
year={2024}
}
There are some functions or scripts in this implementation that are based on external sources. We thank the authors for their excellent work. Here are some great resources we benefited from:
- Deep3DFaceRecon_pytorch for the rendering framework code.
- Nvdiffrast for differentiable rendering.
- REALY for the 3D Morphable Model.
- Stable-dreamfusion for SDS code.
- BoxDiff for token-image cross-attention computation.