- 2024/5/11: We are working on a journal extension of CoRA in which the eyeball position and radius are solved automatically from the data instead of being set manually as in the CVPR'24 version; this part of the code is now released! Stay tuned for the journal version of the paper.
- 2024/5/11: We have released the scripts and instructions to run on custom data!
- 2024/3/3: We have released the full code to run on our preprocessed dataset; see Run.md for more details.
This is a PyTorch implementation of the following paper:
High-Quality Facial Geometry and Appearance Capture at Home, CVPR 2024.
Yuxuan Han, Junfeng Lyu, and Feng Xu
Project Page | Video | Paper
Abstract: Facial geometry and appearance capture have demonstrated tremendous success in 3D scanning real humans in studios. Recent works propose to democratize this technique while keeping the results high quality, but they are still inconvenient for daily use, and they address the easier problem of capturing only facial skin. This paper proposes a novel method for high-quality face capture, featuring an easy-to-use system and the capability to model the complete face with skin, mouth interior, hair, and eyes. We reconstruct facial geometry and appearance from a single co-located smartphone flashlight sequence captured in a dim room where the flashlight is the dominant light source (e.g., rooms with curtains, or at night). To model the complete face, we propose a novel hybrid representation to effectively model both eyes and other facial regions, along with novel techniques to learn it from images. We apply a combined lighting model to compactly represent real illuminations and exploit a morphable face albedo model as a reflectance prior to disentangle diffuse and specular reflectance. Experiments show that our method can capture high-quality 3D relightable scans.
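For intuition about the co-located setting, here is a minimal PyTorch sketch of why it simplifies inverse rendering: with the flashlight at the camera center, the light direction coincides with the view direction and irradiance falls off with the squared distance to the camera. This is an illustrative, diffuse-only approximation we wrote for this README (the function name and the `flash_intensity` parameter are our own assumptions), not the paper's full model, which also handles specular reflectance and residual environment light.

```python
import torch

def colocated_flash_shading(points, normals, albedo, cam_pos, flash_intensity=1.0):
    """Illustrative sketch: Lambertian shading under a co-located flashlight.

    points, normals, albedo: (N, 3) tensors; cam_pos: (3,) tensor.
    """
    to_cam = cam_pos - points                          # vector toward camera == toward flash
    dist2 = (to_cam * to_cam).sum(-1, keepdim=True)    # squared distance to the flash
    light_dir = to_cam / dist2.clamp_min(1e-8).sqrt()  # unit light direction == view direction
    cos_theta = (normals * light_dir).sum(-1, keepdim=True).clamp_min(0.0)
    # Diffuse radiance with the inverse-square falloff of the point-like flash.
    return albedo / torch.pi * cos_theta * flash_intensity / dist2.clamp_min(1e-8)
```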
To use our codebase to create your own 3D relightable avatar, we provide the following documents:
- Env.md for code environment setup.
- If you just want to quickly run the code on our released dataset, you can go directly to Run.md.
- Capture.md for instructions on capturing video under our setup, i.e., a co-located video recorded in a dim room where the smartphone flashlight is the dominant light source.
- Preprocess.md for video preprocessing.
- Run.md for scripts to train our method on the preprocessed dataset and reconstruct a relightable avatar.
We also plan to create a video tutorial to help users create their own relightable avatars using our codebase. Stay tuned.
If you have any questions, please contact Yuxuan Han (hanyx22@mails.tsinghua.edu.cn).
This repository can only be used for personal/research/non-commercial purposes. Please cite the following paper if it helps your research:
@inproceedings{han2024cora,
  author    = {Han, Yuxuan and Lyu, Junfeng and Xu, Feng},
  title     = {High-Quality Facial Geometry and Appearance Capture at Home},
  booktitle = {CVPR},
  year      = {2024}
}
- The code builds on a number of wonderful projects, including Nerfacc, tinycudann, facer, metrical-tracker, AlbedoMM, RobustVideoMatting, and WildLight.
- Thanks to SoulShell for providing their Light Stage to help us conduct the comparison experiments.
- Thanks to Jingwang Ling and Zhibo Wang for helpful discussions.