Project Page | Video | Paper
Kangkan Wang, Sida Peng, Xiaowei Zhou, Jian Yang, Guofeng Zhang
Official implementation of Human Performance Capture With Dynamic Neural Radiance Fields.
Questions and discussions are welcome! Feel free to contact Kangkan Wang at wangkangkan@njust.edu.cn
conda create -n nerfcap python=3.7
conda activate nerfcap
pip install torch==1.6.0+cu102 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
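A quick way to confirm the environment was set up correctly (this snippet is not part of the repository, just a sanity check):

```python
# Sanity check: confirm PyTorch and CUDA are visible after installation.
import torch

print(torch.__version__)          # expect 1.6.0+cu102 from the command above
print(torch.cuda.is_available())  # should be True on a machine with CUDA 10.2
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```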
Download the DeepCap dataset here.
Download the corresponding pretrained model and put it at
$ROOT/data/trained_model/if_nerf/magdalena/latest.pth
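To verify the download, you can try loading the checkpoint in a Python shell. This is only a hedged sketch; the exact contents of latest.pth (network weights, optimizer state, epoch counter) are an assumption:

```python
# Hedged sketch: check that the downloaded checkpoint can be deserialized.
import torch

ckpt = torch.load("data/trained_model/if_nerf/magdalena/latest.pth",
                  map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # contents are an assumption; inspect what is actually stored
```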
Test and visualization
- Visualize all frames at test views
python run.py --type visualize --cfg_file configs/magdalena/magdalena.yaml exp_name magdalena
- Simultaneously extract mesh at each frame
python run.py --type visualize --cfg_file configs/magdalena/magdalena.yaml exp_name magdalena vis_mesh True
The results are located at
$ROOT/data/result/if_nerf/magdalena
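The exact files written under the result directory depend on the config, so the following is only a rough sketch (it assumes the renderings are saved as standard image files such as PNGs):

```python
# Hedged sketch: list and preview rendered frames from the result directory.
# Directory layout and file extension are assumptions; adjust to what run.py
# actually writes for your config.
from pathlib import Path
from PIL import Image

result_dir = Path("data/result/if_nerf/magdalena")
frames = sorted(result_dir.rglob("*.png"))
print(f"found {len(frames)} rendered images")
if frames:
    Image.open(frames[0]).show()  # open the first frame in the default viewer
```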
- Train
python train_net.py --cfg_file configs/magdalena/magdalena.yaml exp_name magdalena resume False
- Tensorboard
tensorboard --logdir data/record/if_nerf
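If you prefer to read the training curves programmatically instead of through the TensorBoard UI, an EventAccumulator sketch like the one below works on standard event files (the run directory and tag names are assumptions and depend on what the training code logs):

```python
# Hedged sketch: read logged scalars directly from the TensorBoard event files.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("data/record/if_nerf")  # point at a run directory that contains event files
acc.Reload()
for tag in acc.Tags()["scalars"]:              # tag names depend on what train_net.py logs
    events = acc.Scalars(tag)
    print(tag, len(events), "points, last value:", events[-1].value)
```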
If you find this code useful for your research, please use the following BibTeX entry.
@ARTICLE{NerfCap_TVCG22,
  title={NerfCap: Human Performance Capture With Dynamic Neural Radiance Fields},
  author={Wang, Kangkan and Peng, Sida and Zhou, Xiaowei and Yang, Jian and Zhang, Guofeng},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2022},
  pages={1-13},
  doi={10.1109/TVCG.2022.3202503}}