
NeuMan: Neural Human Radiance Field from a Single Video

This repository is a reference implementation for NeuMan. NeuMan reconstructs both the background scene and an animatable human from a single video using neural radiance fields.

[Paper]

Video demos

Novel view and novel pose synthesis

[Bike] [Citron] [Parking lot] [Jogging] [Lab] [Seattle]

Compositional synthesis

[Handshake] [Dance]

Environment

To create the environment using Conda:

conda env create -f environment.yml

Alternatively, you can create the environment by executing:

conda create -n neuman_env python=3.7 -y;
conda activate neuman_env;
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch;
# For RTX 30 series GPU with CUDA version 11.x, please use:
# conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
conda install -c fvcore -c iopath -c conda-forge fvcore iopath;
conda install -c bottler nvidiacub;
conda install pytorch3d -c pytorch3d;
conda install -c conda-forge igl;
pip install opencv-python joblib open3d imageio tensorboardX chumpy lpips scikit-image ipython matplotlib;

Note that pytorch3d requires a specific version of PyTorch; in our case, pytorch=1.8.0.

Activate the environment:

conda activate neuman_env
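
A quick way to verify the environment (a minimal sanity check, not part of the repo) is to import the key dependencies and print their versions:

# check_env.py -- hypothetical helper, not part of the repo
import torch
import torchvision
import pytorch3d

print('torch:', torch.__version__)              # expected: 1.8.0
print('torchvision:', torchvision.__version__)  # expected: 0.9.0
print('pytorch3d:', pytorch3d.__version__)
print('CUDA available:', torch.cuda.is_available())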

Demo

  • Download SMPL weights:

    • Registration is required to download the UV map (Download UV map in OBJ format) from SMPL.
    • Download the neutral SMPL weights (SMPLIFY_CODE_V2.ZIP) from SMPLify, extract basicModel_neutral_lbs_10_207_0_v1.0.0.pkl, and rename it to SMPL_NEUTRAL.pkl.
    • Put all the downloaded files into the ./data/smplx folder with the following structure:
     .
     └── data
         └── smplx
             ├── smpl
             │   └── SMPL_NEUTRAL.pkl
             └── smpl_uv.obj
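
     To confirm the files are in place, a minimal sketch (verify_smpl.py below is hypothetical, not a repo script) loads the weights pickle and lists its keys:

     # verify_smpl.py -- hypothetical helper, assumes the layout above
     import os
     import pickle

     assert os.path.isfile('./data/smplx/smpl_uv.obj'), 'missing smpl_uv.obj'
     with open('./data/smplx/smpl/SMPL_NEUTRAL.pkl', 'rb') as f:
         # the SMPL pickle was written with Python 2; latin1 avoids decode errors
         smpl = pickle.load(f, encoding='latin1')
     print(sorted(smpl.keys()))  # expect entries such as 'v_template' and 'shapedirs'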
  • Download the NeuMan dataset and pretrained models:

    Run the following script to set up the data and the pretrained models:

    bash setup_data_and_models.sh
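
    After the script finishes, the later commands expect scenes under ./data and checkpoints under ./out; a quick check (the bike scene is used as the example throughout):

    # hypothetical sanity check for the downloaded layout
    import os
    print(os.path.isdir('./data/bike'))                           # scene data
    print(os.path.isfile('./out/bike_human/checkpoint.pth.tar'))  # pretrained human NeRF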
  • (Optional) Download the AMASS dataset for reposing:

    • The AMASS dataset is used for rendering novel poses, specifically by render_reposing.py and render_gathering.py.
    • We used the SFU mocap (SMPL+H G) subset; please download it from AMASS.
    • Put the downloaded mocap data into the ./data/SFU folder with the following structure:
     .
     └── data
         └── SFU
             ├── 0005
             ├── 0007
             │   ...
             └── 0018
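
     Each AMASS sequence is an .npz archive; a minimal sketch to inspect one (the file name below is hypothetical):

     # inspect_amass.py -- hypothetical helper for a downloaded SFU sequence
     import numpy as np

     seq = np.load('./data/SFU/0005/0005_Jogging001_poses.npz')
     print(seq['poses'].shape)             # (num_frames, 156) SMPL+H pose parameters
     print(seq['trans'].shape)             # (num_frames, 3) global root translation
     print(float(seq['mocap_framerate']))  # capture frame rate in Hz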
  • Render using the pretrained models:

    Render 360 views of a human:

    python render_360.py --scene_dir ./data/bike --weights_path ./out/bike_human/checkpoint.pth.tar --mode canonical_360

    Render test views of a sequence and evaluate the metrics (a sketch of the metric computation appears after this list):

    python render_test_views.py --scene_dir ./data/bike --weights_path ./out/bike_human/checkpoint.pth.tar

    Render novel poses with the background:

    python render_reposing.py --scene_dir ./data/bike --weights_path ./out/bike_human/checkpoint.pth.tar --motion_name=jumpandroll

    Render telegathering:

    python render_gathering.py --actors parkinglot seattle citron --scene_dir ./data/seattle --weights_path ./out/seattle_human/checkpoint.pth.tar
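
    For reference, the usual PSNR/SSIM/LPIPS computation with the packages installed above looks like the following sketch (illustrative only; it mirrors, but is not, the repo's evaluation code):

    # metrics_sketch.py -- illustrative only, not the repo's evaluation code
    import numpy as np
    import torch
    import lpips
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    loss_fn = lpips.LPIPS(net='alex')  # downloads AlexNet weights on first use

    def eval_pair(pred, gt):
        """pred, gt: float32 images in [0, 1], shape (H, W, 3)."""
        psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
        # channel_axis requires scikit-image >= 0.19
        ssim = structural_similarity(gt, pred, channel_axis=2, data_range=1.0)
        # LPIPS expects NCHW tensors scaled to [-1, 1]
        to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
        lp = loss_fn(to_t(pred), to_t(gt)).item()
        return psnr, ssim, lp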

Training

  • Download the NeuMan dataset.

  • Train the scene NeRF:

    python train.py --scene_dir ./data/bike/ --name=bike_background --train_mode=bkg

  • Train the human NeRF, loading the trained scene NeRF as the background (a driver sketch chaining both stages follows this list):

    python train.py --scene_dir ./data/bike --name=bike_human --load_background=bike_background --train_mode=smpl_and_offset
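
A minimal driver chaining both stages (a sketch; train_bike.py is hypothetical, the commands are the ones above, and the human stage must run second because --load_background refers to the finished scene NeRF):

# train_bike.py -- hypothetical driver for the two training stages
import subprocess

subprocess.run(['python', 'train.py', '--scene_dir', './data/bike/',
                '--name=bike_background', '--train_mode=bkg'], check=True)
subprocess.run(['python', 'train.py', '--scene_dir', './data/bike',
                '--name=bike_human', '--load_background=bike_background',
                '--train_mode=smpl_and_offset'], check=True)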

Use your own video

Citation

@inproceedings{jiang2022neuman,
  title={NeuMan: Neural Human Radiance Field from a Single Video},
  author={Jiang, Wei and Yi, Kwang Moo and Samei, Golnoosh and Tuzel, Oncel and Ranjan, Anurag},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}

License

The code is released under the terms described in the LICENSE file.