End-to-end Implementation of HumanNeRF with a Custom Dataset
Paper: HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video
Create and activate a virtual environment.
conda create --name humannerf python=3.7
conda activate humannerf
Install the required packages.
pip install -r requirements.txt
Download the gender-neutral SMPL model from here and unpack mpips_smplify_public_v2.zip.
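If you prefer to script the unpacking step, a minimal Python sketch is shown below; the archive and target paths are placeholders, and the target should match the SMPL_DIR used in the copy step that follows.

import zipfile

# Placeholder paths: point these at the downloaded archive and at the
# directory you intend to use as SMPL_DIR in the next step.
archive = "mpips_smplify_public_v2.zip"
smpl_dir = "/path/to/smpl"

with zipfile.ZipFile(archive) as zf:
    zf.extractall(smpl_dir)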
Copy the SMPL model.
SMPL_DIR=/path/to/smpl
MODEL_DIR=$SMPL_DIR/smplify_public/code/models
cp $MODEL_DIR/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl third_parties/smpl/models
Follow this page to remove Chumpy objects from the SMPL model.
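The linked page gives the official procedure; the sketch below only illustrates the idea, assuming chumpy is installed so the original pickle can be loaded. The output filename SMPL_NEUTRAL.pkl is an assumption here; use whatever name the linked instructions and the repository's configs expect.

import pickle
import numpy as np
import chumpy  # noqa: F401 -- required so the original SMPL pickle can be unpickled

src = "third_parties/smpl/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl"
dst = "third_parties/smpl/models/SMPL_NEUTRAL.pkl"  # assumed output name

with open(src, "rb") as f:
    model = pickle.load(f, encoding="latin1")

cleaned = {}
for key, value in model.items():
    # Chumpy arrays expose the numpy array protocol, so np.array() strips
    # the chumpy wrapper while keeping the numeric data unchanged.
    if "chumpy" in str(type(value)):
        cleaned[key] = np.array(value)
    else:
        cleaned[key] = value

with open(dst, "wb") as f:
    pickle.dump(cleaned, f)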
git clone https://github.com/IVL-PKU/easyHumanNeRF.git
cd easyHumanNeRF/
Put your images under the folder ./workspace/demo/.
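If your source is a monocular video rather than a folder of images, one way to dump frames is sketched below, assuming OpenCV is available; the video path, output layout, and filename pattern are placeholders, so check the repository for the exact layout e2e_train.py expects.

import os
import cv2

video_path = "my_video.mp4"      # placeholder input video
out_dir = "./workspace/demo"     # adjust if the pipeline expects a subfolder

os.makedirs(out_dir, exist_ok=True)
cap = cv2.VideoCapture(video_path)

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Write frames with zero-padded names so they sort in temporal order.
    cv2.imwrite(os.path.join(out_dir, f"{idx:06d}.png"), frame)
    idx += 1
cap.release()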
python e2e_train.py --workspace ./workspace/demo/
Render the input frames (i.e., the observed motion sequence).
python run.py \
--type movement \
--cfg configs/human_nerf/wild/monocular/adventure.yaml
Run free-viewpoint rendering on a particular frame (e.g., frame 128).
python run.py \
--type freeview \
--cfg configs/human_nerf/wild/monocular/adventure.yaml \
freeview.frame_idx 128
Render the learned canonical appearance (T-pose).
python run.py \
--type tpose \
--cfg configs/human_nerf/wild/monocular/adventure.yaml
In addition, you can find the rendering scripts in scripts/wild.
The schedule
- End-to-end training of HumanNeRF
- Detailed README
- Acceleration
- Multi-view HumanNeRF
easyHumanNeRF is an integration of HumanNeRF, VIBE, and YOLOv7. If you find it helpful, please give the above works a star. Thanks!