We recommend using Anaconda to set up the environment. Run the following commands:
```bash
# Clone the repo
git clone https://github.com/Minisal/HeRF.git
cd HeRF
# Create a conda environment
conda create --name herf python=3.9.12
conda activate herf
# Prepare pip
conda install pip
pip install --upgrade pip
# Install cudatoolkit; alternatively, configure the paths to point at your own CUDA installation
conda install cudatoolkit=11.3
# Install PyTorch
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
# Install the other dependencies
pip install tqdm scikit-image opencv-python configargparse lpips imageio-ffmpeg kornia tensorboard
```
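After installation you can sanity-check that the main packages resolve. A minimal stdlib sketch (the package list mirrors the pip commands above; `cv2` is the import name of opencv-python):

```python
import importlib.util

# Packages installed above; "cv2" is opencv-python's import name.
packages = ["torch", "torchvision", "tqdm", "cv2", "lpips", "kornia"]

def check(pkgs):
    """Return a dict mapping package name -> whether it is importable."""
    return {p: importlib.util.find_spec(p) is not None for p in pkgs}

for name, ok in check(packages).items():
    print(f"{name}: {'ok' if ok else 'MISSING'}")
```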
The training script is `train.py`. To train:

```bash
python train.py --config configs/param_exp/scannet/paper_01/step_1k.txt
```
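The files passed via `--config` are plain-text configs read by configargparse, one `key = value` pair per line. A minimal stdlib sketch of that format (the keys below are hypothetical; the real ones are defined by `train.py`'s argument parser):

```python
# Minimal sketch of the "key = value" config format used by configargparse;
# "#" starts a comment. The keys shown are hypothetical examples only.

def parse_config(text):
    """Parse 'key = value' lines into a dict of strings."""
    opts = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        key, _, value = line.partition("=")
        opts[key.strip()] = value.strip()
    return opts

sample = """
# hypothetical example config
expname = lego
basedir = ./log
n_iters = 30000
"""
print(parse_config(sample))
```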
To render images from a pre-trained checkpoint, simply pass `--render_only 1` and `--ckpt path/to/your/checkpoint`. You may also need to specify what you want to render, e.g. `--render_test 1`, `--render_train 1`, or `--render_path 1`:

```bash
python train.py --config configs/lego.txt --ckpt path/to/your/checkpoint --render_only 1 --render_test 1
```

The rendering results are saved in your checkpoint folder.
You can also export the mesh by passing `--export_mesh 1`:

```bash
python train.py --config configs/lego.txt --ckpt path/to/your/checkpoint --export_mesh 1
```
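Mesh export in NeRF-style pipelines typically evaluates the density field on a 3-D grid and extracts a level set with marching cubes. A hedged sketch using scikit-image (already in the dependency list), with a synthetic sphere density standing in for the trained model:

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a trained density field: a sphere of radius 0.8
# inside the [-1, 1]^3 cube. A real export would query the model instead.
n = 64
xs = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
density = 1.0 - np.sqrt(x**2 + y**2 + z**2) / 0.8  # positive inside the sphere

# Extract the density == 0 level set as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(density, level=0.0)
print(verts.shape, faces.shape)  # (V, 3) vertices, (F, 3) triangle indices
```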
Note: For mesh extraction, please re-train the model instead of using the pretrained checkpoints we provide, because some rendering parameters have changed.
If you find our code or paper helpful, please consider citing:

```bibtex
@inproceedings{yang2024ijcnn,
  title={{HeRF}: A Hierarchical Framework for Efficient and Extendable New View Synthesis},
  author={Xiaoyan Yang and Dingbo Lu and Wenjie Liu and Ling You and Yang Li and Changbo Wang},
  booktitle={IJCNN},
  year={2024}
}
```