Code for our CVPR 2023 paper "ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision", which draws inspiration from NeRF and presents a new ray supervision scheme for reconstructing scenes from single-view shadows.
Project Page | Paper | Video | Dataset
(Teaser video: `github_teaser.mp4`)
```bash
git clone https://github.com/gerwang/ShadowNeuS.git
cd ShadowNeuS
conda create -n shadowneus python=3.9
conda activate shadowneus
pip install -r requirements.txt
```
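To sanity-check the environment (assuming `requirements.txt` installs PyTorch, which the runner depends on; this one-liner is a suggestion, not part of the original instructions), you can verify that CUDA is visible:

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```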
Here we show how to test our code on an example scene. Before testing, you need to:

- Download the example data and unzip it to `./public_data/nerf_synthetic`.
- Download the pretrained checkpoint of `lego_specular_point` here and unzip it to `./exp`.
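After extraction, the layout should look roughly like this (a sketch inferred from the paths above; the exact contents of each folder depend on the archives):

```
ShadowNeuS/
├── public_data/
│   └── nerf_synthetic/
│       └── lego_specular_point/
└── exp/
    └── lego_specular_point/
        └── point_color/
```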
To synthesize novel views:

```bash
python exp_runner.py --mode validate_view --conf confs/point_color.conf --case lego_specular_point --is_continue --data_sub 1 --test_mode
```

See the results at `./exp/lego_specular_point/point_color/novel_view/validations_fine/`.
To extract the reconstructed mesh:

```bash
python exp_runner.py --mode validate_mesh --conf confs/point_color.conf --case lego_specular_point --is_continue --data_sub 1 --test_mode
```

See the results at `./exp/lego_specular_point/point_color/meshes/00150000.ply`.
To relight the scene:

```bash
python exp_runner.py --mode validate_relight --conf confs/point_color.conf --case lego_specular_point --is_continue --data_sub 1 --test_mode
```

See the results at `./exp/lego_specular_point/point_color/novel_light/validations_fine/`.
To relight training image 0 with a point light and a gold material:

```bash
python exp_runner.py --mode validate_relight_0_point_gold --conf confs/point_color.conf --case lego_specular_point --is_continue --test_mode
```

See the results at `./exp/lego_specular_point/point_color/novel_light_gold/validations_fine/`.
The `--mode` option can be `validate_relight_<img_idx>_<light>_<material>`, where `<img_idx>` is the image index in the training dataset, `<light>` can be `point` or `dir`, which determines whether a point light or a directional light is used, and `<material>` can be `gold` or `emerald`.
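For example, a small loop can sweep every light/material combination for image 0; this just expands the mode pattern above using the flags from the earlier command (adjust the config and case name to your own run):

```bash
# Relight training image 0 under each light type and material.
for light in point dir; do
  for material in gold emerald; do
    python exp_runner.py --mode validate_relight_0_${light}_${material} \
        --conf confs/point_color.conf --case lego_specular_point \
        --is_continue --test_mode
  done
done
```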
To evaluate the predicted normal and depth maps:

```bash
python exp_runner.py --mode validate_normal_depth --conf confs/point_color.conf --case lego_specular_point --is_continue --data_sub 1 --test_mode
```

See the results at `./exp/lego_specular_point/point_color/quantitative_compare/`.
To render under environment lighting, first run:

```bash
python exp_runner.py --mode validate_env_0_0 --conf confs/point_color.conf --case lego_specular_point --is_continue --data_sub 1 --test_mode
python exp_runner.py --mode validate_env_0_0.25 --conf confs/point_color.conf --case lego_specular_point --is_continue --data_sub 1 --test_mode
python exp_runner.py --mode validate_env_0_0.5 --conf confs/point_color.conf --case lego_specular_point --is_continue --data_sub 1 --test_mode
python exp_runner.py --mode validate_env_0_0.75 --conf confs/point_color.conf --case lego_specular_point --is_continue --data_sub 1 --test_mode
```
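Equivalently, as a loop, assuming the mode pattern is `validate_env_<img_idx>_<fraction>` (inferred from the four commands above):

```bash
# Render image 0 at each of the four light fractions.
for frac in 0 0.25 0.5 0.75; do
  python exp_runner.py --mode validate_env_0_${frac} \
      --conf confs/point_color.conf --case lego_specular_point \
      --is_continue --data_sub 1 --test_mode
done
```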
Download the environment maps of Industrial Workshop Foundry, Thatch Chapel, Blaubeuren Night, and J&E Gray Park, and extract them to `./public_data/envmap`.
```bash
python env_relight.py --work_path ./exp/lego_specular_point/point_color/ \
    --env_paths ./public_data/envmap/industrial_workshop_foundry_4k.exr,./public_data/envmap/thatch_chapel_4k.exr,./public_data/envmap/blaubeuren_night_4k.exr,./public_data/envmap/je_gray_park_4k.exr \
    --save_names super_workshop,super_chapel,super_night,super_park \
    --super_sample --n_theta 128 --n_frames 128 --device_ids 0,1,2,3
```
See the results at `./exp/lego_specular_point/point_color/super_workshop`, `super_chapel`, `super_night`, and `super_park`. The above command was tested on four RTX 3090 GPUs.
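If you have fewer GPUs, shorten `--device_ids` accordingly. A single-GPU, single-environment sketch (an untested adaptation of the command above, reusing only its documented flags; expect a much longer runtime):

```bash
python env_relight.py --work_path ./exp/lego_specular_point/point_color/ \
    --env_paths ./public_data/envmap/thatch_chapel_4k.exr \
    --save_names super_chapel \
    --super_sample --n_theta 128 --n_frames 128 --device_ids 0
```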
Output video: `super_env_relight.mp4`
You can download the training data from here.

To train from RGB inputs under point lighting, extract `point_light.zip`, move each scene to `./public_data/nerf_synthetic`, and run

```bash
python exp_runner.py --mode train --conf ./confs/point_color.conf --case <case_name>_specular_point
```
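For example, to train the lego scene used in the test section above:

```bash
python exp_runner.py --mode train --conf ./confs/point_color.conf --case lego_specular_point
```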
To train from binary shadow inputs under point lighting, extract `point_light.zip`, move each scene to `./public_data/nerf_synthetic`, and run

```bash
python exp_runner.py --mode train --conf ./confs/point_shadow.conf --case <case_name>_specular_point
```
To train from RGB inputs under directional lighting, extract `directional_light.zip`, move each scene to `./public_data/nerf_synthetic`, and run

```bash
python exp_runner.py --mode train --conf ./confs/directional_color.conf --case <case_name>_specular
```
To train from binary shadow inputs under directional lighting, extract `directional_light.zip`, move each scene to `./public_data/nerf_synthetic`, and run

```bash
python exp_runner.py --mode train --conf ./confs/directional_shadow.conf --case <case_name>_specular
```
For the vertical-down views, extract `vertical_down.zip`, move each scene to `./public_data/nerf_synthetic`, and run

```bash
python exp_runner.py --mode train --conf ./confs/point_shadow.conf --case <case_name>_upup
```
Extract `real_data.zip` to `./public_data` and run

```bash
python exp_runner.py --mode train --conf ./confs/real_data.conf --case <case_name>
```
You can download `DeepShadowData.zip` from their project page and unzip it to `./public_data`. Then run

```bash
python exp_runner.py --mode train --conf ./confs/deepshadow.conf --case <case_name>
```
Cite as below if you find this repository helpful:
```bibtex
@misc{ling2022shadowneus,
      title={ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision},
      author={Jingwang Ling and Zhibo Wang and Feng Xu},
      year={2022},
      eprint={2211.14086},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
The project structure is based on NeuS. Some code is borrowed from deep_shadow, IRON, and psnerf. Thanks to these great projects.