Implementation of the paper "Inverse Rendering of Translucent Objects using Shape-adaptive Importance Sampling". Accepted to Pacific Graphics 2024 Conference Track.
This is a differentiable renderer for reconstructing scattering parameters of translucent objects based on a neural BSSRDF model.
Our implementation builds on the path-space differentiable renderer PSDR-CUDA and its extension to BSSRDFs.
To run our code, you can either set up the environment yourself by following the instructions found here, or use the Docker container we provide, which comes with the necessary libraries installed. (Some, e.g. OptiX, may still require manual installation.)
docker pull spockthewizard/shapeadaptiveir:latest
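If you use the container, a minimal sketch of entering it with GPU access (this assumes the NVIDIA Container Toolkit is installed; the mount path is illustrative):

# Run the container interactively with GPU access, mounting a local checkout of this repo
docker run --gpus all -it -v /path/to/this/repo:/workspace spockthewizard/shapeadaptiveir:latest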
This code was tested on Ubuntu 20.04.6 LTS.
mkdir build
cd build
../cmake.sh # A script for running cmake and make
cd .. && source setpath.sh # Add to PYTHONPATH
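As a sanity check, the compiled Python module should now be importable; the module name psdr_cuda is assumed from the upstream PSDR-CUDA project:

# Should print the module location without raising an ImportError
python -c "import psdr_cuda; print(psdr_cuda.__file__)"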
.
├── src/bsdf
│   ├── vaesub.cpp           # code for the shape-adaptive BSSRDF model
│   └── scattereigen.h       # helper code
├── variables                # model weights
├── data_stats.json          # metadata for running the neural model
├── data_kiwi_soap           # data (imgs, lights, obj)
│   ├── imgs
│   ├── obj
│   └── light
└── examples/python/scripts  # experiment code
The provided weights were trained with a more lightweight architecture than the one proposed in the original forward-model paper; we observed no significant performance degradation in our experiments.
- Prepare your data: put your images, lights, and obj file in /data_kiwi_soap.
- Set the necessary constants (e.g. your paths) in /examples/python/constants.py.
- Run the following (an example invocation is sketched after this list):

cd examples/python/scripts
./exp_ours.sh ${SCENE_NAME}
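For instance, assuming your scene data sits in a directory named kiwi (the scene name here is purely illustrative):

./exp_ours.sh kiwi  # replace "kiwi" with the name of your scene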
We provide an item from our synthetic dataset here.