
Shape-adaptive Inverse Rendering

Implementation of the paper "Inverse Rendering of Translucent Objects using Shape-adaptive Importance Sampling". Accepted to Pacific Graphics 2024 Conference Track.

Project Page

This is a differentiable renderer that reconstructs the scattering parameters of translucent objects using a neural BSSRDF model.
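At a high level, the optimization loop renders the scene with the current scattering parameters, compares the result against the captured images, and backpropagates the loss through the renderer. The sketch below illustrates this loop with a toy differentiable stand-in for the actual PSDR-CUDA render call; the function and parameter names are illustrative, not this repository's API.

import torch

# Toy stand-in for the differentiable render call. In this repository the
# image would come from PSDR-CUDA with the shape-adaptive neural BSSRDF.
def render(albedo, sigma_t):
    return albedo * torch.exp(-sigma_t)

target = render(torch.tensor(0.8), torch.tensor(1.5))  # "captured" reference

albedo = torch.tensor(0.5, requires_grad=True)
sigma_t = torch.tensor(1.0, requires_grad=True)
optimizer = torch.optim.Adam([albedo, sigma_t], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(render(albedo, sigma_t), target)
    loss.backward()  # gradients flow through the differentiable renderer
    optimizer.step()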

Installation

Our implementation builds on the path-space differentiable renderer PSDR-CUDA and its extension to BSSRDFs.

To run our code, you can set up the environment yourself by following the instructions found here.

We also provide a Docker container with the necessary libraries installed (some, e.g. OptiX, may still require manual installation):

docker pull spockthewizard/shapeadaptiveir:latest

This code was tested on Ubuntu 20.04.6 LTS.

Build

mkdir build
cd build
../cmake.sh # A script for running cmake and make
cd .. && source setpath.sh # Add to PYTHONPATH
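
If the build succeeded and setpath.sh is sourced, the Python bindings should import cleanly. The module name below follows PSDR-CUDA's Python bindings and is an assumption here; adjust it if your build exports a different name.

# Run inside Python after `source setpath.sh`.
import psdr_cuda
print(psdr_cuda.__file__)  # should point into this repository's build output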

Folder Structure

.
├── src/bsdf
│   ├── vaesub.cpp       # code for shape-adaptive BSSRDF model
│   └── scattereigen.h   # helper code
├── variables            # model weights
├── data_stats.json      # metadata for running the neural model
├── data_kiwi_soap       # data (imgs, lights, obj)
│   ├── imgs
│   ├── obj
│   └── light
└── examples/python/scripts  # experiment code

The provided weights were trained with a lighter-weight architecture than the one proposed in the original forward-model paper; we observed no significant performance degradation in our experiments.
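
The file data_stats.json presumably holds normalization statistics consumed by the neural model; its exact schema is not documented here, so a quick inspection like the following minimal sketch shows what it actually contains.

import json

# Print the top-level keys of the metadata file; treat the output, not this
# sketch, as the source of truth for the schema.
with open("data_stats.json") as f:
    stats = json.load(f)
print(sorted(stats))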

Running Experiments

  1. Prepare your data: put your images, lights, and obj file in /data_kiwi_soap

  2. Set the necessary constants (e.g. your paths) in /examples/python/constants.py (a hypothetical sketch of such constants follows step 3)

  3. Run the following commands:

cd examples/python/scripts
./exp_ours.sh ${SCENE_NAME}
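
As a purely hypothetical illustration of the path constants step 2 asks for (the real variable names live in examples/python/constants.py and may differ):

import os

# Hypothetical names; check examples/python/constants.py for the real ones.
DATA_ROOT = os.path.expanduser("~/shapeAdaptiveIR/data_kiwi_soap")
IMG_DIR = os.path.join(DATA_ROOT, "imgs")
OBJ_DIR = os.path.join(DATA_ROOT, "obj")
LIGHT_DIR = os.path.join(DATA_ROOT, "light")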

Dataset

We provide an item from our synthetic dataset here.