


DARTS

ACM TOG (SIGGRAPH Asia 24 Journal Track)

DARTS: Diffusion Approximated Residual Time Sampling for Time-of-flight Rendering in Homogeneous Scattering Media

Qianyue He, Dongyu Du, Haitian Jiang, Xin Jin*

Coming soon. I never thought I would say this one day, but yeah, for real, coming soon (patent-related issues). The code will nevertheless be open-sourced as the supplementary material included in the submission, so even in the unlikely event that I forget to update this repo, anyone in need can find the code.

This is the official repository of our paper: DARTS: Diffusion Approximated Residual Time Sampling for Time-of-flight Rendering in Homogeneous Scattering Media.

That said, the code release will be postponed, though it will most likely land before the paper is officially published. The code is composed of two renderers: a modified pbrt-v3 and a modified Tungsten.

A kind reminder so you don't get misled: "diffusion" here does not refer to DDPM-style generative AI. It is an optics concept describing the diffusion of photons within a participating medium. The only place PyTorch is used in this work is the precomputation of the EDA direction sampling table, which takes merely 5 seconds with torch.compile and involves no back-propagation. Yeah, I know, this is old-fashioned; sorry if you clicked on this repo hoping to see how diffusion models are employed.
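For readers curious what "precomputing a direction sampling table" means in practice, here is a minimal toy sketch of the general pattern (tabulate a distribution once, then invert it for sampling). This is my own illustration with a made-up cosine-angle pdf, not the actual EDA table or any code from this repository; in the real pipeline such a function could additionally be wrapped with `torch.compile`, and no gradients are ever needed.

```python
import torch

def build_inverse_cdf_table(pdf_fn, n_bins=1024, n_samples=256):
    """Tabulate the inverse CDF of a 1D direction distribution.

    pdf_fn: unnormalized pdf over cos(theta) in [-1, 1] (toy placeholder).
    Returns a lookup table mapping uniform u in [0, 1] to cos(theta).
    """
    # Evaluate the pdf on a dense grid and accumulate it into a CDF.
    x = torch.linspace(-1.0, 1.0, n_bins)
    cdf = torch.cumsum(pdf_fn(x), dim=0)
    cdf = cdf / cdf[-1]  # normalize so the CDF spans [0, 1]
    # Invert: for each uniform sample u, find cos(theta) with CDF >= u.
    u = torch.linspace(0.0, 1.0, n_samples)
    idx = torch.searchsorted(cdf, u).clamp(max=n_bins - 1)
    return x[idx]

# Toy forward-peaked pdf, exponential in the cosine (purely illustrative).
table = build_inverse_cdf_table(lambda c: torch.exp(2.0 * c))
```

At render time, sampling a direction then reduces to drawing a uniform number and indexing this table, which is why the precomputation cost (a few seconds) is paid only once.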

The code base is huge (it includes several modified external packages used by pbrt-v3), so it takes time to release code that is guaranteed to compile and run. The arXiv version of this paper is already available (I used the IEEE template to avoid it being recognized as a SIGGRAPH submission), and judging by the reviews, the method in this work can be easily reproduced. So if you have any questions about the implementation before the code is uploaded, feel free to open an issue and discuss them with me.

Apart from my co-authors, I would like to extend my personal gratitude to Yang Liu, the first author of "Temporally sliced photon primitives for time-of-flight rendering". The implementation of the photon point methods in our work is based on his code base, and discussions with him really pushed forward the modifications to the Tungsten renderer. His work on camera-unwarped transient rendering is solid and inspiring, and definitely deserves more attention.