Fireflies is a wrapper for the Mitsuba Renderer that allows for rapid prototyping and generation of physically-based renderings and simulation data in a differentiable manner. It can be used, for example, to easily generate highly realistic medical imaging data for machine learning tasks or (its intended use) to test the reconstruction capabilities of structured light projection systems in simulated environments. I originally created it to research whether finding an optimal point-based laser pattern for structured light laryngoscopy can be reformulated as a gradient-based optimization problem.
This code accompanies the paper Fireflies: Photorealistic Simulation and Optimization of Structured Light Endoscopy accepted at SASHIMI 2024. 🎊
- Easy, PyTorch-like and pythonic scene randomization description. This library is made to be easily usable for everyone who regularly works with Python and PyTorch. We implement train() and eval() functionality from the get-go (see the sketch after this list).
- Integrable into online deep learning and machine learning tasks, as the Mitsuba renderer is differentiable w.r.t. the scene parameters.
- Simple animation description. Have a look at the examples.
- Single-shot structured light specific. You can easily test different projection patterns and reconstruction algorithms on randomized scenes, giving a good estimate of the quality and viability of patterns/systems/algorithms.
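As a teaser, switching between randomization modes follows PyTorch's familiar train/eval pattern. A minimal sketch, assuming the ff_scene wrapper from the quickstart below (the exact sampling behavior of each mode is demonstrated in the examples):

ff_scene.train()      # randomize() now draws uniformly sampled scene configurations
ff_scene.randomize()

ff_scene.eval()       # switch to evaluation-mode sampling
ff_scene.randomize()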
Make sure to create a conda environment first. I tested Fireflies on Python 3.10; it should, however, work with every Python version that is also supported by Mitsuba and PyTorch. I'm working on adding Fireflies to PyPI in the future.
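For example (the environment name is arbitrary):

conda create -n fireflies python=3.10
conda activate fireflies

Then install the necessary dependencies: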
pip install pywavefront geomdl
pip install torch
pip install mitsuba
To run the examples, you also need OpenCV:
pip install opencv-python
Finally, you can install Fireflies via:
git clone https://github.com/Henningson/Fireflies.git
cd Fireflies
pip install .
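You can verify that everything installed correctly with a quick import check:

python -c "import mitsuba, torch, fireflies"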
import mitsuba as mi
import fireflies as ff

mi.set_variant("cuda_ad_rgb")  # any Mitsuba variant available on your system

path = "my_scene.xml"  # placeholder: path to your exported Mitsuba scene
mi_scene = mi.load_file(path)
mi_params = mi.traverse(mi_scene)
ff_scene = ff.scene(mi_params)  # wrap the scene parameters with Fireflies

mesh = ff_scene.mesh_at(0)
mesh.rotate_z(-3.141, 3.141)  # randomize the rotation around z in [-pi, pi]

ff_scene.eval()
# ff_scene.train() generates uniformly sampled results instead
for i in range(0, 20):
    ff_scene.randomize()  # sample a new scene configuration
    mi.render(mi_scene)
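If you want to keep the rendered images, Mitsuba can write them to disk directly. A minimal sketch of the loop above with file output (file names and sample count are arbitrary choices):

for i in range(20):
    ff_scene.randomize()
    img = mi.render(mi_scene, spp=16)                  # render the randomized scene
    mi.util.write_bitmap(f"render_{i:03d}.png", img)   # save it as a PNG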
A bunch of different examples can be found in the examples folder. Note that I'm currently reworking these examples. They span from defining a simple scene to training neural networks and optimizing point-based structured light patterns. Ideally, you work through them one by one; the last examples include the experiments of the paper. They consist of:
- Hello World - How to wrap fireflies around your Mitsuba scene.
- General Transformations - Showcasing different affine transformations (see the sketch after this list).
- Parent Child - Defining hierarchical relationships for objects in the scene.
- Material Randomization - How to randomize material parameters.
- Light Randomization - How to randomize lights.
- Sampling - How to implement different sampling strategies for scene randomization.
- Animation (a) - Apply deformations via a deformation function.
- Animation (b) - Apply deformations by loading a set of obj-files from a folder.
- B-Spline Camera Trajectory - Load a B-Spline camera trajectory for randomization of colonoscopic data.
- Laser Pattern Creation - How to define and create the laser patterns highlighted in the paper.
- Laser Pattern Optimization - Laser pattern optimization to reduce ambiguities in correspondence estimation.
- Domain Specific Pattern Optimization: Gaussian Mean Localization - Optimize a laser pattern and small neural network that minimize a specific target function. For paper readers, this is the Gaussian optimization task. The complete experiments can be found in the paper branch.
- Domain Specific Pattern Optimization: Depth Completion (Vocal Fold/Laryngoscopy) - Optimize a laser pattern and gated convolutional neural network that infer dense depth maps from sparse depth input in a laryngoscopic setting. For paper readers, this is the Vocal Fold Depth Completion task. The complete experiments can be found in the paper branch.
- Domain Specific Pattern Optimization: Depth Completion (Colonoscopy) - Optimize a laser pattern and gated convolutional neural network that infer dense depth maps from sparse depth input in a colonoscopic setting. For paper readers, this is the Colon Depth Completion task. The complete experiments can be found in the paper branch.
- 3D Reconstruction Pipeline - Implementing a 3D reconstruction pipeline for evaluating a grid-based laser pattern.
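For the transformation examples, here is a minimal sketch of how randomized affine transformations attach to a mesh. rotate_z is taken from the quickstart above; rotate_x and translate_x are assumed analogues and may differ from the actual Fireflies API:

mesh = ff_scene.mesh_at(0)
mesh.rotate_z(-3.141, 3.141)  # rotation around z, sampled from [-pi, pi]
mesh.rotate_x(-0.5, 0.5)      # hypothetical analogue: rotation around x
mesh.translate_x(-0.1, 0.1)   # hypothetical analogue: translation along x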
You can easily generate a scene using Blender.
To export a scene in Mitsuba's required .xml format, you first need to install the Mitsuba Blender Add-On.
You can then export the scene via File -> Export.
Make sure to tick the ✅ Export IDs checkbox, as Fireflies infers the object type by checking the object names for specific keys, e.g. "mesh", "brdf", etc.
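To check that the ids were exported correctly, you can print the traversed scene parameters; the object ids should show up as key prefixes. A minimal sketch (the scene file name is a placeholder):

import mitsuba as mi
mi.set_variant("scalar_rgb")

mi_scene = mi.load_file("my_exported_scene.xml")
print(mi.traverse(mi_scene))  # lists all parameter keys, prefixed by object id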
The experiments of the paper can be found in the README of the paper branch.
Because optimizing a point-based laser pattern looks like fireflies that jet around. :)
Since I am now in the last year of my PhD, I won't really be able to work on this library further for the time being. Please open pull requests for features, add-ons, bug fixes, etc. I'd be very happy about any help. :)
A big thank you to Wenzel Jakob and his team for their wonderful work on the Mitsuba renderer. You should definitely check out their work: Mitsuba Homepage, Mitsuba GitHub.
Furthermore, this work was supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant STA662/6-1, Project-ID 448240908 and (partly) funded by the DFG – SFB 1483 – Project-ID 442419336, EmpkinS.
Please cite this work if it helps you with your research:
@InProceedings{HenningsonFireflies,
    author="TBD",
    title="Fireflies: Photorealistic Simulation and Optimization of Structured Light Endoscopy",
    booktitle="TBD",
    year="2024",
    pages="?",
    isbn="?"
}