
Awesome Deep Optics/End-to-End Optical Design

A curated list of awesome deep optics papers, inspired by awesome-computer-vision.

Deep optics/end-to-end optical design learns the optical elements simultaneously with the image processing network, with two main goals:

  • Encode more information from the physical world, for example, hyperspectral imaging.
  • Reduce physical size and cost of an imaging system, for example, compact camera.

In deep optics, we usually care about two things:

  • Differentiable simulation. A differentiable image formation model enables us to optimize the optics together with the network, so I will classify papers according to their image formation models. Existing methods usually treat diffraction and aberration separately: they either simplify the optical system into a series of thin elements to capture wave-optical effects, or perform ray tracing to capture the full geometry of the system.
  • Fabrication. When reading the papers, do pay attention to their fabrication/manufacturing methods in the experiments.
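To make the "differentiable simulation" idea above concrete, here is a minimal sketch of an end-to-end optimization step. Everything here is a toy assumption: the PSF is a Gaussian whose width stands in for a real optical parameter, the "scene" is a single point source, and the gradient is taken by finite differences instead of the autograd frameworks (PyTorch, JAX) that actual deep optics pipelines use.

```python
import numpy as np

def psf_from_optics(theta):
    # Stand-in for a differentiable optics simulation: a Gaussian PSF whose
    # width plays the role of the learnable optical parameter theta.
    x = np.arange(-8, 9)
    g = np.exp(-x**2 / (2.0 * theta**2))
    psf = np.outer(g, g)
    return psf / psf.sum()

def render(scene, psf_small):
    # Sensor image = scene convolved with the PSF. FFT-based circular
    # convolution is enough for a sketch; real pipelines also handle
    # boundaries, sensor noise, and quantization explicitly.
    H, W = scene.shape
    psf = np.zeros((H, W))
    h, w = psf_small.shape
    psf[:h, :w] = psf_small
    psf = np.roll(psf, (-(h // 2), -(w // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

# One optimization step on the optical parameter via finite differences;
# an autograd framework would provide this gradient automatically through
# the full optics + reconstruction-network pipeline.
scene = np.zeros((32, 32))
scene[16, 16] = 1.0                      # point source as a toy scene
loss = lambda t: np.sum((render(scene, psf_from_optics(t)) - scene) ** 2)
theta = 2.0
grad = (loss(theta + 1e-3) - loss(theta - 1e-3)) / 2e-3
theta -= 0.1 * grad                      # a sharper PSF lowers this loss
```

The key point is simply that the loss is a differentiable function of the optical parameter, so the same optimizer that trains the network can also shape the optics.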

Knowledge Base

The following are some materials I think will help you enter this field.

  • [1996 Book] Introduction to Fourier Optics (McGraw-Hill Series in Electrical and Computer Engineering). link
  • [2007 Book] Modern Optical Engineering. link
  • [2011 Book] Computational Fourier Optics: A MATLAB Tutorial. link
  • [2012 Siggraph course] Computational displays: combining optical fabrication, computational processing, and perceptual tricks to build the displays of the future. link
  • [2019 PhD thesis] Ray-based methods for simulating aberrations and cascaded diffraction in imaging systems. link
  • [2020 Siggraph course] Deep optics: joint design of optics and image recovery algorithms for domain specific cameras. link
  • [2022 Siggraph course] Differentiable cameras and displays. link

Papers

1. Wave propagation model

In the wave propagation model, each optical element (DOE, lens, aperture, etc.) is represented as a phase mask. This idealized optics is easy to simulate, but it cannot accurately model optical aberrations.

Single DOE or metasurface

  • 2016 Encoded diffractive optics for full-spectrum computational imaging. paper, supp
  • 2016 The diffractive achromat full spectrum computational imaging with diffractive optics. paper, supp, video, project
  • 2018 End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging. paper, supp, project, code
  • 2019 Compact Snapshot Hyperspectral Imaging with Diffracted Rotation. paper, supp, project
  • 2020 Learned rotationally symmetric diffractive achromat for full-spectrum computational imaging. paper, supp, project, video
  • 2021 Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics. paper, supp, project, video
  • 2021 Neural nano-optics for high-quality thin lens imaging. paper, supp, project, code
  • 2022 Quantization-aware Deep Optics for Diffractive Snapshot Hyperspectral Imaging. paper, supp, code

DOE + Thin lens (not optimizable)

  • 2020 Learning Rank-1 Diffractive Optics for Single-shot High Dynamic Range Imaging. paper, supp, project
  • 2020 Deep Optics for Single-shot High-dynamic-range Imaging. paper, project, video, code
  • 2020 End-to-end Learned, Optically Coded Super-resolution SPAD Camera. paper, supp, project
  • 2021 Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation. paper, supp, project, code
  • 2022 End-to-end snapshot compressed super-resolution imaging with deep optics. paper, supp
  • 2022 Seeing Through Obstructions with Diffractive Cloaking. paper, project, code
  • 2022 Hybrid diffractive optics design via hardware-in-the-loop methodology for achromatic extended-depth-of-field imaging. paper, supp

Lens (with Zernike basis)

Others (coded aperture, ...)

  • 2020 Spectral DiffuserCam: lensless snapshot hyperspectral imaging with a spectral filter array. paper, code
  • 2021 Mask-ToF: Learning Microlens Masks for Flying Pixel Correction in Time-of-Flight Imaging. paper, project, code
  • 2021 Shift-variant color-coded diffractive spectral imaging system. paper, video, code

2. Ray tracing model

Ray tracing is the most common technique in optical design (e.g., ZEMAX and CodeV). In the field of deep optics, people usually compute the point spread function (PSF) and convolve it with the input, or perform ray-tracing-based rendering to simulate sensor images. Most ray tracing works are incoherent, but there are also some works on coherent ray tracing.
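The "trace rays, then histogram them into a PSF" idea can be sketched with a single paraxial thin lens. This is a deliberately reduced toy: one ideal surface parameterized by a focal length f, a 2D meridional slice, and an on-axis point source. A real deep-optics tracer refracts each ray through every surface with Snell's law, but the structure — differentiable ray operations followed by sensor-plane binning — is the same.

```python
import numpy as np

def trace_psf(f, z_obj=1.0, z_sensor=1.0 / 9.0, n_rays=2001, bins=64, half=5e-4):
    # Paraxial thin-lens ray trace for an on-axis point source (2D slice).
    # All operations below remain differentiable in f.
    h = np.linspace(-5e-3, 5e-3, n_rays)   # ray heights filling the aperture [m]
    u = h / z_obj                           # ray angle arriving at the lens
    u2 = u - h / f                          # thin-lens refraction: u' = u - h/f
    y = h + u2 * z_sensor                   # ray height on the sensor plane
    psf, _ = np.histogram(y, bins=bins, range=(-half, half))
    return psf / psf.sum()

# In focus (z_sensor = 1/(1/f - 1/z_obj)) the rays converge to one spot;
# defocus spreads the energy into the blur the network must then undo.
sharp = trace_psf(0.1, z_sensor=1.0 / 9.0)
blurry = trace_psf(0.1, z_sensor=0.12)
```

Note that a hard histogram is not differentiable at bin boundaries; published differentiable tracers replace it with soft binning, Gaussian splatting of ray intersections, or direct rendering.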

Lens

  • 2019 Learned large field-of-view imaging with thin-plate optics. project, video, code
  • 2021 End-to-end complex lens design with differentiable ray tracing. paper, project
  • 2021 End-to-end computational optics with a singlet lens for large depth-of-field imaging. paper
  • 2021 End-to-end learned single lens design using fast differentiable ray tracing. paper
  • 2021 dO: A differentiable engine for Deep Lens design of computational imaging systems. paper, project, code
  • 2022 Computational Optics for Mobile Terminals in Mass Production. paper
  • 2022 The Differentiable Lens: Compound Lens Search over Glass Surfaces and Materials for Object Detection. paper, code
  • 2023 Curriculum Learning for ab initio Deep Learned Refractive Optics. paper, video, code
  • 2023 Image Quality Is Not All You Want: Task-Driven Lens Design for Image Classification. paper
  • 2023 Large depth-of-field ultra-compact microscope by progressive optimization and deep learning. paper, code
  • 2023 Revealing the preference for correcting separated aberrations in joint optic-image design. paper

Others

  • 2021 Towards self-calibrated lens metrology by differentiable refractive deflectometry. paper, project, code
  • 2021 End-to-end sensor and neural network design using differential ray tracing. paper
  • 2022 Adjoint Nonlinear Ray Tracing. paper

3. Network Representation

The latest approach is to model a group of optical systems with a network. The network takes optical parameters (e.g., curvatures) as input and outputs the PSF. By training on a large number of simulated samples, the network learns a continuous interpolation over the optical parameters. In end-to-end training, we can then back-propagate gradients through the network to obtain gradients for the optical parameters.
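The interface described above can be sketched as a tiny MLP mapping optical parameters to a flattened PSF. The weights here are random placeholders; in the actual papers the network is pre-trained on PSFs produced by a ray-tracing simulator and then frozen, serving as a fast differentiable proxy during end-to-end training.

```python
import numpy as np

class PSFNet:
    # Toy PSF surrogate: optical parameters in, PSF out.
    def __init__(self, n_params=1, hidden=32, psf_size=7, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 1.0, (hidden, n_params))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (psf_size * psf_size, hidden))
        self.psf_size = psf_size

    def __call__(self, params):
        h = np.tanh(self.W1 @ params + self.b1)
        logits = self.W2 @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()            # softmax keeps the prediction a valid PSF:
        return p.reshape(self.psf_size, self.psf_size)  # non-negative, sums to 1

net = PSFNet()
psf = net(np.array([0.02]))     # e.g. a single surface curvature as input
```

The appeal of this representation is speed and smoothness: once trained, evaluating the network (and its gradient) is far cheaper than re-running a full wave or ray simulation at every training step.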

  • 2021 Deep learning-enabled framework for automatic lens design starting point generation. paper, project
  • 2021 Differentiable Compound Optics and Processing Pipeline Optimization for End-To-end Camera Design. paper, project
  • 2023 Aberration-Aware Depth-from-Focus. paper

Contribution

Please feel free to open pull requests or email (xinge.yang@kaust.edu.sa) to contribute to this repo.

Licenses

CC0

To the extent possible under law, Xinge Yang has waived all copyright and related or neighboring rights to this work.