
Primary language: Python | License: BSD-3-Clause

🚀⏱️ Neural Space-time Model for dynamic multi-shot imaging


Neural space-time model (NSTM) is a computational image reconstruction framework that jointly estimates a scene and its motion dynamics by modeling their spatiotemporal relationship, without data priors or pre-training. It is especially useful for multi-shot imaging systems, which sequentially capture multiple measurements and are susceptible to motion artifacts when the scene is dynamic. NSTM exploits the temporal redundancy of dynamic scenes: this concept, widely used in video compression, assumes that a dynamic scene evolves smoothly over adjacent timepoints. By replacing the reconstruction matrix with a neural space-time model, motion-induced artifacts can be removed and sample dynamics resolved from the same set of raw measurements used for the conventional reconstruction.
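
Conceptually, the forward evaluation can be sketched as two small coordinate networks: a motion network that maps a spatial coordinate and time to a displacement, and a scene network that maps the warped coordinate to intensity. Below is a minimal, illustrative NumPy sketch of this idea only; the actual implementation in this repository differs in architecture, parameterization, and training (see the paper and code):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden, out_dim):
    """Initialize a tiny two-layer perceptron (illustrative only)."""
    return {
        "W1": rng.normal(size=(in_dim, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.normal(size=(hidden, out_dim)) * 0.1,
        "b2": np.zeros(out_dim),
    }

def mlp(params, coords):
    h = np.tanh(coords @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

# Motion network: (x, y, t) -> displacement (dx, dy).
motion_net = init_mlp(3, 32, 2)
# Scene network: warped (x, y) -> intensity.
scene_net = init_mlp(2, 32, 1)

def nstm_render(xy, t):
    """Evaluate the scene at time t: warp spatial coordinates by the
    motion network, then query the (static) scene network."""
    xyt = np.concatenate([xy, np.full((len(xy), 1), t)], axis=1)
    warped = xy + mlp(motion_net, xyt)
    return mlp(scene_net, warped)

# Render a 4x4 coordinate grid at two timepoints.
g = np.stack(
    np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4)), axis=-1
).reshape(-1, 2)
frame0 = nstm_render(g, 0.0)
frame1 = nstm_render(g, 0.5)
print("rendered frame shape:", frame0.shape)
```

In an actual reconstruction, both networks would be optimized jointly so that rendered frames, passed through the imaging system's forward model, match the raw measurements.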

The usage of NSTM is demonstrated on three example imaging systems: differential phase contrast microscopy (DPC), 3D structured illumination microscopy (SIM), and rolling-shutter DiffuserCam. There is also a guide to incorporating NSTM into your own imaging system!

See the full documentation here and the original paper here.

Demo on DPC: Open DPC Demo In Colab

Demo on SIM: Open SIM Demo In Colab

Installation

Installation instructions are available in the documentation.

Application on Differential Phase Contrast Microscopy (DPC)

Run the step-by-step example locally in this Jupyter notebook, or run it on Google Colab: Open DPC Demo In Colab

jupyter lab --notebook-dir=./nstm/examples

DPC results

Application on 3D Structured Illumination Microscopy (SIM)

Option 1: Follow the step-by-step example in a Jupyter notebook with dense microbead data. You may also run it on Google Colab: Open SIM Demo In Colab

Option 2: Run the Python script for reconstruction.

  0. Download the additional data from Google Drive and place the .npz files in the examples folder.

  1. Start the endoplasmic reticulum (ER)-labeled cell reconstruction from the command line. Replace er_cell with mito_cell for the mitochondria-labeled cell data.

    python nstm/sim3d_main.py --config er_cell
    

    The mito_cell reconstruction takes ~40 minutes on a single NVIDIA A6000 GPU (48GB); er_cell is slightly faster. er_cell also runs on a single NVIDIA RTX 3090 GPU (24GB) when batch_size is set to 1 in the .yaml file, whereas mito_cell requires close to 40GB of GPU memory because it has more image planes.

  2. The reconstruction results will be saved in the examples/checkpoint/ folder. The 3D reconstruction volume with three timepoints (each corresponding to an illumination orientation) will be saved as recon_filtered.tif, which can be viewed with Fiji. The recovered motion map will be saved as motion_dense_t.npy.

  3. Additional reconstruction parameters are stored in examples/configs/er_cell.yaml and examples/configs/mito_cell.yaml. To print the full parameter descriptions, run:

    python nstm/sim3d_main.py --helpfull
    
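    The saved motion map can be inspected directly from Python. A minimal sketch, assuming the default output path named in step 2 (the filtered 3D reconstruction is a TIFF, best viewed in Fiji):

```python
import os
import numpy as np

def load_motion_map(path="examples/checkpoint/motion_dense_t.npy"):
    """Load the recovered motion map saved by sim3d_main.py.

    Returns the array, or None if the reconstruction has not been
    run yet. The array's exact shape depends on the dataset."""
    if not os.path.exists(path):
        return None
    return np.load(path)

motion = load_motion_map()
if motion is None:
    print("No motion map found; run the reconstruction first.")
else:
    print("motion map shape:", motion.shape)
```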

3D SIM results

Paper

@article{cao2024neural,
  title={Neural space-time model for dynamic scene recovery in multi-shot computational imaging systems},
  author={Cao, Ruiming and Divekar, Nikita and Nu{\~n}ez, James and Upadhyayula, Srigokul and Waller, Laura},
  journal={bioRxiv 2024.01.16.575950},
  pages={2024--01},
  year={2024},
  publisher={Cold Spring Harbor Laboratory}
}