Strong Gravitational Lensing Parameter Estimation with Vision Transformer

This repo contains all of our code for applying Vision Transformer models to the image-based multi-regression task of parameter and uncertainty estimation for strong gravitational lensing systems. The paper is published in the ECCV 2022 Workshops (see the arXiv link).

Authors: Kuan-Wei Huang, Geoff Chih-Fan Chen, Po-Wen Chang, Sheng-Chieh Lin, ChiaJung Hsu, Vishal Thengane, and Joshua Yao-Yu Lin

Tag v3.0.0 marks the version of the code at the time the paper was submitted.

Data generation / preparation using Lenstronomy

  • This notebook generates the images (data) and parameters (targets) that make up the strong lensing dataset; a Lenstronomy sketch follows this list.
  • This notebook processes the dataset: data splitting and target normalization.
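
For readers unfamiliar with Lenstronomy, the sketch below shows the kind of simulation the generation notebook builds on: a lens mass model (here an SIE plus external shear) deflects light from a Sersic source onto a pixel grid. The model choices, parameter values, and grid settings are illustrative assumptions, not the notebook's actual configuration.

    import lenstronomy.Util.simulation_util as sim_util
    from lenstronomy.Data.imaging_data import ImageData
    from lenstronomy.Data.psf import PSF
    from lenstronomy.LensModel.lens_model import LensModel
    from lenstronomy.LightModel.light_model import LightModel
    from lenstronomy.ImSim.image_model import ImageModel

    # Pixel grid and PSF (illustrative: 64x64 pixels at 0.08"/pixel, Gaussian PSF)
    kwargs_data = sim_util.data_configure_simple(numPix=64, deltaPix=0.08)
    data = ImageData(**kwargs_data)
    psf = PSF(psf_type='GAUSSIAN', fwhm=0.1, pixel_size=0.08)

    # Lens mass model: singular isothermal ellipsoid + external shear
    lens_model = LensModel(lens_model_list=['SIE', 'SHEAR'])
    kwargs_lens = [
        {'theta_E': 1.2, 'e1': 0.05, 'e2': -0.03, 'center_x': 0.0, 'center_y': 0.0},
        {'gamma1': 0.02, 'gamma2': 0.01},
    ]

    # Background source light: elliptical Sersic profile
    source_model = LightModel(light_model_list=['SERSIC_ELLIPSE'])
    kwargs_source = [
        {'amp': 10.0, 'R_sersic': 0.2, 'n_sersic': 1.5,
         'e1': 0.1, 'e2': 0.0, 'center_x': 0.05, 'center_y': 0.05},
    ]

    # Render the lensed image; the lens parameters serve as regression targets
    image_model = ImageModel(data, psf, lens_model_class=lens_model,
                             source_model_class=source_model)
    image = image_model.image(kwargs_lens=kwargs_lens, kwargs_source=kwargs_source)

Repeating this with randomly drawn lens and source parameters yields the (image, parameters) pairs that the preparation notebook then splits and normalizes.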

Source code for training models

Train models
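
The training section above only points at the source files, so here is a hedged sketch of the core setup: a Vision Transformer backbone whose classification head is replaced by a regression head over the lens parameters. It uses torchvision's vit_b_16 as a stand-in; the repo's actual architecture, loss (the paper also estimates uncertainties), and hyperparameters may differ, and NUM_TARGETS is a hypothetical count.

    import torch
    import torch.nn as nn
    from torchvision.models import vit_b_16

    NUM_TARGETS = 8  # hypothetical number of lens parameters to regress

    # ViT backbone with the ImageNet classification head swapped for a regression head
    model = vit_b_16(weights=None)
    model.heads.head = nn.Linear(model.heads.head.in_features, NUM_TARGETS)

    criterion = nn.MSELoss()  # placeholder point-estimate loss
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_one_epoch(model, loader, device="cuda"):
        # loader yields (images, targets); images are assumed resized to 3x224x224,
        # since vit_b_16 expects that input shape, and targets are normalized parameters.
        model.to(device).train()
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()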

Model prediction and visualization

  • predict.py is the source code for making predictions with a trained model; a minimal inference sketch follows this list.
  • This notebook uses predict.py to make the predictions for our ECCV paper.
  • visualization.py contains objects and functions for visualization.
  • This notebook uses visualization.py to make the figures for our ECCV paper.
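
Below is a hedged sketch of the inference step that predict.py wraps; the checkpoint path and de-normalization arrays are illustrative assumptions, and the actual interface of predict.py may differ.

    import torch

    # Reuse the model from the training sketch above (hypothetical checkpoint path)
    model.load_state_dict(torch.load("checkpoints/vit_lens.pt", map_location="cpu"))
    model.eval()

    with torch.no_grad():
        preds = model(test_images)  # shape (batch, NUM_TARGETS), normalized predictions

    # Undo the target normalization from the preparation notebook
    # (target_mean and target_std are hypothetical arrays saved during preparation)
    preds = preds * target_std + target_mean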