unsupervisedMFBD

Learning to do multiframe blind deconvolution unsupervisedly



Observations from ground-based telescopes are severely perturbed by the Earth's atmosphere. Adaptive optics techniques partly overcome this limitation, but image selection or post-facto image reconstruction methods are still routinely needed to reach the diffraction limit of a telescope. Deep learning has recently been used to accelerate these image reconstructions. However, the current deep neural networks are trained with supervision, so standard deconvolution algorithms must be applied beforehand to generate the training sets.

Our aim is to propose an unsupervised method that can be trained directly with observations, and to validate it with data from the FastCam instrument.

We use a neural model composed of three neural networks that are trained end-to-end by leveraging the linear image formation theory to construct a physically-motivated loss function.

The analysis of the trained neural model shows that multiframe blind deconvolution can be trained self-supervisedly, i.e., using only observations. The outputs of the network are the corrected images together with estimates of the instantaneous wavefronts. The neural model is of the order of 1000 times faster than standard optimization-based deconvolution. With some additional work, the model could be used in real time at the telescope.
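The physically-motivated loss built from linear image formation theory can be sketched as follows. Each short-exposure frame is modeled as the object convolved with an instantaneous PSF, and the PSF is obtained from the estimated wavefront via Fourier optics. All tensor names and the exact normalization below are illustrative assumptions, not the repository's API:

```python
import torch

def mfbd_loss(object_est, wavefronts, frames, pupil):
    """Illustrative MFBD loss: each observed frame should match the
    estimated object convolved with the PSF of its estimated wavefront."""
    loss = 0.0
    for phi, frame in zip(wavefronts, frames):
        # PSF from the wavefront via Fourier optics: |FFT(P * exp(i*phi))|^2
        field = pupil * torch.exp(1j * phi)
        psf = torch.abs(torch.fft.fft2(field)) ** 2
        psf = psf / psf.sum()
        # Convolution implemented as a product in Fourier space
        frame_est = torch.fft.ifft2(
            torch.fft.fft2(object_est) * torch.fft.fft2(psf)
        ).real
        loss = loss + torch.mean((frame_est - frame) ** 2)
    return loss / len(frames)
```

Minimizing this loss with respect to the object and the wavefronts is what allows training without any supervised targets: only the observed frames enter the loss.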

Training

This repository contains all the infrastructure needed to retrain the neural approach. However, you will need to build a training set and make the necessary modifications in the train.py file to use it. Since no supervision is required, you only need to provide bursts of images for the training. You will also need to adapt the diameters of the telescope's primary and secondary mirrors, the observing wavelength, and the pixel size in arcsec for the training to proceed correctly.
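A minimal sketch of the two ingredients mentioned above: the telescope/observation parameters and a file of image bursts. The numeric values are rough examples, and the HDF5 dataset name and layout are assumptions, not the format actually expected by train.py:

```python
import h5py
import numpy as np

# Illustrative telescope/observation parameters; adapt to your instrument.
telescope_diameter = 2.56    # primary mirror diameter [m]
secondary_diameter = 0.51    # secondary mirror (central obscuration) [m]
wavelength = 800e-9          # observing wavelength [m]
pixel_size = 0.0303          # plate scale [arcsec/pixel]

# A possible HDF5 layout for the training bursts: (n_bursts, n_frames, ny, nx).
# Random data stands in for real observed bursts here.
bursts = np.random.rand(4, 100, 64, 64).astype('float32')
with h5py.File('training.h5', 'w') as f:
    f.create_dataset('bursts', data=bursts)
```

Because the training is unsupervised, no deconvolved "ground truth" images need to be stored alongside the bursts.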

We have tested the code with PyTorch 1.5, but it should work with all versions above 1.0.

Dependencies

numpy
h5py
torch
tqdm
argparse
scipy

Validation

This repository contains an example observation of sigOri with 200 frames, together with the network trained for observations with the Nordic Optical Telescope (NOT) at 800 nm. The file validation.py shows how to apply the neural deconvolution to this example.
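A minimal sketch of how a burst might be prepared before being fed to the trained network. The normalization and the (1, n_frames, ny, nx) input layout are assumptions; validation.py is the authoritative reference for what the model actually expects:

```python
import numpy as np
import torch

def prepare_burst(frames):
    """Normalize each short-exposure frame to unit mean and add a batch
    axis. The layout (1, n_frames, ny, nx) is an assumption; check
    validation.py for the layout the trained network actually expects."""
    frames = np.asarray(frames, dtype='float32')
    frames = frames / frames.mean(axis=(1, 2), keepdims=True)
    return torch.from_numpy(frames)[None, ...]

# Usage, e.g. with the sigOri example burst read via astropy:
#   from astropy.io import fits
#   burst = prepare_burst(fits.getdata('sigOri.fits'))
```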

Dependencies

numpy
matplotlib
astropy
torch
tqdm
skimage
scipy