
DeepSUM: Deep neural network for Super-resolution of Unregistered Multitemporal images (ESA PROBA-V challenge)


DeepSUM is a novel Multi Image Super-Resolution (MISR) deep neural network that exploits both spatial and temporal correlations to recover a single high-resolution image from multiple unregistered low-resolution images.

This repository contains a Python/TensorFlow implementation of DeepSUM, trained and tested on the PROBA-V dataset provided by ESA's Advanced Concepts Team in the context of the European Space Agency's Kelvin competition.

DeepSUM is the winner of the PROBA-V SR challenge.

BibTeX reference:

@article{molini2019deepsum,
  title={DeepSUM: Deep neural network for Super-resolution of Unregistered Multitemporal images},
  author={Molini, Andrea Bordone and Valsesia, Diego and Fracastoro, Giulia and Magli, Enrico},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  volume={58},
  number={5},
  pages={3644--3656},
  year={2020},
  publisher={IEEE}
}

Setup to get started

Make sure you have Python 3 and all the required Python packages installed:

pip install -r requirements.txt

Load data from the Kelvin competition and create the training and validation sets

  • Download the PROBA-V dataset from the Kelvin competition and save it under ./dataset_creation/probav_data
  • Load the dataset from the directories and save it to pickles by running the Save_dataset_pickles.ipynb notebook
  • Run the Create_dataset.ipynb notebook to create the training and validation datasets for both the NIR and RED bands
  • To save RAM, we advise extracting the best 9 images of each imageset based on the quality masks: run the Save_best9_from_dataset.ipynb notebook after Create_dataset.ipynb (the selection criterion is sketched below). Depending on the dataset you want to use (full or best 9), set the 'full' parameter in the config file accordingly.
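
The best-9 selection amounts to ranking the images of each imageset by how many clear pixels their quality masks report and keeping the top ones. A minimal sketch of that criterion, assuming (T, H, W) numpy arrays (the notebook is authoritative; the names and shapes here are illustrative):

import numpy as np

def best_k_images(lr_images, masks, k=9):
    """Keep the k images of an imageset whose masks flag the most clear pixels.

    lr_images: (T, H, W) low-resolution views of one scene
    masks:     (T, H, W) quality maps, 1 = clear pixel, 0 = concealed
    """
    clear_fraction = masks.reshape(masks.shape[0], -1).mean(axis=1)
    best = np.argsort(clear_fraction)[::-1][:k]   # indices of the k clearest images
    return lr_images[best], masks[best]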

Usage

In config_files/ you can place your configuration before starting to train the model:

"lr" : learning rate
"batch_size" batch size
"skip_step": validation frequency,
"dataset_path": directory with training set and validation set created by means of Create_dataset.ipynb,
"n_chunks": number of pickles in which the training set is divided,
"channels": number of channels of input images,
"T_in": number of images per scene,
"R": upscale factor,
"full": use the full dataset with all images or the best 9 for each imageset,
"patch_size_HR": size of input images,
"border": border size to take into account shifts in the loss and psnr computation,
"spectral_band": NIR or RED,
"RegNet_pretrain_dir": directory with RegNet pretraining checkpoint,
"SISRNet_pretrain_dir": directory with SISRNet pretraining checkpoint,

Run DeepSUM_train.ipynb to train a MISR model on the training dataset just generated. If a tensorboard_dir directory is found in checkpoints/, training will resume from the latest checkpoint; otherwise the RegNet and SISRNet weights will be initialized from the checkpoints contained in the pretraining_checkpoints/ directory. These weights come from the pretraining procedure explained in the DeepSUM paper.
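
This resume-or-pretrain logic boils down to checking for an existing checkpoint before falling back to the pretrained sub-networks. A rough TensorFlow 1.x sketch, to be called after the graph is built (the variable scope names 'RegNet' and 'SISRNet' are assumptions, not necessarily those used in the notebooks):

import os
import tensorflow as tf

def initialize(sess, config, tensorboard_dir):
    """Resume from checkpoints/<tensorboard_dir> if present, otherwise load the
    RegNet and SISRNet pretraining weights into a freshly initialized model."""
    latest = tf.train.latest_checkpoint(os.path.join('checkpoints', tensorboard_dir))
    if latest is not None:
        tf.train.Saver().restore(sess, latest)  # resume the full model
        return
    sess.run(tf.global_variables_initializer())
    for scope, ckpt_dir in [('RegNet', config['RegNet_pretrain_dir']),
                            ('SISRNet', config['SISRNet_pretrain_dir'])]:
        var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope)
        tf.train.Saver(var_list).restore(sess, tf.train.latest_checkpoint(ckpt_dir))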

Challenge checkpoints

DeepSUM has been trained for both the NIR and RED bands. The checkpoints/ directory contains the final weights used to produce the super-resolved test images for the final ESA challenge submission:

DeepSUM_NIR_lr_5e-06_bsize_8

DeepSUM_NIRpretraining_RED_lr_5e-06_bsize_8

Validation

During training, only the best 9 images of each imageset are considered for the score. Once training is complete, you can compute a final evaluation on the validation set that also exploits the other images available in each imageset. To do so, run Sliding_window_evaluation.ipynb.
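
The score follows the Kelvin challenge metric: a bias-corrected PSNR computed on clear pixels only and maximized over all alignments of the super-resolved and target images within a ±border window. A sketch of that metric, assuming intensities normalized to [0, 1] (the official scoring code is authoritative):

import numpy as np

def shifted_cpsnr(sr, hr, hr_mask, border=3):
    """Best bias-corrected PSNR over all (2*border+1)**2 alignments of sr and hr."""
    size = sr.shape[0] - 2 * border
    sr_crop = sr[border:border + size, border:border + size]
    best = -np.inf
    for dy in range(2 * border + 1):
        for dx in range(2 * border + 1):
            hr_crop = hr[dy:dy + size, dx:dx + size]
            clear = hr_mask[dy:dy + size, dx:dx + size].astype(bool)
            bias = (hr_crop[clear] - sr_crop[clear]).mean()  # remove brightness offset
            cmse = ((hr_crop[clear] - sr_crop[clear] - bias) ** 2).mean()
            best = max(best, -10.0 * np.log10(cmse))
    return best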

Testing

  • Run the Create_testset.ipynb notebook under dataset_creation/ to create the dataset with the test LR images
  • To test the trained model on new LR images and get the corresponding super-resolved images, run DeepSUM_superresolve_testdata.ipynb (a note on saving the outputs follows below).
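
Since PROBA-V images ship as 16-bit PNGs, it is convenient to store the network output in the same format. A small sketch, assuming the output is normalized to [0, 1] (scikit-image is just one of several libraries that can write uint16 PNGs):

import numpy as np
from skimage import io

def save_sr_png(sr, path):
    """Write a super-resolved image as a 16-bit grayscale PNG, PROBA-V's native format."""
    img = np.clip(sr, 0.0, 1.0)
    io.imsave(path, np.round(img * 65535.0).astype(np.uint16))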

Authors & Contacts

DeepSUM is based on work by team SuperPip from the Image Processing and Learning group of Politecnico di Torino: Andrea Bordone Molini (andrea.bordone AT polito.it), Diego Valsesia (diego.valsesia AT polito.it), Giulia Fracastoro (giulia.fracastoro AT polito.it), Enrico Magli (enrico.magli AT polito.it).