
AlignFormer: Generating Aligned Pseudo-Supervision from Non-Aligned Data for Image Restoration in Under-Display Camera

Python 3.7 · PyTorch 1.7.1 · CUDA 10.1

This repository contains the implementation of the following paper:

Generating Aligned Pseudo-Supervision from Non-Aligned Data for Image Restoration in Under-Display Camera
Ruicheng Feng, Chongyi Li, Huaijin Chen, Shuai Li, Jinwei Gu, Chen Change Loy
Computer Vision and Pattern Recognition (CVPR), 2023

[Paper] [Project Page]

⭐ Come and check out our poster at West Building Exhibit Halls ABC 083 on TUE-PM (20/06/2023)!

⭐ If you find this project helpful, please consider starring this repo. Thanks! 🤗

Update

  • 2023.07: Release training code of AlignFormer.
  • 2023.06: Release inference code of AlignFormer.
  • 2023.03: This repo is created!

Dependencies and Installation

  • Python >= 3.7 (Anaconda or Miniconda recommended)
  • PyTorch >= 1.7.1
  • CUDA >= 10.1
  • Other required packages in requirements.txt
# git clone this repository
git clone https://github.com/jnjaby/AlignFormer.git
cd AlignFormer

# (Optional) create new anaconda env
conda create -n alignformer python=3.8 -y
conda activate alignformer

# install python dependencies
pip install -r requirements.txt
python setup.py develop
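
(Optional) A quick sanity check that the environment resolved correctly; this is just a sketch, not part of the repo:

# should print your PyTorch version, and True if CUDA is usable
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"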

Quick Inference

We provide quick test code with the pretrained model. The testing command below assumes single-GPU testing; please see TrainTest.md if you prefer to use Slurm.

Download Pre-trained Models:

Download the pretrained models from Google Drive to the experiments/pretrained_models folder.
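
For example (the checkpoint filename below is illustrative; keep whatever names the download provides):

# create the target folder and move the downloaded weights into place
mkdir -p experiments/pretrained_models
mv ~/Downloads/AlignFormer.pth experiments/pretrained_models/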

Dataset Preparation:

You can grab the data directly from Google Drive, unzip it, and put it into ./datasets. Note that images in AlignFormer are the results of our pre-trained model.
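
A minimal sketch of the unpacking step, assuming the download is a single archive (the archive name is illustrative):

# unpack the downloaded data into ./datasets
mkdir -p datasets
unzip AlignFormer_data.zip -d datasets/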

Dataset structure

├── AlignFormer
│   ├── test_sub
│   └── train
├── lq
│   ├── test_sub
│   └── train
├── mask
│   ├── test_sub
│   └── train
└── ref
    ├── test_sub
    └── train

Testing:

  1. Modify the paths to the dataset and the pretrained model in the following YAML file (see the illustrative excerpt after this list).

    ./options/test/AlignFormer_test.yml
  2. Run the test code:

    python -u basicsr/test.py -opt "options/test/AlignFormer_test.yml" --launcher="none"

    Check out the results in ./results.
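
An illustrative excerpt of what to edit in the config; the key names below assume the BasicSR config convention this repo builds on, so keep whatever keys the shipped file actually uses:

# options/test/AlignFormer_test.yml (illustrative excerpt, not verbatim)
datasets:
  test:
    dataroot_lq: ./datasets/lq/test_sub    # degraded UDC inputs
    dataroot_gt: ./datasets/ref/test_sub   # reference images (assumption)

path:
  pretrain_network_g: ./experiments/pretrained_models/AlignFormer.pth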

Training models:

To train AlignFormer, you will need to train the DAM module first. Then you can merge the pre-trained DAM into AlignFormer and train the whole model.

  1. Prepare the datasets. Please refer to Dataset Preparation.

  2. Modify the config file ./options/train/DAM_train.yml.

  3. Run the training code (Slurm training shown here). Kindly check out TrainTest.md and use single-GPU, distributed, or Slurm training as per your preference; a single-GPU sketch follows this list.

    srun -p [partition] --mpi=pmi2 --job-name=DAM --gres=gpu:2 --ntasks=2 --ntasks-per-node=2 --cpus-per-task=2 --kill-on-bad-exit=1 \
    python -u basicsr/train.py -opt "options/train/DAM_train.yml" --launcher="slurm"
  4. After training the DAM, modify the AlignFormer config file ./options/train/AlignFormer_train.yml.

  5. Run the training code (Slurm training).

    srun -p [partition] --mpi=pmi2 --job-name=AlignFormer --gres=gpu:2 --ntasks=2 --ntasks-per-node=2 --cpus-per-task=2 --kill-on-bad-exit=1 \
    python -u basicsr/train.py -opt "options/train/AlignFormer_train.yml" --launcher="slurm"
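
If you are not on a Slurm cluster, a minimal single-GPU sketch, assuming --launcher="none" disables distributed training as it does in the test command above (swap in AlignFormer_train.yml for the second stage):

# single-GPU training without Slurm (launcher disabled)
python -u basicsr/train.py -opt "options/train/DAM_train.yml" --launcher="none"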

All files logged during training, e.g., log messages, checkpoints, and snapshots, will be saved to the ./experiments directory.

Citation

If you find our repo useful for your research, please consider citing our paper:

@InProceedings{Feng_2023_Generating,
   author    = {Feng, Ruicheng and Li, Chongyi and Chen, Huaijin and Li, Shuai and Gu, Jinwei and Loy, Chen Change},
   title     = {Generating Aligned Pseudo-Supervision from Non-Aligned Data for Image Restoration in Under-Display Camera},
   booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
   month     = {June},
   year      = {2023},
}
@InProceedings{Feng_2021_Removing,
   author    = {Feng, Ruicheng and Li, Chongyi and Chen, Huaijin and Li, Shuai and Loy, Chen Change and Gu, Jinwei},
   title     = {Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Network},
   booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
   month     = {June},
   year      = {2021},
   pages     = {662-671}
}

License and Acknowledgement

This project is open-sourced under the NTU S-Lab License 1.0. Redistribution and use should follow this license. The code framework is mainly modified from BasicSR. Please refer to the original repo for more usage details and documentation.

Contact

If you have any questions, please feel free to contact us at ruicheng002@ntu.edu.sg.