VSR-Transformer (Coming soon)

By Jiezhang Cao, Yawei Li, Kai Zhang, Luc Van Gool

This paper proposes a new Transformer for video super-resolution (VSR), called VSR-Transformer. Each VSR-Transformer block contains a spatial-temporal convolutional self-attention layer and a bidirectional optical flow-based feed-forward layer, which together improve VSR performance. This repository is the official implementation of "Video Super-Resolution Transformer".
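
The block below is a minimal, self-contained PyTorch sketch of those two ingredients (joint space-time attention over convolutional Q/K/V features, followed by a feed-forward step that also sees flow-warped features). It is illustrative only: the class, the warping helper, and all shapes and hyper-parameters are assumptions, not the module implemented in this repository.

    # Illustrative sketch ONLY. This simplified block mirrors the two ingredients the
    # paper names (spatial-temporal convolutional self-attention, then a feed-forward
    # step guided by optical flow), but every class name, shape, and design detail
    # below is an assumption -- it is not the module implemented in this repository.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    def flow_warp(x, flow):
        """Warp features x (N, C, H, W) with a dense flow field (N, H, W, 2)."""
        n, _, h, w = x.shape
        gy, gx = torch.meshgrid(torch.arange(h, device=x.device),
                                torch.arange(w, device=x.device), indexing='ij')
        grid = torch.stack((gx, gy), dim=2).float() + flow          # absolute sampling positions
        grid_x = 2.0 * grid[..., 0] / max(w - 1, 1) - 1.0           # normalize to [-1, 1]
        grid_y = 2.0 * grid[..., 1] / max(h - 1, 1) - 1.0
        return F.grid_sample(x, torch.stack((grid_x, grid_y), dim=3), align_corners=True)


    class ToyVSRTransformerBlock(nn.Module):
        """Spatial-temporal attention + flow-guided feed-forward, in toy form."""

        def __init__(self, channels=64, heads=4):
            super().__init__()
            self.qkv = nn.Conv2d(channels, channels * 3, 3, padding=1)       # convolutional Q/K/V
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
            self.ffn = nn.Sequential(
                nn.Conv2d(channels * 2, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1))

        def forward(self, feats, flows):
            # feats: (N, T, C, H, W) frame features; flows: (N, T, H, W, 2) flow to a reference frame.
            n, t, c, h, w = feats.shape
            q, k, v = self.qkv(feats.reshape(n * t, c, h, w)).chunk(3, dim=1)

            def tokens(x):  # every spatial position of every frame becomes one token
                return x.reshape(n, t, c, h * w).permute(0, 1, 3, 2).reshape(n, t * h * w, c)

            attn_out, _ = self.attn(tokens(q), tokens(k), tokens(v))
            attn_out = attn_out.reshape(n, t, h * w, c).permute(0, 1, 3, 2).reshape(n * t, c, h, w)
            attn_out = attn_out + feats.reshape(n * t, c, h, w)              # residual connection
            # Feed-forward that also sees the features warped toward the reference frame.
            warped = flow_warp(attn_out, flows.reshape(n * t, h, w, 2))
            out = self.ffn(torch.cat((attn_out, warped), dim=1)) + attn_out
            return out.reshape(n, t, c, h, w)


    if __name__ == '__main__':
        block = ToyVSRTransformerBlock()
        feats = torch.randn(1, 3, 64, 16, 16)       # 3 frames of 64-channel features
        flows = torch.zeros(1, 3, 16, 16, 2)        # zero flow, i.e. no motion
        print(block(feats, flows).shape)            # torch.Size([1, 3, 64, 16, 16])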

Dependencies and Installation

  1. Clone the repository

    git clone https://github.com/caojiezhang/VSR-Transformer.git
  2. Install dependent packages

    cd VSR-Transformer
    pip install -r requirements.txt
  3. Install the project in development mode (a quick sanity check is sketched below the list)

    python setup.py develop
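
If the install succeeded, the project should be importable as a regular Python package. A minimal sanity check, assuming the package keeps the upstream BasicSR name `basicsr`:

    # Minimal post-install sanity check. Assumes the develop install above registered
    # the upstream package name `basicsr`; adjust the import if your checkout differs.
    import torch
    import basicsr  # noqa: F401  -- the import only succeeds if `setup.py develop` worked

    print('PyTorch version:', torch.__version__)
    print('CUDA available:', torch.cuda.is_available())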

Dataset Preparation

  • Please refer to DatasetPreparation.md for more details.
  • The descriptions of currently supported datasets (torch.utils.data.Dataset classes) are in Datasets.md.
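
For orientation, the toy loader below shows the kind of clip/frame folder layout (e.g. datasets/REDS/train_sharp/<clip>/<frame>.png) that BasicSR-style video datasets usually consume. It is a hypothetical illustration, not one of the dataset classes documented in Datasets.md; the actual paths, keys, and preprocessing are set in the YAML option files.

    # Hypothetical illustration only: a minimal clip loader for a BasicSR-style layout
    # (datasets/<name>/<split>/<clip>/<frame>.png). The real dataset classes, keys, and
    # augmentations are those documented in Datasets.md and configured via YAML.
    import os
    from glob import glob

    import cv2
    import numpy as np
    import torch
    from torch.utils.data import Dataset


    class FolderClipDataset(Dataset):
        """Returns all frames of one clip as a float tensor of shape (T, C, H, W) in [0, 1]."""

        def __init__(self, root):
            self.clips = sorted(p for p in glob(os.path.join(root, '*')) if os.path.isdir(p))

        def __len__(self):
            return len(self.clips)

        def __getitem__(self, idx):
            frame_paths = sorted(glob(os.path.join(self.clips[idx], '*.png')))
            frames = [cv2.cvtColor(cv2.imread(p), cv2.COLOR_BGR2RGB) for p in frame_paths]
            return torch.from_numpy(np.stack(frames)).permute(0, 3, 1, 2).float() / 255.0


    # Example: clips = FolderClipDataset('datasets/REDS/train_sharp'); clips[0].shape -> (T, 3, H, W)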

Training

  • Please refer to the training configuration files under options/train for more details.

    # Train on REDS
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/train_vsrTransformer_x4_REDS.yml --launcher pytorch
    # Train on Vimeo-90K
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/train_vsrTransformer_x4_Vimeo.yml --launcher pytorch

Testing

  • Please refer to the testing configuration files under options/test for more details.

    # Test on REDS
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/test_vsrTransformer_x4_REDS.yml --launcher pytorch
    
    # Test on Vimeo-90K
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/test_vsrTransformer_x4_Vimeo.yml --launcher pytorch
    
    # Test on Vid4
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/test_vsrTransformer_x4_Vid4.yml --launcher pytorch
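
For a quick spot check of saved results outside the test pipeline, BasicSR's metric helpers can be called directly. A small sketch, using placeholder paths and default settings (the crop border and Y-channel options behind the reported numbers are defined in the test YAML files):

    # Sketch only: compare one restored frame against its ground truth with BasicSR's
    # PSNR helper. The two paths are placeholders; crop_border / Y-channel settings
    # for the paper's numbers come from the test YAML options.
    import cv2
    from basicsr.metrics import calculate_psnr

    sr = cv2.imread('path/to/restored_frame.png')      # placeholder: a frame written by test.py
    gt = cv2.imread('path/to/ground_truth_frame.png')  # placeholder: the matching GT frame
    print('PSNR: %.2f dB' % calculate_psnr(sr, gt, crop_border=0))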

Citation

If you use the code of our paper, please cite:

@article{cao2021vsrt,
  title={Video Super-Resolution Transformer},
  author={Cao, Jiezhang and Li, Yawei and Zhang, Kai and Van Gool, Luc},
  journal={arXiv},
  year={2021}
}

Acknowledgments

This repository is built on BasicSR. If you use this repository, please also consider citing BasicSR.