
DDAN

Introduction

This is the implementation of Learning a Deep Dual Attention Network for Video Super-Resolution (IEEE TIP).

[image] The architecture of our proposed deep dual attention network (DDAN).

Environment

  • python==3.6
  • tensorflow==1.13.1

Models

Download the trained DDAN model from the Baiduyun link we provide (access code: zelr). Unzip it and place the files in the DDAN_x4 directory.

Installation

  • numpy==1.16.4
  • scipy==1.2.1
  • Pillow==8.1.2
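
If it helps, the pinned versions above can be installed together with TensorFlow via pip; this assumes a Python 3.6 environment matching the one listed under Environment.

# install the pinned dependencies (assumes Python 3.6)
pip install tensorflow==1.13.1 numpy==1.16.4 scipy==1.2.1 Pillow==8.1.2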

Testing

For testing, you can test a single video with the testvideo() function or a set of videos with testvideos(). Please change the test video directory to your own path before running.

# testvideos()
python main.py
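
Below is a minimal sketch of how the entry point might be organized. Only testvideo() and testvideos() are named in this README; the directory layout, the stub body, and the ./test path are assumptions for illustration, so adapt them to the actual main.py.

import os

def testvideo(video_dir):
    # Placeholder: the real implementation loads the trained DDAN_x4 model
    # and super-resolves every frame found in video_dir.
    print('testing', video_dir)

def testvideos(test_root='./test'):
    # Assumed layout: one sub-directory of frames per video under test_root.
    for name in sorted(os.listdir(test_root)):
        path = os.path.join(test_root, name)
        if os.path.isdir(path):
            testvideo(path)

if __name__ == '__main__':
    testvideos()            # or: testvideo('./test/calendar')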

Training

You can also train your own DDAN with the train() function. Before training your models, download the training data into the data directory.

# model.train()
python main.py
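
Video SR training pipelines of this kind typically pair HR frames with bicubically downsampled LR frames; the sketch below shows that x4 preparation step using the Pillow dependency listed above. The HR/LR directory names are hypothetical, so check the downloaded data directory for the layout this code actually expects.

import os
from PIL import Image

HR_DIR = './data/HR'    # hypothetical path to high-resolution frames
LR_DIR = './data/LR'    # hypothetical output path for x4 LR frames
SCALE = 4

os.makedirs(LR_DIR, exist_ok=True)
for name in sorted(os.listdir(HR_DIR)):
    hr = Image.open(os.path.join(HR_DIR, name))
    # Bicubic x4 downsampling, a common way LR training inputs are generated.
    lr = hr.resize((hr.width // SCALE, hr.height // SCALE), Image.BICUBIC)
    lr.save(os.path.join(LR_DIR, name))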

Visual results

Here are visual results on different datasets. The frame is from Myanmar. [image]

The frame is from calendar. [image]

The frame is from real-world LR videos we captured. [image]

Citation

If you use our code or model in your research, please cite:

@ARTICLE{ddan,
  author={Feng Li and Huihui Bai and Yao Zhao},
  journal={IEEE Transactions on Image Processing},
  title={Learning a Deep Dual Attention Network for Video Super-Resolution},
  year={2020},
  volume={29},
  pages={4474-4488},
  doi={10.1109/TIP.2020.2972118}
 }

Acknowledgements

This code is built on MMCNN (TensorFlow). We thank the authors for sharing their code.