Ali Syed Saqlain1, Li Yun Wang2, & Fang Fang1
1 North China Electric Power University, Beijing
2 Portland State University, USA
In this paper, we introduce an end-to-end generative adversarial network (GAN) based on sparse learning for single image motion deblurring, which we call SL-CycleGAN. For the first time in image motion deblurring, we propose a sparse ResNet-block as a combination of sparse convolution layers and a trainable k-winner spatial pooler based on HTM (Hierarchical Temporal Memory), replacing non-linearities such as ReLU in the ResNet-blocks of the SL-CycleGAN generators. Furthermore, we take inspiration from the domain-to-domain translation ability of CycleGAN and show that image deblurring can be made cycle-consistent while achieving the best qualitative results. Finally, we perform extensive qualitative and quantitative experiments on popular image benchmarks and achieve the highest PSNR of 38.087 dB on the GoPro dataset, 5.377 dB better than the most recent deblurring method.
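To make the proposed sparse ResNet-block concrete, below is a minimal PyTorch sketch, assuming a k-winners-take-all activation over channels as a simplified stand-in for the HTM-style trainable spatial pooler; the channel width, the instance normalization, and the percent_on ratio are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class KWinners2d(nn.Module):
    # Keep the top-k channel activations at each spatial location and
    # zero the rest, yielding a sparse feature map. Boosting and
    # duty-cycle terms of the full HTM spatial pooler are omitted.
    def __init__(self, percent_on: float = 0.25):
        super().__init__()
        self.percent_on = percent_on

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.percent_on * x.shape[1]))
        kth = x.topk(k, dim=1).values[:, -1:, :, :]  # k-th largest value per pixel
        return torch.where(x >= kth, x, torch.zeros_like(x))

class SparseResBlock(nn.Module):
    # Residual block with k-WTA sparsity in place of the usual ReLU,
    # mirroring the idea of the sparse ResNet-block in the generators.
    def __init__(self, channels: int = 256, percent_on: float = 0.25):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            KWinners2d(percent_on),  # sparsity replaces the non-linearity
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # standard residual connection

block = SparseResBlock(channels=64)
print(block(torch.randn(1, 64, 256, 256)).shape)  # (1, 64, 256, 256)

On the cycle-consistency side, the objective presumably follows the standard CycleGAN formulation, with one generator mapping the blurry domain to the sharp domain and the other mapping back, so that a deblurred image translated back reconstructs the original blurry input.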
Deblurring results on GoPro test images
The results of SL-CycleGAN shown in Table I were obtained on 256x256 images.
Deblurring results on Köhler test images using a pre-trained model
Deblurring results on Lai test images via a pre-trained model
Blind deblurring results on images from (Pan et al.) via a pre-trained model. The GT images were not fed to the network; only the blurry inputs were.
Test images