This repository provides the official PyTorch implementation of the following paper, accepted by TPAMI:
Exposure Trajectory Recovery from Motion Blur
Youjian Zhang, Chaoyue Wang, Stephen J. Maybank, Dacheng Tao
Abstract: Motion blur in dynamic scenes is an important yet challenging research topic. Recently, deep learning methods have achieved impressive performance for dynamic scene deblurring. However, the motion information contained in a blurry image has yet to be fully explored and accurately formulated because: (i) the ground truth of dynamic motion is difficult to obtain; (ii) the temporal ordering is destroyed during the exposure; and (iii) the motion estimation from a blurry image is highly ill-posed. By revisiting the principle of camera exposure, motion blur can be described by the relative motions of sharp content with respect to each exposed position. In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image and explain the causes of motion blur. A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image at multiple timepoints. Under mild constraints, our method can recover dense, (non-)linear exposure trajectories, which significantly reduce temporal disorder and ill-posed problems. Finally, experiments demonstrate that the recovered exposure trajectories not only capture accurate and interpretable motion information from a blurry image, but also benefit motion-aware image deblurring and warping-based video extraction tasks.
The prerequisites are as follows:
- gcc-7 and g++-7
- Python 3.6
- PyTorch 1.1.0 + CUDA 10.0
- scikit-image 0.17.2
- opencv-python 4.7.0.72
- ipdb 0.13.13
- dominate 2.7.0
- tensorboardX
- tensorboard (optional, for running the tensorboard web server)
- debugpy (optional, if you need to debug)
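For reference, here is a minimal environment setup sketch. It assumes a conda environment (the environment name is illustrative); the version pins are copied from the list above.

```sh
# Illustrative environment setup; pins taken from the prerequisite list.
conda create -n exposure-trajectory python=3.6 -y
conda activate exposure-trajectory
# PyTorch 1.1.0 built against CUDA 10.0, from the official pytorch channel
conda install pytorch=1.1.0 torchvision cudatoolkit=10.0 -c pytorch
pip install scikit-image==0.17.2 opencv-python==4.7.0.72 ipdb==0.13.13 dominate==2.7.0 tensorboardX
# Optional extras: tensorboard web server and debugging
pip install tensorboard debugpy
```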
You also need to install two modules, DCN_v2 and MS-SSIM. In the './model' directory you will find pytorch-msssim and DCN_v2. Choose the correct version of the DCN_v2 folder and follow the installation instructions of each.
- Install pyssim and pytorch-msssim
- pyssim 0.6 from https://github.com/jterrace/pyssim; install directly with pip install pyssim.
- pytorch-msssim 0.1 from https://github.com/jorge-pessoa/pytorch-msssim.
- Install DCN_v2
Install DCN_v2 from ./model/DCN_v2. This is the pytorch_1.0.0 branch of https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch, with modulated_deform_conv.py modified by the author. Before compiling, switch your gcc and g++ to gcc-7 and g++-7. You can ignore warnings during compilation; at the end you should see a message confirming that the module was installed successfully.
I renamed the original ./model/DCN_v2 from the author to ./model/DCN_v2_origin; it is no longer used.
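A combined install sketch for these modules (it assumes both bundled folders ship a standard setup.py and that gcc-7/g++-7 are already on your PATH; the DCN_v2 build entry point is assumed from the upstream repository and may differ in this branch, so check its README or make.sh):

```sh
pip install pyssim                      # pyssim 0.6 from PyPI
pip install ./model/pytorch-msssim      # assumes a setup.py in the bundled folder
# Point the CUDA extension build at gcc-7/g++-7 before compiling DCN_v2
export CC=gcc-7 CXX=g++-7
cd model/DCN_v2
python setup.py build develop           # or ./make.sh, if the branch provides one
```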
Download the GoPro dataset and align the blurry/sharp image pairs. Organize the dataset in the following form:
|- Gopro_align_data
| |- train % 2103 image pairs
| | |- GOPR0372_07_00_000047.png
| | |- ...
| |- test % 1111 image pairs
| | |- GOPR0384_11_00_000001.png
| | |- ...
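A quick sanity check on the layout (a sketch; it assumes one combined image file per aligned blurry/sharp pair, as shown above):

```sh
ls Gopro_align_data/train | wc -l   # expect 2103
ls Gopro_align_data/test  | wc -l   # expect 1111
```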
- To train the motion offset estimation model, run the following command:
sh run_train.sh
Note that you can set the argument offset_mode to lin, bilin, or quad to constrain the estimated trajectory to be linear, bi-linear, or quadratic, respectively (see the combined sketch after this list).
- To train the deblurring model, run the same command and change the argument blur_direction from "reblur" to "deblur".
- To test the motion offset estimation model, run the following command:
sh run_test.sh
- To test the deblurring model, run the same command and change the argument blur_direction from "reblur" to "deblur".
We provide some examples of our quadratic exposure trajectories and the corresponding reblurred images.
We have put the pretrained quadratic model in the directory ./pretrain_models/MTR_Gopro_quad, and the other models mentioned in the paper will be provided via Google Drive.
| Metric | Zero constraint | Linear | Bi-linear | Quadratic |
|---|---|---|---|---|
| PSNR (dB) | 35.82 | 33.45 | 33.79 | 34.68 |
| SSIM | 0.9800 | 0.9669 | 0.9687 | 0.9740 |
We also provide our pretrained motion-aware deblurring model.