This repository contains code and pretrained models for the HDR version of our paper "A Fast, Scalable, and Reliable Deghosting Method for Extreme Exposure Fusion", accepted at ICCP 2019.
It has been tested on GTX 1080 Ti and RTX 2070 GPUs with TensorFlow 1.13, and contains scripts for both inference and training.
The project was built on Python 3.6.7 and requires the following packages:
affine==2.2.2
matplotlib==3.0.2
numpy==1.16.2
opencv-python==4.0.0.21
Pillow==5.4.1
scikit-image==0.14.2
scikit-learn==0.20.2
scipy==1.2.1
tensorboard==1.13.1
tensorflow-gpu==1.13.1
termcolor==1.1.0
tqdm==4.31.1
Use the script infer.py to perform inference. The script expects:
- A directory containing a set of multi-exposure shots, labeled 1.tif, 2.tif, 3.tif, and a file exposure.txt listing the EV gaps between the images (see the loading sketch after the example command below).
- Pretrained flow, refinement and fusion models.
- The choice of fusion model: tied (works for any number of images) or untied (fixed number of images).
- The image to use as the reference (1st or 2nd).
- The id of the GPU to run the script on.
- To fit everything into a single script, the unofficial PWC-Net implementation included in this repository is used, but you can also precompute flows with any official implementation.
- The script is written for 3 multi-exposure shots but can easily be extended to an arbitrary number of inputs along similar lines.
python infer.py --source_dir ./data_samples/test_set --fusion_model tied --ref_label 2 --gpu 1
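For reference, below is a minimal sketch of how such an input directory can be read. The helper name and the assumption that exposure.txt holds whitespace-separated EV values are ours, not part of the released code; check the samples in data_samples/test_set for the exact format.

```python
import os

import cv2
import numpy as np


def load_exposure_set(source_dir):
    """Load the multi-exposure shots and their EV gaps from a source directory.

    Assumes the layout described above (1.tif, 2.tif, 3.tif plus exposure.txt)
    and that exposure.txt holds whitespace-separated EV values; check
    data_samples/test_set for the exact format shipped with the repository.
    """
    images = []
    for name in ("1.tif", "2.tif", "3.tif"):
        img = cv2.imread(os.path.join(source_dir, name), cv2.IMREAD_UNCHANGED)
        if img is None:
            raise FileNotFoundError(os.path.join(source_dir, name))
        images.append(img)
    with open(os.path.join(source_dir, "exposure.txt")) as f:
        ev_gaps = np.array([float(v) for v in f.read().split()], dtype=np.float32)
    return images, ev_gaps
```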
The refinement training script expects the following arguments:
- train_patch_list : list of training images. Download them from (link to be updated soon). Use a pretrained flow algorithm to precompute flows as numpy files and save them as flow_21.npy and flow_23.npy (see the sketch after this list). Refer to the file data_samples/refine_train.txt and the directory data_samples/refine_data for a sample.
- val_patch_list : list of test images organized similarly.
- logdir : checkpoints and tensorboard visualizations get logged here.
- iters : number of iterations to train the model for.
- image_dim : dimensions of the input patch during training.
- batch_size : batch size during training.
- restore : 0 to start afresh, 1 to load a checkpoint.
- restore_ckpt : path to the checkpoint to load if restore is 1.
- gpu : GPU id of the device to use for training.
Note: use the pretrained refinement model to generate static versions of the training images.
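Any pretrained flow method can be used to produce the flow_21.npy / flow_23.npy files mentioned above. The sketch below is only an illustration: it substitutes OpenCV's Farneback flow for a learned network such as PWC-Net, and it assumes the patches follow the same 1.tif/2.tif/3.tif naming and that flow_21.npy aligns image 1 to the reference image 2; verify both conventions against data_samples/refine_data.

```python
import os

import cv2
import numpy as np


def precompute_flows(patch_dir):
    """Precompute and save optical flow for one training patch directory.

    Only a sketch: OpenCV's Farneback flow stands in for a pretrained flow
    network such as PWC-Net. The naming convention assumed here (flow_21.npy
    aligns image 1 to the reference image 2, flow_23.npy aligns image 3) and
    the 1.tif/2.tif/3.tif patch naming should be checked against
    data_samples/refine_data.
    """
    imgs = [cv2.imread(os.path.join(patch_dir, "%d.tif" % i)) for i in (1, 2, 3)]
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in imgs]

    ref = grays[1]  # image 2 is the reference exposure
    for other, out_name in ((grays[0], "flow_21.npy"), (grays[2], "flow_23.npy")):
        flow = cv2.calcOpticalFlowFarneback(
            ref, other, None,
            pyr_scale=0.5, levels=5, winsize=21,
            iterations=3, poly_n=7, poly_sigma=1.5, flags=0)
        np.save(os.path.join(patch_dir, out_name), flow.astype(np.float32))
```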
The fusion training script expects the following arguments:
- train_patch_idx : list of training images. Download them from here. Refer to the file data_samples/fusion_train.txt and the directory data_samples/fusion_data for a sample.
- test_patch_idx : list of test images.
- fusion_model : choose between untied and tied fusion model.
- logdir : checkpoints and tensorboard visualizations get logged here.
- iters : number of iterations to train the model for.
- lr : initial learning rate.
- image_dim : dimensions of the input patch during training.
- batch_size : batch size during training.
- restore : 0 to start afresh, 1 to load a checkpoint.
- restore_ckpt : path to the checkpoint to load if restore is 1.
- gpu : GPU id of the device to use for training.
- hdr : set to 1 to concatenate the corresponding HDR images with the input LDRs.
- hdr_weight : weight for the MSE loss between tonemapped HDR outputs (see the loss sketch after this list).
- ssim_weight : weight for MS-SSIM loss
- perceptual_weight : weight for the perceptual loss
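For intuition, here is a rough sketch of how such a weighted objective can be assembled in TensorFlow 1.x. The mu-law tonemapping constant and the pluggable perceptual feature extractor are assumptions made for illustration, not necessarily what the released training code does.

```python
import tensorflow as tf

MU = 5000.0  # mu-law tonemapping constant; a common choice, assumed here


def tonemap(hdr):
    """Range-compress linear HDR values before computing pixel losses."""
    return tf.log(1.0 + MU * hdr) / tf.log(1.0 + MU)


def fusion_loss(pred_hdr, gt_hdr, hdr_weight, ssim_weight, perceptual_weight,
                perceptual_features=None):
    """Weighted combination of the loss terms exposed as flags above.

    perceptual_features, if provided, is a callable mapping an image batch to
    feature maps (e.g. from a pretrained VGG); it is an assumption for this
    sketch, not part of the released code.
    """
    pred_tm, gt_tm = tonemap(pred_hdr), tonemap(gt_hdr)

    mse = tf.reduce_mean(tf.square(pred_tm - gt_tm))
    ms_ssim = tf.reduce_mean(tf.image.ssim_multiscale(pred_tm, gt_tm, max_val=1.0))

    loss = hdr_weight * mse + ssim_weight * (1.0 - ms_ssim)
    if perceptual_features is not None:
        loss += perceptual_weight * tf.reduce_mean(
            tf.square(perceptual_features(pred_tm) - perceptual_features(gt_tm)))
    return loss
```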