This repository contains the official implementation of the PSD framework introduced in the following paper:
PSD: Principled Synthetic to Real Dehazing Guided by Physical Priors
Zeyuan Chen, Yangchao Wang, Yang Yang, Dong Liu
CVPR 2021 (Oral)
If you find our work useful in your research, please cite:
@InProceedings{Chen_2021_CVPR,
author = {Chen, Zeyuan and Wang, Yangchao and Yang, Yang and Liu, Dong},
title = {PSD: Principled Synthetic-to-Real Dehazing Guided by Physical Priors},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {7180-7189}
}
Dependencies:
- Python 3.6
- PyTorch 1.3.0
| Model | File size | Download |
|---|---|---|
| PSD-MSBDN | 126M | Google Drive |
| PSD-FFANET | 24M | Google Drive |
| PSD-GCANET | 9M | Google Drive |
Baidu Netdisk link: https://pan.baidu.com/s/1M1RO5AZaYcZtckb-OzfXgw (extraction code: ixcz)
In the paper, all qualitative results and most visual comparisons are produced by the PSD-MSBDN model.
python test.py
- Note that test.py is hard-coded; by default it tests the PSD-FFANET model. To test either of the other two models, you need to modify the code accordingly. See the annotations in test.py; it only takes seconds.
- If the program reports an error when going through A-Net, please make sure your PyTorch version is 1.3.0. You can also work around the problem by resizing the input of A-Net to 512×512, or by deleting A-Net entirely (for testing only). See issue #5 for more information.
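The resize workaround above can be sketched as follows. This is a minimal illustration, not code from this repo: `a_net`, the function name, and the (N, C, H, W) tensor layout are assumptions you should adapt to the actual A-Net interface.

```python
import torch
import torch.nn.functional as F

def predict_atmosphere(a_net, hazy, size=512):
    """Workaround sketch: run A-Net on a fixed 512x512 input.

    `a_net` is a placeholder for the repo's atmospheric-light network;
    `hazy` is a (N, C, H, W) image tensor.
    """
    small = F.interpolate(hazy, size=(size, size),
                          mode="bilinear", align_corners=False)
    return a_net(small)  # atmospheric light estimated on the resized input
```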
As most existing dehazing models are end-to-end, you will need to modify the network to make it physics-based.
To be specific, take GCANet as an example. In its GCANet.py file, the variable y in Line 96 is the final feature map. You should replace the final deconv layer with two branches that output the transmission map and the dehazing result, respectively. Each branch can consist of two simple convolutional layers. In addition, you should also add an A-Net to generate the atmospheric light.
With the modified network, you can run the pre-training phase on synthetic data. In our setting, we use OTS from the RESIDE dataset as the pre-training data.
In main.py, we present the pipeline and loss settings for pre-training PSD-FFANet; you can take it as an example and adapt it to your own model.
Based on our observations, the pre-trained models usually achieve PSNR and SSIM similar to the original models (sometimes with slight drops).
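A physics-based network like this makes a reconstruction loss possible: re-synthesize the hazy input from the predicted J, t, and A via the atmospheric scattering model I = J·t + A·(1 − t) and compare with the real input. The sketch below uses our own naming and an L1 penalty as an illustration; see main.py for the actual loss settings used in the paper.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(hazy, dehazed, trans, atmosphere):
    """Illustrative physics-based reconstruction loss (hypothetical
    naming): rebuild the hazy image with I = J * t + A * (1 - t),
    then take the L1 distance to the observed hazy input."""
    rehazed = dehazed * trans + atmosphere * (1.0 - trans)
    return F.l1_loss(rehazed, hazy)
```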
Starting from a pre-trained model, you can fine-tune it with real-world data in an unsupervised manner. We use RTTS from the RESIDE dataset as our fine-tuning data. For convenience, we also pre-process all hazy images in RTTS with CLAHE.
You can find both RTTS and our pre-processed data at this link (code: wxty). Code for fine-tuning the three provided models is included in finetune.py.