DWT-FFC


Breaking Through the Haze: An Advanced Non-Homogeneous Dehazing Method based on Fast Fourier Convolution and ConvNeXt

This is the official PyTorch implementation of our dehazing method based on the FFC and ConvNeXt.

Winner award (1st place solution) of NTIRE 2023 HR NonHomogeneous Dehazing Challenge (CVPR Workshop 2023).

See more details in [Challenge Report], [Paper], [Certificate].

Environment:

CUDA Version: 11.0

Python 3.8

Dependencies:

torch==1.10.0

torchvision==0.9.0

NVIDIA GPU and CUDA

pytorch_lightning==2.0.0

timm==0.6.12
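The pinned versions above can be sanity-checked before running anything; a minimal sketch using only the standard library (the package names and versions are taken from the dependency list above):

```python
from importlib import metadata

# Pinned versions from the dependency list above.
REQUIREMENTS = {
    "torch": "1.10.0",
    "torchvision": "0.9.0",
    "pytorch_lightning": "2.0.0",
    "timm": "0.6.12",
}

def check_requirements(required, installed_lookup):
    """Compare required versions against an installed-version lookup.

    Returns a dict: name -> (required, installed_or_None, matches).
    """
    report = {}
    for name, want in required.items():
        try:
            have = installed_lookup(name)
        except metadata.PackageNotFoundError:
            have = None
        report[name] = (want, have, have == want)
    return report

if __name__ == "__main__":
    for name, (want, have, ok) in check_requirements(
        REQUIREMENTS, metadata.version
    ).items():
        status = "OK" if ok else "MISMATCH/MISSING"
        print(f"{name}: required {want}, installed {have} -> {status}")
```

`installed_lookup` is injected so the check can be tested without the packages installed; in normal use it is simply `importlib.metadata.version`.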

Pretrained Model

Download the pretrained ConvNeXt model and place it into the folder ./weights.

Our saved Model

Download our saved model for the NTIRE 2023 HR NonHomogeneous Test set and place it into the folder ./weights to reproduce our test result.

Download our saved model for the NTIRE 2023 HR NonHomogeneous Validation set and place it into the folder ./weights to reproduce our validation result.

These weights are the checkpoints that performed best on the official validation and test sets of the NTIRE 2023 dehazing challenge.
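Before running inference it is worth confirming that all downloaded checkpoints actually landed in ./weights. A minimal sketch; the filenames below are placeholders, not the actual checkpoint names, so substitute whatever the downloaded files are called:

```python
from pathlib import Path

# Hypothetical filenames -- replace with the actual names of the
# pretrained ConvNeXt model and the saved challenge checkpoints.
EXPECTED_WEIGHTS = [
    "convnext_pretrained.pth",
    "dwt_ffc_test.pth",
]

def missing_weights(weights_dir, expected):
    """Return the expected checkpoint files not found in weights_dir."""
    root = Path(weights_dir)
    return [name for name in expected if not (root / name).is_file()]

if __name__ == "__main__":
    gaps = missing_weights("weights", EXPECTED_WEIGHTS)
    if gaps:
        print("Missing checkpoints:", ", ".join(gaps))
    else:
        print("All expected checkpoints are in place.")
```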

How to reproduce our results or dehaze your own hazy images

Download the pretrained and saved models above.

Prepare the NTIRE 2023 HR NonHomogeneous Dehazing Challenge Validation set and Test set.

Run test.py and find the results in the folder ./test_result. Please check the hazy image path in test.py (line 12).
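The step above amounts to pointing test.py at a folder of hazy images and collecting the restored images under ./test_result. A minimal sketch of that input/output mapping, using only the standard library; the image-extension list is an assumption:

```python
from pathlib import Path

def plan_outputs(hazy_dir, result_dir="test_result", exts=(".png", ".jpg")):
    """Pair each hazy image in hazy_dir with its output path under result_dir.

    Mirrors the I/O around test.py, where the hazy image path is set at
    line 12; the accepted extensions here are an assumption.
    """
    hazy_dir = Path(hazy_dir)
    result = Path(result_dir)
    pairs = []
    for src in sorted(hazy_dir.iterdir()):
        if src.suffix.lower() in exts:
            pairs.append((src, result / src.name))
    return pairs
```

Output images keep the input filenames, so each dehazed result can be matched back to its hazy source directly.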

More Information about our model and paper

Datasets can be found below:

RESIDE, NH-HAZE, NH-HAZE2, HD-NH-HAZE, and our combined dataset.

If you want to train on your own data, you can use the train.py from DW-GAN, as we adopt a similar training strategy to DW-GAN's.

We did not name our model in the paper; please feel free to use DWT-FFC to refer to our method when comparing against it.

Acknowledgement

We thank the authors of DW-GAN, LaMa, and ConvNeXt. Part of our code is built on their models.

Citation

If you find this repository helpful, please consider citing:

@InProceedings{DWT-FFC_2023_CVPRW,
    author    = {Zhou, Han and Dong, Wei and Liu, Yangyi and Chen, Jun},   
    title     = {Breaking Through the Haze: An Advanced Non-Homogeneous Dehazing Method based on Fast Fourier Convolution and ConvNeXt},  
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},  
    month     = {June},  
    year      = {2023},  
    pages     = {1894-1903}  
}