Official Implementation for "FHDe2Net: Full High Definition Demoireing Network" (ECCV 2020)
- Linux
- Python 2 or 3
- NVIDIA GPU + CUDA and cuDNN (tested with CUDA 8.0)
- Install PyTorch from http://pytorch.org
- Install torchvision from https://github.com/pytorch/vision
- Install the Python packages numpy, scipy, Pillow (imported as PIL), scikit-image (imported as skimage), and visdom; math is part of the Python standard library and needs no install.
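As a quick sanity check, the snippet below probes each dependency and reports anything still missing (torch and torchvision are included on the assumption that the PyTorch install step above has been done):

```python
# Verify that each required package can be imported; prints MISSING for
# anything that still needs installing. PIL is provided by the Pillow
# package and skimage by scikit-image.
import importlib

for pkg in ["numpy", "scipy", "PIL", "skimage", "visdom", "torch", "torchvision"]:
    try:
        importlib.import_module(pkg)
        print(pkg, "OK")
    except ImportError:
        print(pkg, "MISSING")
```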
You can download the training and testing dataset from
https://pan.baidu.com/s/19LTN7unSBAftSpNVs8x9ZQ with password jf2d
or
https://drive.google.com/drive/folders/1IJSeBXepXFpNAvL5OyZ2Y1yu4KPvDxN5?usp=sharing
You can accelerate training by using the subset of FHDMi described in FHDMi_thin.txt.
- Download pre-trained models: you can download the pre-trained models from https://pan.baidu.com/s/14fo4gdBtx4GDohNNyYObpg with password t8vn. All the models should be placed in the ckpt folder.
- Build the testing environment: you can easily set it up with
pip install -r requirements.txt
- Testing: specify --dataroot with the testing dataset path, and run
bash run_test.sh
- Download the VGG-19 checkpoint from https://pan.baidu.com/s/1c3eEh29uAfZTzTe0X9Jz_Q with password zvcy and put it in models/
- Open visdom with
python -m visdom.server -port 8099
- Change the dataroot in run_GDN.sh and train GDN by running
bash run_GDN.sh
- Change the dataroot in run_LRN.sh and train LRN by running
bash run_LRN.sh
(For training LRN, you can either use the distilled dataset generated by rank2_edge_batch.py or use the whole dataset directly. A subset image list for FHDMi has been generated in advance in list_7000_f1000.txt.)
- Change the dataroot in run_FDN_FRN.sh and train FDN and FRN by running
bash run_FDN_FRN.sh
@inproceedings{hefhde2net,
  title={FHDe2Net: Full High Definition Demoireing Network},
  author={He, Bin and Wang, Ce and Shi, Boxin and Duan, Ling-Yu},
  booktitle={European Conference on Computer Vision (ECCV)},
  publisher={Springer},
  year={2020}
}
If you have any questions, please feel free to contact me at 1801213742@pku.edu.cn.