This is the official implementation of LKD-Net.
- Clone our repository.
```shell
git clone https://github.com/SWU-CS-MediaLab/LKD-Net.git
cd LKD-Net
```
- Make conda environment.
```shell
conda create -n LKD python=3.7.0
conda activate LKD
```
- Install dependencies.
```shell
conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=10.2 -c pytorch
pip install -r requirement.txt
```
- Download the datasets (ITS, OTS, and SOTS) from the RESIDE official website.
Organize the datasets with the following file structure:

```
|-- data
    |-- ITS
        |-- hazy
            |-- *.png
        |-- gt
            |-- *.png
    |-- OTS
        |-- hazy
            |-- *.jpg
        |-- gt
            |-- *.jpg
    |-- SOTS
        |-- indoor
            |-- hazy
                |-- *.png
            |-- gt
                |-- *.png
        |-- outdoor
            |-- hazy
                |-- *.jpg
            |-- gt
                |-- *.png
```
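To catch path mistakes before training, the layout above can be verified with a small stdlib-only sketch (the function name `check_layout` and the `EXPECTED` list are our own, derived from the tree above, not part of the repository):

```python
import os

# Expected (dataset, subfolder) pairs, taken from the directory tree above.
EXPECTED = [
    ("ITS", "hazy"), ("ITS", "gt"),
    ("OTS", "hazy"), ("OTS", "gt"),
    (os.path.join("SOTS", "indoor"), "hazy"), (os.path.join("SOTS", "indoor"), "gt"),
    (os.path.join("SOTS", "outdoor"), "hazy"), (os.path.join("SOTS", "outdoor"), "gt"),
]

def check_layout(root):
    """Return the list of missing directories under `root` (empty list = layout OK)."""
    missing = []
    for dataset, split in EXPECTED:
        path = os.path.join(root, dataset, split)
        if not os.path.isdir(path):
            missing.append(path)
    return missing

if __name__ == "__main__":
    problems = check_layout("./data")
    if problems:
        print("Missing directories:")
        for p in problems:
            print("  " + p)
    else:
        print("Dataset layout looks correct.")
```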
Run the following script to train your own model:
```shell
python train.py \
    --model LKD-t \
    --model_name LKD.py \
    --num_workers 8 \
    --save_dir ./result \
    --datasets_dir ./data \
    --train_dataset ITS \
    --valid_dataset SOTS \
    --exp_config indoor \
    --gpu 0 \
    --exp_name train_my_model
```
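The flags above correspond to a standard `argparse` interface; a minimal sketch is shown below (the real definitions live in `train.py`, so the defaults and types here are illustrative assumptions):

```python
import argparse

def build_parser():
    # Mirrors the command-line flags shown in the training example above.
    # Defaults are illustrative; consult train.py for the authoritative values.
    p = argparse.ArgumentParser(description="Train an LKD-Net model")
    p.add_argument("--model", default="LKD-t")
    p.add_argument("--model_name", default="LKD.py")
    p.add_argument("--num_workers", type=int, default=8)
    p.add_argument("--save_dir", default="./result")
    p.add_argument("--datasets_dir", default="./data")
    p.add_argument("--train_dataset", default="ITS")
    p.add_argument("--valid_dataset", default="SOTS")
    p.add_argument("--exp_config", default="indoor")
    p.add_argument("--gpu", default="0")
    p.add_argument("--exp_name", default="train_my_model")
    return p

if __name__ == "__main__":
    print(build_parser().parse_args())
```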
Run the following script to test a trained model:

```shell
python test.py --model (model name) --model_weight (model weight dir) --data_dir (your dataset dir) --save_dir (path to save test result) --dataset (dataset name) --subset (subset name)
```
For example, to test LKD-t on the SOTS indoor set:

```shell
python test.py --model LKD-t --model_weight ./result/RESIDE-IN/LKD-t/LKD-t.pth --data_dir ./data --save_dir ./result --dataset SOTS --subset indoor
```
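Dehazing results on SOTS are conventionally scored with PSNR against the ground-truth images. A stdlib-only sketch for 8-bit images flattened to pixel sequences (the `psnr` helper is our own illustration, not part of `test.py`):

```python
import math

def psnr(ref, out, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length 8-bit pixel sequences."""
    if len(ref) != len(out):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - o) ** 2 for r, o in zip(ref, out)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: a constant error of 1 gray level over 4 pixels (MSE = 1).
print(round(psnr([10, 20, 30, 40], [11, 21, 31, 41]), 2))  # → 48.13
```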
Thanks to Yuda Song et al.; our code borrows heavily from the DehazeFormer implementation.