GSAD

[NeurIPS 2023] Global Structure-Aware Diffusion Process for Low-Light Image Enhancement

ArXiv · NeurIPS23 · Project Page

Get Started

Dependencies and Installation

  • Python 3.8
  • PyTorch 1.11
  1. Create Conda Environment
conda create --name GlobalDiff python=3.8
conda activate GlobalDiff
  2. Install PyTorch
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
  3. Clone Repo
git clone https://github.com/jinnh/GSAD.git
  4. Install Dependencies
cd GSAD
pip install -r requirements.txt
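
After installation, a quick check such as the following (a minimal sketch, not part of the repository) confirms that the CUDA build of PyTorch is visible:

# Environment sanity check (illustrative only, not part of the repository)
import torch
import torchvision
print("torch:", torch.__version__)             # expect 1.11.0+cu113
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())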

Data Preparation

You can refer to the following links to download the datasets.

Then, put them in the following folder:

dataset
├── LOLv1
│   ├── our485
│   │   ├── low
│   │   └── high
│   └── eval15
│       ├── low
│       └── high
└── LOLv2
    ├── Real_captured
    │   ├── Train
    │   └── Test
    └── Synthetic
        ├── Train
        └── Test
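
A small helper like the one below (hypothetical, not included in the repository) can verify this layout before training or testing:

# Hypothetical layout check; the repository does not ship this script.
from pathlib import Path

EXPECTED = [
    "LOLv1/our485/low", "LOLv1/our485/high",
    "LOLv1/eval15/low", "LOLv1/eval15/high",
    "LOLv2/Real_captured/Train", "LOLv2/Real_captured/Test",
    "LOLv2/Synthetic/Train", "LOLv2/Synthetic/Test",
]
root = Path("dataset")
missing = [p for p in EXPECTED if not (root / p).is_dir()]
print("All dataset folders found." if not missing else f"Missing folders: {missing}")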

Testing

Note: Following LLFlow and KinD, we also adjust the brightness of the output image produced by the network based on the mean value of the ground truth (GT). This adjustment does not affect the generated texture details; it is merely a straightforward way to regulate the overall illumination, and it can easily be tuned to user preference in practical applications.
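
Conceptually, the adjustment amounts to rescaling the output so that its mean intensity matches that of the GT, roughly as sketched below (the function name and tensor layout are assumptions, not the repository's exact code):

# Illustrative global-brightness alignment (assumed form, not the repository's exact code)
import torch

def match_gt_brightness(output: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    # output, gt: float tensors in [0, 1]; only the global illumination is rescaled,
    # the generated texture details are left untouched.
    scale = gt.mean() / output.mean().clamp(min=1e-6)
    return (output * scale).clamp(0.0, 1.0)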

Visual results on LOLv1 and LOLv2 can be downloaded from Google Drive.

You can also refer to the following links to download the pretrained models and put them in the following folder:

├── checkpoints
    ├── lolv1_gen.pth
    ├── lolv2_real_gen.pth
    ├── lolv2_syn_gen.pth
# LOLv1
python test.py --dataset ./config/lolv1.yml --config ./config/lolv1_test.json

# LOLv2-real
python test.py --dataset ./config/lolv2_real.yml --config ./config/lolv2_real_test.json

# LOLv2-synthetic
python test.py --dataset ./config/lolv2_syn.yml --config ./config/lolv2_syn_test.json

Testing on unpaired data

python test_unpaired.py  --config config/test_unpaired.json --input unpaired_image_folder

You can use any one of these three pretrained models, and try different sampling steps and noise levels to obtain visually pleasing results by modifying the following terms in 'test_unpaired.json'.

"resume_state": "./checkpoints/lolv2_syn_gen.pth"

"val": {
    "schedule": "linear",
    "n_timestep": 10,
    "linear_start": 2e-3,
    "linear_end": 9e-1
}
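
For intuition, these terms typically parameterize a linear beta (noise) schedule over the sampling steps, roughly as follows (an assumption based on the standard DDPM-style linear schedule rather than the repository's exact code):

# Assumed meaning of the "val" terms: a linear noise schedule with n_timestep steps.
import numpy as np

n_timestep, linear_start, linear_end = 10, 2e-3, 9e-1
betas = np.linspace(linear_start, linear_end, n_timestep, dtype=np.float64)
alphas_cumprod = np.cumprod(1.0 - betas)
print("betas:", np.round(betas, 4))
print("remaining signal after the last step:", round(float(alphas_cumprod[-1]), 6))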

Training

bash train.sh

Note: Pre-trained uncertainty models are available on Google Drive.

Training on the customized dataset

  1. We provide the dataset and training configs for the LOLv1 and LOLv2 benchmarks in the 'config' folder. You can create your own configs for your dataset, and write a dataloader for the customized dataset that feeds batches into 'diffusion.feed_data()' (see the sketch after this list).
./config/customized_dataset.yml # e.g., lolv1.yml
./config/customized_dataset_train.json # e.g., lolv1_train.json
  2. Specify the following terms in 'customized_dataset.yml'.
datasets.train.root # the path of training data
datasets.val.root # the path of testing data
  3. Modify the following config paths in 'train.sh', then run 'train.sh'.
## train uncertainty model
python train.py -uncertainty --config ./config/llie_train_u.json --dataset ./config/customized_dataset.yml 

## train global structure-aware diffusion
python train.py --config ./config/customized_dataset_train.json --dataset ./config/customized_dataset.yml
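
A minimal dataloader sketch is shown below; it assumes paired low/normal-light folders, and the dictionary keys ('LQ', 'GT') are placeholders that should be matched to whatever 'diffusion.feed_data()' actually consumes:

# Hypothetical paired dataset; adapt keys and transforms to diffusion.feed_data().
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedLowLightDataset(Dataset):
    def __init__(self, root):
        self.low_dir = os.path.join(root, "low")
        self.high_dir = os.path.join(root, "high")
        self.names = sorted(os.listdir(self.low_dir))
        self.to_tensor = transforms.ToTensor()  # HWC uint8 -> CHW float in [0, 1]

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        low = self.to_tensor(Image.open(os.path.join(self.low_dir, name)).convert("RGB"))
        high = self.to_tensor(Image.open(os.path.join(self.high_dir, name)).convert("RGB"))
        # Keys below are placeholders; align them with what feed_data() expects.
        return {"LQ": low, "GT": high, "name": name}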

Citation

If you find our work useful for your research, please cite our paper:

@article{hou23global,
  title={Global Structure-Aware Diffusion Process for Low-Light Image Enhancement},
  author={Hou, Jinhui and Zhu, Zhiyu and Hou, Junhui and Liu, Hui and Zeng, Huanqiang and Yuan, Hui},
  journal={Advances in Neural Information Processing Systems},
  year={2023}
}

Acknowledgement

Our code is built upon SR3. Thanks to the contributors for their great work.