
LYT-Net: Lightweight YUV Transformer-based Network for Low-Light Image Enhancement

Ranked #1 on FLOPS(G) (3.49 GFLOPS) and Params(M) (0.045M = 45k Params)

Updates

  • 03.04.2024 Training code re-added and adjusted.
  • 30.01.2024 arXiv pre-print available.
  • 10.01.2024 Pre-trained model weights and code for training and testing are released.

Experiment

1. Create Environment

  • Make Conda Environment
conda create -n LYTNet python=3.10
conda activate LYTNet
  • Install Dependencies
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1
pip install tensorflow==2.10 opencv-python numpy tqdm matplotlib lpips

2. Prepare Datasets

Download the LOLv1 and LOLv2 datasets:

LOLv1 - Google Drive

LOLv2 - Google Drive

Note: Under the main directory, create a folder called data and place the dataset folders inside it.

Datasets should be organized as follows:
  |--data   
  |    |--LOLv1
  |    |    |--Train
  |    |    |    |--input
  |    |    |    |     ...
  |    |    |    |--target
  |    |    |    |     ...
  |    |    |--Test
  |    |    |    |--input
  |    |    |    |     ...
  |    |    |    |--target
  |    |    |    |     ...
  |    |--LOLv2
  |    |    |--Real_captured
  |    |    |    |--Train
  |    |    |    |    |--Low
  |    |    |    |    |     ...
  |    |    |    |    |--Normal
  |    |    |    |    |     ...
  |    |    |    |--Test
  |    |    |    |    |--Low
  |    |    |    |    |     ...
  |    |    |    |    |--Normal
  |    |    |    |    |     ...
  |    |    |--Synthetic
  |    |    |    |--Train
  |    |    |    |    |--Low
  |    |    |    |    |    ...
  |    |    |    |    |--Normal
  |    |    |    |    |    ...
  |    |    |    |--Test
  |    |    |    |    |--Low
  |    |    |    |    |    ...
  |    |    |    |    |--Normal
  |    |    |    |    |    ...
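To sanity-check that the data folder matches the tree above before training or testing, a small script like the following can help. This is a hypothetical helper, not part of this repo; it simply mirrors the expected layout.

```python
from pathlib import Path

# Expected (dataset, split, subfolder) layout, mirroring the tree above.
EXPECTED = {
    "LOLv1": [("Train", "input"), ("Train", "target"),
              ("Test", "input"), ("Test", "target")],
    "LOLv2/Real_captured": [("Train", "Low"), ("Train", "Normal"),
                            ("Test", "Low"), ("Test", "Normal")],
    "LOLv2/Synthetic": [("Train", "Low"), ("Train", "Normal"),
                        ("Test", "Low"), ("Test", "Normal")],
}

def check_layout(root="data"):
    """Return a list of missing folders; an empty list means the layout is OK."""
    missing = []
    for dataset, pairs in EXPECTED.items():
        for split, sub in pairs:
            folder = Path(root) / dataset / split / sub
            if not folder.is_dir():
                missing.append(str(folder))
    return missing

if __name__ == "__main__":
    problems = check_layout()
    print("Layout OK" if not problems else "Missing:\n" + "\n".join(problems))
```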

3. Test

You can test the model using the following commands. Pre-trained weights are available at Google Drive. GT Mean evaluation can be done with the --gtmean argument.

# Test on LOLv1
python main.py --test --dataset LOLv1 --weights pretrained_weights/LOLv1.h5
# Test on LOLv1 using GT Mean
python main.py --test --dataset LOLv1 --weights pretrained_weights/LOLv1.h5 --gtmean

# Test on LOLv2 Real
python main.py --test --dataset LOLv2_Real --weights pretrained_weights/LOLv2_Real.h5
# Test on LOLv2 Real using GT Mean
python main.py --test --dataset LOLv2_Real --weights pretrained_weights/LOLv2_Real.h5 --gtmean

# Test on LOLv2 Synthetic
python main.py --test --dataset LOLv2_Synthetic --weights pretrained_weights/LOLv2_Synthetic.h5
# Test on LOLv2 Synthetic using GT Mean
python main.py --test --dataset LOLv2_Synthetic --weights pretrained_weights/LOLv2_Synthetic.h5 --gtmean
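For reference, "GT Mean" evaluation is the common low-light convention of rescaling the predicted image so its mean brightness matches the ground truth's before computing metrics such as PSNR. A minimal NumPy sketch of the idea (not the repo's exact --gtmean implementation, which may differ, e.g. in clipping or per-channel handling):

```python
import numpy as np

def gt_mean_adjust(pred, gt):
    """Scale the prediction so its mean brightness matches the ground truth's.

    Illustrative sketch of the usual GT-mean trick from the low-light
    literature; images are assumed to be float arrays in [0, 1].
    """
    pred = pred.astype(np.float64)
    scale = gt.mean() / max(pred.mean(), 1e-8)
    # Clipping can shift the mean slightly when the scale pushes values past 1.
    return np.clip(pred * scale, 0.0, 1.0)

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB (assumes a != b)."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```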

4. Compute Complexity

You can check the model's complexity (FLOPS/Params) using the following commands:

# To run FLOPS check with default (1,256,256,3)
python main.py --complexity

# To run FLOPS check with custom (1,H,W,C)
python main.py --complexity --shape '(H,W,C)'
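The --shape value is a Python tuple literal. If you need to parse such a flag in your own scripts, ast.literal_eval is a safe choice; the helper below is shown for illustration and is not taken from main.py, whose actual parsing may differ.

```python
import ast

def parse_shape(text):
    """Parse a shape string such as '(256,256,3)' into a tuple of ints.

    ast.literal_eval only accepts Python literals, so unlike eval() it
    cannot execute arbitrary code from command-line input.
    """
    shape = ast.literal_eval(text)
    if not (isinstance(shape, tuple)
            and all(isinstance(d, int) and d > 0 for d in shape)):
        raise ValueError(f"expected a tuple of positive ints, got {text!r}")
    return shape
```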

5. Train

You can train the model using the following commands:

# Train on LOLv1
python main.py --train --dataset LOLv1

# Train on LOLv2 Real
python main.py --train --dataset LOLv2_Real

# Train on LOLv2 Synthetic
python main.py --train --dataset LOLv2_Synthetic

Citation

Preprint Citation

@article{brateanu2024,
  title={LYT-Net: Lightweight YUV Transformer-based Network for Low-Light Image Enhancement},
  author={Brateanu, Alexandru and Balmez, Raul and Avram, Adrian and Orhei, Ciprian},
  journal={arXiv preprint arXiv:2401.15204},
  year={2024}
}