Image_Adaptive_YOLOv3_demo

Forked from https://github.com/wenyyu/Image-Adaptive-YOLO


Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions

Accepted to AAAI 2022 [arxiv]

Wenyu Liu, Gaofeng Ren, Runsheng Yu, Shi Guo, Jianke Zhu, Lei Zhang

This repo is forked from https://github.com/wenyyu/Image-Adaptive-YOLO and modified to run with TensorFlow 2.0.


Update

The image-adaptive filtering techniques used in the segmentation task can be found in our preprint paper.

"Improving Nighttime Driving-Scene Segmentation via Dual Image-adaptive Learnable Filters". [arxiv]

Installation

$ git clone https://github.com/hsiangling0/Image_Adaptive_YOLO_demo.git
$ cd Image_Adaptive_YOLO_demo 
# Requires Python 3 and TensorFlow 2
$ pip install -r ./docs/requirements.txt

Datasets and Models

PASCAL VOC, RTTS, ExDark
Voc_foggy_test & Voc_dark_test & Models: Google Drive, Baidu Netdisk (key: iayl)

Quick test

# put the checkpoint model in the corresponding directory 
# change the data and model paths in core/config.py
# this command tests the foggy condition; 
# replace it with python evaluate_lowlight.py to test low-light images.
$ python evaluate.py 


Train and Evaluate on the datasets

  1. Download the PASCAL VOC trainval and test data
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar

Extract all of these tars into one directory and rename it so that it has the following basic structure:


VOC           # path:  /home/lwy/work/code/tensorflow-yolov3/data/VOC
├── test
|    └──VOCdevkit
|        └──VOC2007 (from VOCtest_06-Nov-2007.tar)
└── train
     └──VOCdevkit
         ├──VOC2007 (from VOCtrainval_06-Nov-2007.tar)
         └──VOC2012 (from VOCtrainval_11-May-2012.tar)
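
One possible way to unpack the tars into this layout (the VOC root directory name and its location are your choice):

$ mkdir -p VOC/train VOC/test
$ tar -xf VOCtest_06-Nov-2007.tar     -C VOC/test    # -> VOC/test/VOCdevkit/VOC2007
$ tar -xf VOCtrainval_06-Nov-2007.tar -C VOC/train   # -> VOC/train/VOCdevkit/VOC2007
$ tar -xf VOCtrainval_11-May-2012.tar -C VOC/train   # -> VOC/train/VOCdevkit/VOC2012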
                     
$ python scripts/voc_annotation.py
  2. Generate the Voc_foggy_train and Voc_foggy_val datasets offline
# generate foggy training and validation images at ten fog levels
$ python ./core/data_make.py 
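
For orientation, the fog synthesis follows the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)). The sketch below illustrates the idea only; the constants, beta schedule, and radial pseudo-depth are assumptions and may differ from what ./core/data_make.py actually uses.

import math
import numpy as np
import cv2

def add_fog(image, beta, A=0.5):
    """Apply synthetic fog of strength beta to an image with values in [0, 255]."""
    img = image.astype(np.float32) / 255.0
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Radial pseudo-depth that peaks at the image centre, so fog is densest
    # there (a common heuristic when no real depth map is available).
    dist = -0.04 * np.sqrt((ys - h / 2.0) ** 2 + (xs - w / 2.0) ** 2) + math.sqrt(max(h, w))
    t = np.exp(-beta * dist)[..., None]        # transmission map t(x)
    foggy = img * t + A * (1.0 - t)            # I(x) = J(x) t(x) + A (1 - t(x))
    return np.clip(foggy * 255.0, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    clean = cv2.imread("example.jpg")          # illustrative input path
    for level in range(10):                    # ten fog levels, light to heavy
        beta = 0.05 + 0.01 * level             # assumed beta schedule
        cv2.imwrite("example_fog_%d.jpg" % level, add_fog(clean, beta))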
  3. Edit core/config.py to configure the following paths
--vocfog_traindata_dir = '/data/vdd/liuwenyu/data_vocfog/train/JPEGImages/'
--vocfog_valdata_dir   = '/data/vdd/liuwenyu/data_vocfog/val/JPEGImages/'
--train_path           = './data/dataset_fog/voc_norm_train.txt'
--test_path            = './data/dataset_fog/voc_norm_test.txt'
--class_name           = './data/classes/vocfog.names'
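
A hedged sketch of how these entries might look inside core/config.py, which follows the EasyDict style of tensorflow-yolov3; the attribute names below (e.g. TRAIN.VOCFOG_DIR) are placeholders, so map the paths onto whatever names the file actually defines.

from easydict import EasyDict as edict

__C = edict()
cfg = __C

__C.YOLO = edict()
__C.YOLO.CLASSES = './data/classes/vocfog.names'                            # --class_name

__C.TRAIN = edict()
__C.TRAIN.ANNOT_PATH = './data/dataset_fog/voc_norm_train.txt'              # --train_path
__C.TRAIN.VOCFOG_DIR = '/data/vdd/liuwenyu/data_vocfog/train/JPEGImages/'   # --vocfog_traindata_dir

__C.TEST = edict()
__C.TEST.ANNOT_PATH = './data/dataset_fog/voc_norm_test.txt'                # --test_path
__C.TEST.VOCFOG_DIR = '/data/vdd/liuwenyu/data_vocfog/val/JPEGImages/'      # --vocfog_valdata_dir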
  4. Train and Evaluate
$ python train.py # we trained our model from scratch.  
$ python evaluate.py   
$ cd ./experiments/.../mAP && python main.py 
  5. For more details on preparing the dataset or training with your own dataset,
    refer to the tensorflow-yolov3 implementation.
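
For reference, the annotation txt format used by tensorflow-yolov3 is one image per line followed by space-separated boxes as x_min,y_min,x_max,y_max,class_id; the paths and values below are made up:

xxx/images/img_001.jpg 50,100,150,200,0 30,50,210,280,11
xxx/images/img_002.jpg 12,30,256,180,6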

Train and Evaluate on low-light images

The overall process is the same as above; run the *_lowlight.py scripts to train or evaluate.
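
For example (evaluate_lowlight.py is referenced in the quick test above; the training script name is assumed from the same *_lowlight.py pattern):

$ python train_lowlight.py
$ python evaluate_lowlight.py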

Acknowledgments

The code is based on tensorflow-yolov3 and exposure.

Citation

@inproceedings{liu2022imageadaptive,
  title={Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions},
  author={Liu, Wenyu and Ren, Gaofeng and Yu, Runsheng and Guo, Shi and Zhu, Jianke and Zhang, Lei},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2022}
}

@article{liu2022improving,
  title={Improving Nighttime Driving-Scene Segmentation via Dual Image-adaptive Learnable Filters},
  author={Liu, Wenyu and Li, Wentong and Zhu, Jianke and Cui, Miaomiao and Xie, Xuansong and Zhang, Lei},
  journal={arXiv e-prints},
  pages={arXiv--2207},
  year={2022}
}