SJDL-Vehicle: Semi-supervised Joint Defogging Learning for Foggy Vehicle Re-identification--AAAI2022

This is the official implementation of the paper "SJDL-Vehicle: Semi-supervised Joint Defogging Learning for Foggy Vehicle Re-identification".

Final version updated on 2022/08/26!

Following a hardware upgrade, the code has been updated to work with newer versions of PyTorch, and some additional data is provided:

  1. The final version has been re-edited, and a conda environment file is provided.
  2. The synthesis pipeline for FVRID_syn relied on a depth-estimation model whose authors have since changed the released weights. We therefore provide the depth maps of the FVRID_syn dataset, which you can use to synthesize the whole dataset. Please find the details at https://github.com/Cihsaing/SJDL-Foggy-Vehicle-Re-Identification--AAAI2022/tree/master/Datasets/Preprocessing.
  3. The annotation file ./Datasets/Preprocessing/1_Split_Dataset/FVRID_Label.csv has been updated.

Abstract:

Vehicle re-identification (ReID) has attracted considerable attention in computer vision. Although several methods have been proposed to achieve state-of-the-art performance on this topic, re-identifying vehicles in foggy scenes remains a great challenge due to the degradation of visibility. To our knowledge, this problem has not been well addressed so far. In this paper, we propose a novel training framework called Semi-supervised Joint Defogging Learning (SJDL) to address it. First, the fog removal branch and the re-identification branch are integrated to perform simultaneous training. With this collaborative training scheme, the defogged features generated by the defogging branch from input images can be shared to learn better representations for the re-identification branch. However, since fog-free counterparts of real-world data are intractable to obtain, this architecture can only be trained on synthetic data, which may cause a domain gap between real-world and synthetic scenarios. To solve this problem, we design a semi-supervised defogging training scheme that trains on the two kinds of data alternately in each iteration. Due to the lack of a dataset specialized for vehicle ReID in foggy weather, we construct a dataset called FVRID, which consists of real-world and synthetic foggy images, to train and evaluate performance. Experimental results show that the proposed method is effective and outperforms other existing vehicle ReID methods in foggy weather.
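The alternating semi-supervised scheme described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual training loop; the loader contents and function name are placeholders:

```python
def alternating_batches(synthetic_loader, real_loader):
    """Alternate synthetic and real batches in each iteration, as in the
    semi-supervised defogging training scheme: synthetic batches carry
    fog-free ground truth for supervised defogging, real batches do not."""
    for syn_batch, real_batch in zip(synthetic_loader, real_loader):
        yield syn_batch, True    # supervised step (ground truth available)
        yield real_batch, False  # semi-supervised step on real foggy data

# Toy usage with placeholder "batches":
schedule = list(alternating_batches(["syn0", "syn1"], ["real0", "real1"]))
```

In the paper's framework each of these alternating steps updates the shared defogging branch, so supervision from synthetic pairs and the statistics of real fog both shape the same features.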

You can also refer to our previous work on other low-level vision applications!

Desnowing-[JSTASR](ECCV'20) and [HDCWNet](ICCV'21)
Dehazing-[PMS-Net](CVPR'19) and [PMHLD](TIP'20)
Image Relighting-[MB-Net] (NTIRE'21 1st place solution) and [S3Net] (NTIRE'21 3rd place solution)

Network Architecture

Joint Defogging Learning (JDL)

[figure: JDL architecture]

Semi-supervised Joint Defogging Learning (SJDL)

[figure: SJDL architecture]

Dataset

Both synthetic data and real-world data are adopted in this paper:


Example of synthetic data:
[figure: synthetic foggy examples]

Example of real-world data:
[figure: real-world foggy examples]

Results

[figure: quantitative comparison]

Setup and environment

To run our method, you need:

  1. Python 3.10
  2. pytorch 1.8.0+
  3. torchvision 0.13.0+
  4. yacs
  5. tqdm

Following a hardware upgrade (2022/08/26), the environment parameters were updated for the RTX 3090. The final environment can be reproduced with conda env create -f environment.yml.

Data Preparation

Due to the policy of Veri-1M, we can only provide the code to synthesize the foggy data and the indices of the real-world foggy data. Please follow the steps in Data Preparation to generate the data.
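For reference, foggy images are commonly synthesized from a clear image and its depth map with the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), where the transmission is t(x) = exp(−β·d(x)). The sketch below illustrates that model only; the actual β and atmospheric-light values used for FVRID_syn are defined by the preprocessing scripts, and the defaults here are illustrative:

```python
import numpy as np

def synthesize_fog(clear, depth, beta=1.0, airlight=0.9):
    """Apply the atmospheric scattering model to a clear image.

    clear:    float array in [0, 1], shape (H, W, 3)
    depth:    float array, shape (H, W), larger = farther
    beta:     fog density (illustrative default)
    airlight: global atmospheric light A (illustrative default)
    """
    t = np.exp(-beta * depth)[..., None]     # transmission map, (H, W, 1)
    return clear * t + airlight * (1.0 - t)  # I = J*t + A*(1 - t)

# Toy usage: a 2x2 "image" with zero depth everywhere (no fog added)
img = np.full((2, 2, 3), 0.5)
foggy = synthesize_fog(img, np.zeros((2, 2)))
```

As depth grows, t approaches 0 and every pixel fades toward the airlight value, which is why distant vehicles lose contrast first.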

Train SJDL

Run the following command to train the SJDL model:

cd SJDL/
CUDA_VISIBLE_DEVICES=0 python trainer.py -c configs/FVRID_syn.yml MODEL.NAME "resnet50" TEST.VIS True OUTPUT_DIR "./output/SJDL/" MODEL.TENSORBOARDX False

where CUDA_VISIBLE_DEVICES selects the usable GPU.
where configs/FVRID_syn.yml is the default SJDL training config.
where MODEL.NAME selects the backbone, e.g. 'resnet50', 'resnet101', ...
where TEST.VIS enables plotting of restoration results.
where OUTPUT_DIR defines the output path.
where MODEL.TENSORBOARDX enables TensorBoard logging.
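The trailing KEY VALUE pairs in the training command are yacs-style overrides merged on top of the config file. A rough pure-Python illustration of those semantics (yacs's real CfgNode.merge_from_list also type-checks values against the existing config; this helper is a simplified stand-in):

```python
def merge_overrides(cfg, opts):
    """Merge a flat [key, value, key, value, ...] list into a nested dict,
    mimicking yacs-style dotted-key overrides such as MODEL.NAME."""
    assert len(opts) % 2 == 0, "overrides must come in KEY VALUE pairs"
    for key, value in zip(opts[0::2], opts[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for p in parents:                 # walk/create nested sections
            node = node.setdefault(p, {})
        node[leaf] = value                # set the leaf value
    return cfg

# Toy usage mirroring the command line above:
cfg = {"MODEL": {"NAME": "resnet101"}, "OUTPUT_DIR": "./output/"}
merge_overrides(cfg, ["MODEL.NAME", "resnet50", "TEST.VIS", "True"])
```

This is why the overrides must appear after the -c config argument: the file is loaded first, then each dotted key replaces the corresponding entry.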

Common problems

  1. If GPU memory is insufficient, please lower DATALOADER.NUM_WORKERS and SOLVER.IMS_PER_BATCH in configs/FVRID_syn.yml (SOLVER.IMS_PER_BATCH must be a multiple of DATALOADER.NUM_INSTANCE).
  2. If you encounter an AMP conflict, there are two possibilities: a torch version problem, or a device without mixed-precision support. If your device does not support it, keep SOLVER.FP16 = False in configs/FVRID_syn.yml.
  3. If you encounter a torch.fft problem at loss.py line 242, please check your torch version and use the corresponding FFT API.
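Regarding item 3: torch's FFT interface changed around these releases; the old torch.rfft function was deprecated in torch 1.7 and removed in 1.8 in favor of the torch.fft module (e.g. torch.fft.rfft2). A small version check like the following can pick the right call path (the helper names are illustrative, not from the repo):

```python
def torch_version_tuple(version_string):
    """Parse a torch version string like '1.8.0+cu111' into (1, 8, 0),
    ignoring local build suffixes such as '+cu111'."""
    base = version_string.split("+")[0]
    return tuple(int(p) for p in base.split(".")[:3])

def uses_new_fft_api(version_string):
    """torch >= 1.8.0 no longer has torch.rfft; use torch.fft.rfft2 instead."""
    return torch_version_tuple(version_string) >= (1, 8, 0)
```

In practice you would branch on `uses_new_fft_api(torch.__version__)` (or simply `hasattr(torch.fft, "rfft2")`) and call the matching function.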

Pretrained Models

We provide the pretrained SJDL model, trained on FVRID, for your convenience. You can download it from the following link: https://drive.google.com/file/d/1WhsvYQP-qg1R-BcpH5lonjxh4DYp2ouv/view?usp=sharing

Testing

cd SJDL/
CUDA_VISIBLE_DEVICES=0 python inference.py -t -c <Configs> TEST.WEIGHT <PTH_PATH> OUTPUT_DIR <OUTPUT_PATH>

where <Configs> is the testing config file.
where <PTH_PATH> is the weight file to test.
where <OUTPUT_PATH> is the output path.

The pretrained model can be downloaded from:
https://drive.google.com/file/d/1WhsvYQP-qg1R-BcpH5lonjxh4DYp2ouv/view?usp=sharing
and placed in the directory './SJDL/output/'.

Examples

cd SJDL/
# For FVRID_real
CUDA_VISIBLE_DEVICES=0 python inference.py -t -c ./configs/FVRID_real.yml TEST.WEIGHT ./output/best.pth OUTPUT_DIR ./output/Test_on_FVRID_real/
# For FVRID_syn
CUDA_VISIBLE_DEVICES=0 python inference.py -t -c ./configs/FVRID_syn.yml TEST.WEIGHT ./output/best.pth OUTPUT_DIR ./output/Test_on_FVRID_syn/

You can also create a new config in the './configs' directory for other applications. If you train your own weights, please change TEST.WEIGHT accordingly.

Citations

Please cite this paper in your publications if it is helpful for your research:

Bibtex:

@inproceedings{chen2021all,
  title={SJDL-Vehicle: Semi-supervised Joint Defogging Learning for Foggy Vehicle Re-identification},
  author={Chen, Wei-Ting and Chen, I-Hsiang and Yeh, Chih-Yuan and Yang, Hao-Hsiang and Ding, Jian-Jiun and Kuo, Sy-Yen},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2022},
}