
FIFO: Learning Fog-invariant Features for Foggy Scene Segmentation

This repo is the official implementation of the [CVPR 2022 Oral, Best Paper Finalist] paper: "FIFO: Learning Fog-invariant Features for Foggy Scene Segmentation".

FIFO: Learning Fog-invariant Features for Foggy Scene Segmentation
Sohyun Lee¹, Taeyoung Son², Suha Kwak¹
¹POSTECH, ²NALBI
Accepted to CVPR 2022 as an oral presentation

[Figure: overall architecture of FIFO]

Overview

Robust visual recognition under adverse weather conditions is of great importance in real-world applications. In this context, we propose a new method for learning semantic segmentation models that are robust against fog. Its key idea is to consider the fog condition of an image as its style and to close the gap between images with different fog conditions in the neural style space of a segmentation model. In particular, since the neural style of an image is in general affected by factors other than fog as well, we introduce a fog-pass filter module that learns to extract a fog-relevant factor from the style. Alternately optimizing the fog-pass filter and the segmentation model gradually closes the style gap between different fog conditions and, as a result, lets the model learn fog-invariant features. Our method substantially outperforms previous work on three real foggy image datasets. Moreover, it improves performance on both foggy and clear-weather images, whereas existing methods often degrade performance on clear scenes.
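The sketch below is only a rough illustration of this idea, not the repository's implementation: layer sizes, the Gram-matrix flattening, and the matching loss are simplified assumptions. It treats the Gram matrix of a backbone feature map as the image's style, maps it to a fog factor with a toy fog-pass filter, and measures the style gap between a clear image and its foggy counterpart, which the segmentation model would be trained to minimize.

import torch
import torch.nn as nn

def gram_matrix(feat):
    # feat: (B, C, H, W) feature map from a segmentation backbone layer.
    # The channel-correlation (Gram) matrix serves as the "style" of the image.
    b, c, h, w = feat.size()
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

class FogPassFilter(nn.Module):
    # Toy fog-pass filter: maps a flattened Gram matrix to a fog factor.
    # Layer sizes are illustrative, not the paper's.
    def __init__(self, channels, factor_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(channels * channels, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, factor_dim),
        )

    def forward(self, gram):
        return self.net(gram.flatten(1))

# Alternating objectives (conceptual):
#  1) the filter is trained so that fog factors separate fog conditions;
#  2) the segmentation model is trained so that the fog factors of a clear
#     image and its foggy counterpart match, closing the style gap.
feat_clear = torch.randn(2, 64, 32, 32)  # stand-in backbone features
feat_foggy = torch.randn(2, 64, 32, 32)
fog_pass_filter = FogPassFilter(channels=64)
factor_clear = fog_pass_filter(gram_matrix(feat_clear))
factor_foggy = fog_pass_filter(gram_matrix(feat_foggy))
fog_style_matching_loss = ((factor_clear - factor_foggy) ** 2).mean()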

Citation

If you find our code or paper useful, please consider citing our paper:

@inproceedings{lee2022fifo,
  author    = {Sohyun Lee and Taeyoung Son and Suha Kwak},
  title     = {FIFO: Learning Fog-invariant Features for Foggy Scene Segmentation},
  booktitle = {Proceedings of the {IEEE/CVF} Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}

Experimental Results

[Figure: qualitative results on real foggy datasets]

Dataset

  • Cityscapes: Download the Cityscapes Dataset, and put it in the /root/data1/Cityscapes folder

  • Foggy Cityscapes: Download the Foggy Cityscapes Dataset, and put it in the /root/data1/leftImg8bit_foggyDBF folder

  • Foggy Zurich: Download the Foggy Zurich Dataset, and put it in the /root/data1/Foggy_Zurich folder

  • Foggy Driving and Foggy Driving Dense: Download the Foggy Driving Dataset, and put it in the /root/data1/Foggy_Driving folder
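A quick, optional sanity check that the datasets sit where the training scripts expect them; the roots simply mirror the paths listed above.

import os

# Dataset roots as listed above; adjust if your data lives elsewhere.
DATA_ROOTS = {
    'Cityscapes': '/root/data1/Cityscapes',
    'Foggy Cityscapes': '/root/data1/leftImg8bit_foggyDBF',
    'Foggy Zurich': '/root/data1/Foggy_Zurich',
    'Foggy Driving (+ Dense)': '/root/data1/Foggy_Driving',
}

for name, root in DATA_ROOTS.items():
    status = 'found' if os.path.isdir(root) else 'MISSING'
    print(f'{name:25s} {root:40s} {status}')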

Installation

This repository was developed and tested on

  • Ubuntu 16.04
  • Conda 4.9.2
  • CUDA 11.4
  • Python 3.7.7
  • PyTorch 1.5.0

Environment Setup

  • The required environment is specified in the 'FIFO.yaml' file
  • Clone this repo
~$ git clone https://github.com/sohyun-l/fifo
~$ cd fifo
~/fifo$ conda env create --file FIFO.yaml
~/fifo$ conda activate fifo

Pretrained Models

The commands below refer to the pretrained segmentation model and fog-pass filter checkpoints through the following placeholders; substitute the paths of your downloaded files:

PRETRAINED_SEG_MODEL_PATH = './Cityscapes_pretrained_model.pth'

PRETRAINED_FILTER_PATH = './FogPassFilter_pretrained.pth'

Testing

BEST_MODEL_PATH = './FIFO_final_model.pth'

Evaluating the FIFO model

(fifo) ~/fifo$ python evaluate.py --file-name 'FIFO_model' --restore-from BEST_MODEL_PATH

Training

Pretraining the fog-pass filtering module

(fifo) ~/fifo$ python main.py --file-name 'fog_pass_filtering_module' --restore-from PRETRAINED_SEG_MODEL_PATH --modeltrain 'no'

Training FIFO

(fifo) ~/fifo$ python main.py --file-name 'FIFO_model' --restore-from PRETRAINED_SEG_MODEL_PATH --restore-from-fogpass PRETRAINED_FILTER_PATH --modeltrain 'train'

Acknowledgments

Our code is based on AdaptSegNet, RefineNet-lw, and pytorch-metric-learning. We also thank Christos Sakaridis for sharing the datasets and the code of CMAda. If you use our model, please consider citing these works as well.