DSRNSS_codes

[IROS 2024] Official code for Towards Dynamic and Small Objects Refinement for Unsupervised Domain Adaptive Nighttime Segmentation.


Towards Dynamic and Small Objects Refinement for Unsupervised Domain Adaptive Nighttime Segmentation

Paper

This repository provides the official code for Towards Dynamic and Small Objects Refinement for Unsupervised Domain Adaptive Nighttime Segmentation. The code is organized using PyTorch Lightning.

Abstract

Nighttime semantic segmentation is essential for various applications, e.g., autonomous driving, but often faces challenges due to poor illumination and the lack of well-annotated datasets. Unsupervised domain adaptation (UDA) has shown potential for addressing these challenges and has achieved remarkable results for nighttime semantic segmentation. However, existing methods still face limitations in 1) their reliance on style transfer or relighting models, which struggle to generalize to complex nighttime environments, and 2) their neglect of dynamic and small objects like vehicles and traffic signs, which are difficult to learn directly from other domains. This paper proposes a novel UDA method that refines both label and feature levels for dynamic and small objects for nighttime semantic segmentation. First, we propose a dynamic and small object refinement module to complement the knowledge of dynamic and small objects from the source domain to the target nighttime domain. These dynamic and small objects are normally context-inconsistent in under-exposed conditions. Then, we design a feature prototype alignment module to reduce the domain gap by deploying contrastive learning between features and prototypes of the same class from different domains, while re-weighting the categories of dynamic and small objects. Extensive experiments on four benchmark datasets demonstrate that our method outperforms prior art by a large margin for nighttime segmentation. Project page: https://rorisis.github.io/DSRNSS/.
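
The feature prototype alignment module described above amounts to a contrastive objective between pixel features of one domain and class prototypes of the other, with dynamic and small categories up-weighted. Below is a minimal, illustrative PyTorch sketch of such a prototype-feature contrastive loss; all names and hyperparameters are hypothetical and this is not the repository's actual implementation.

import torch
import torch.nn.functional as F

def prototype_contrastive_loss(features, labels, prototypes,
                               temperature=0.1, class_weights=None):
    # features:   (N, C) pixel embeddings from one domain
    # labels:     (N,)   class ids in [0, K), e.g. pseudo-labels
    # prototypes: (K, C) per-class prototypes from the other domain
    # class_weights: optional (K,) tensor, e.g. larger for small/dynamic classes
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = features @ prototypes.t() / temperature  # (N, K) cosine similarities
    # InfoNCE-style objective: the prototype of a pixel's own class is the
    # positive, all other prototypes act as negatives
    return F.cross_entropy(logits, labels, weight=class_weights)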

Usage

Requirements

The code runs with Python 3.8.13. To install the required packages, use:

pip install -r requirements.txt

Set Data Directory

The following environment variable must be set:

export DATA_DIR=/path/to/data/dir
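
At runtime, the datasets are resolved relative to this directory. As a minimal illustration of the lookup (hypothetical, not the repository's code):

import os
from pathlib import Path

data_dir = Path(os.environ["DATA_DIR"])  # raises KeyError if the variable is unset
cityscapes_root = data_dir / "Cityscapes"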

Download the Data

Before running the code, download and extract the corresponding datasets to the directory $DATA_DIR.

UDA

Cityscapes

Download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to $DATA_DIR/Cityscapes.

$DATA_DIR
├── Cityscapes
│   ├── leftImg8bit
│   │   ├── train
│   │   ├── val
│   ├── gtFine
│   │   ├── train
│   │   ├── val
├── ...

Afterwards, run the preparation script:

python tools/convert_cityscapes.py $DATA_DIR/Cityscapes
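
The script maps the raw Cityscapes labelIds annotations to the 19-class trainIds used for training and evaluation. As a rough sketch of what such a conversion does (the actual tools/convert_cityscapes.py may differ):

import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import labels  # official id/trainId table

id_to_trainid = {label.id: label.trainId for label in labels}

def convert_label(label_path, out_path):
    arr = np.array(Image.open(label_path), dtype=np.uint8)
    out = np.full_like(arr, 255)  # 255 marks pixels ignored during training
    for lid, tid in id_to_trainid.items():
        if tid not in (-1, 255):  # skip classes without a trainId
            out[arr == lid] = tid
    Image.fromarray(out).save(out_path)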

Dark Zurich

Download Dark_Zurich_train_anon.zip, Dark_Zurich_val_anon.zip, and Dark_Zurich_test_anon_withoutGt.zip from here and extract them to $DATA_DIR/DarkZurich.

$DATA_DIR
├── DarkZurich
│   ├── rgb_anon
│   │   ├── train
│   │   ├── val
│   │   ├── val_ref
│   │   ├── test
│   │   ├── test_ref
│   ├── gt
│   │   ├── val
├── ...

Nighttime Driving

Download NighttimeDrivingTest.zip from here and extract it to $DATA_DIR/NighttimeDrivingTest.

$DATA_DIR
├── NighttimeDrivingTest
│   ├── leftImg8bit
│   │   ├── test
│   ├── gtCoarse_daytime_trainvaltest
│   │   ├── test
├── ...

BDD100k-night

Download 10k Images and Segmentation from here and extract them to $DATA_DIR/bdd100k.

$DATA_DIR
├── bdd100k
│   ├── images
│   │   ├── 10k
│   ├── labels
│   │   ├── sem_seg
├── ...

ACDC

Download rgb_anon_trainvaltest.zip and gt_trainval.zip from here and extract them to $DATA_DIR/ACDC.

$DATA_DIR
├── ACDC
│   ├── rgb_anon
│   │   ├── fog
│   │   ├── night
│   │   ├── rain
│   │   ├── snow
│   ├── gt
│   │   ├── fog
│   │   ├── night
│   │   ├── rain
│   │   ├── snow
├── ...

DSRNSS Training

Make sure to first download the trained UAWarpC model from the link provided here. Then enter the path to the UAWarpC model under model.init_args.alignment_head.init_args.pretrained in the config file you intend to run (or simply save the model to ./pretrained_models/).
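
For reference, the relevant part of the config might then look as follows (the checkpoint filename below is a placeholder):

model:
  init_args:
    alignment_head:
      init_args:
        pretrained: ./pretrained_models/uawarpc.ckpt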

To train DSRNSS on Dark Zurich (single GPU, with AMP), use the following command:

python tools/run.py fit --config configs/cityscapes_darkzurich/dsrnss_hrda.yaml --trainer.gpus 1 --trainer.precision 16

To train with other backbones, use the corresponding config files, as illustrated below.
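
For example, a DAFormer-based run would only swap the config file (the filename here is hypothetical; use the config that ships with the repository):

python tools/run.py fit --config configs/cityscapes_darkzurich/dsrnss_daformer.yaml --trainer.gpus 1 --trainer.precision 16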

DSRNSS Testing

As mentioned in the previous section, make sure the UAWarpC model path is set in the config file. To evaluate DSRNSS, e.g., on the Dark Zurich validation set, use the following command:

python tools/run.py test --config configs/cityscapes_darkzurich/dsrnss_hrda.yaml --ckpt_path /path/to/trained/model --trainer.gpus 1

The results can be obtained from TensorBoard:

tensorboard --logdir lightning_logs/version_x

DSRNSS Predicting

To generate predictions, e.g., for the Dark Zurich test set, use the following command:

python tools/run.py predict --config configs/cityscapes_darkzurich/dsrnss_hrda.yaml --ckpt_path /path/to/trained/model --trainer.gpus 1

To obtain test set scores for Dark Zurich, submit the predictions to the respective evaluation server: DarkZurich.

We also provide pretrained models, which can be downloaded from the link here. To evaluate them, simply pass the downloaded checkpoint as the --ckpt_path argument.

Citation

If you find this code useful in your research, please consider citing the paper:

@article{pan2023towards,
  title={Towards Dynamic and Small Objects Refinement for Unsupervised Domain Adaptive Nighttime Semantic Segmentation},
  author={Pan, Jingyi and Li, Sihang and Chen, Yucheng and Zhu, Jinjing and Wang, Lin},
  journal={arXiv preprint arXiv:2310.04747},
  year={2023}
}

Credit

The pretrained backbone weights and code are from MMSegmentation. DAFormer code is from the original repo. Geometric matching code is from this repo. Refign code is from this repo. Local correlation CUDA code is from this repo.