RIPU

[CVPR 2022] Official Implementation of Towards Fewer Annotations: Active Learning via Region Impurity and Prediction Uncertainty for Domain Adaptive Semantic Segmentation https://arxiv.org/abs/2111.12940


Region Impurity and Prediction Uncertainty (CVPR 2022 Oral Presentation)

by Binhui Xie, Longhui Yuan, Shuang Li, Chi Harold Liu and Xinjing Cheng


🥳 We are happy to announce that RIPU was accepted to CVPR 2022 as an oral presentation.

Overview

We propose a simple region-based active learning approach for semantic segmentation under a domain shift, aiming to automatically query a small partition of image regions to be labeled while maximizing segmentation performance.

Our algorithm, Region Impurity and Prediction Uncertainty (RIPU), introduces a new acquisition strategy characterizing the spatial adjacency of image regions along with the prediction confidence. We show that the proposed region-based selection strategy makes more efficient use of a limited budget than image-based or point-based counterparts.
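As a rough illustration of the two quantities named above (a sketch, not the repository's implementation), region impurity can be read as the class-mixing entropy of the hard predictions inside a small square window around each pixel, and prediction uncertainty as the per-pixel softmax entropy. The window radius `k` and the combination by product are assumptions for this sketch:

```python
import torch
import torch.nn.functional as F

def ripu_score(logits, k=1, eps=1e-6):
    """Toy per-pixel acquisition score for a single image.

    logits: (C, H, W) raw network outputs.
    k: radius of the square region; window size is (2k+1) x (2k+1).
    Returns an (H, W) score = region impurity * prediction uncertainty.
    """
    C, H, W = logits.shape
    prob = logits.softmax(dim=0)                               # (C, H, W)

    # Prediction uncertainty: entropy of the softmax at each pixel.
    uncertainty = -(prob * (prob + eps).log()).sum(dim=0)      # (H, W)

    # Region impurity: entropy of the hard-label class fractions
    # inside the (2k+1) x (2k+1) window centered on each pixel.
    hard = prob.argmax(dim=0)                                  # (H, W)
    onehot = F.one_hot(hard, C).permute(2, 0, 1).float().unsqueeze(0)
    fractions = F.avg_pool2d(onehot, kernel_size=2 * k + 1,
                             stride=1, padding=k)              # (1, C, H, W)
    impurity = -(fractions * (fractions + eps).log()).sum(dim=1).squeeze(0)

    return impurity * uncertainty
```

Pixels whose neighborhoods mix many predicted classes and whose predictions are uncertain receive the highest scores, which matches the intuition of querying regions near object boundaries.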


We show some qualitative examples from the Cityscapes validation set, and also visualize the queried regions to annotate.

For more information on RIPU, please check our paper: https://arxiv.org/abs/2111.12940.

Citation

If you find this project useful in your research, please consider citing:

@InProceedings{xie2022towards,
  author    = {Binhui Xie and Longhui Yuan and Shuang Li and Chi Harold Liu and Xinjing Cheng},
  title     = {Towards Fewer Annotations: Active Learning via Region Impurity and Prediction Uncertainty for Domain Adaptive Semantic Segmentation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}

Prerequisites

  • Python 3.7
  • PyTorch 1.7.1
  • torchvision 0.8.2

Step-by-step installation

conda create --name ADASeg -y python=3.7
conda activate ADASeg

# this installs the right pip and dependencies for the fresh python
conda install -y ipython pip

# this installs required packages
pip install -r requirements.txt

Data Preparation

The data folder should be structured as follows:

├── datasets/
│   ├── cityscapes/
│   │   ├── gtFine/
│   │   └── leftImg8bit/
│   ├── gtav/
│   │   ├── images/
│   │   ├── labels/
│   │   └── gtav_label_info.p
│   └── synthia/
│       ├── RAND_CITYSCAPES/
│       └── synthia_label_info.p

Symlink the required datasets:

ln -s /path_to_cityscapes_dataset datasets/cityscapes
ln -s /path_to_gtav_dataset datasets/gtav
ln -s /path_to_synthia_dataset datasets/synthia
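After creating the symlinks, a quick sanity check can confirm the expected layout before training (a minimal sketch; the sub-folder names are taken from the tree above, and `check_datasets` is a hypothetical helper, not part of the repository):

```python
from pathlib import Path

# Expected sub-folders under datasets/, per the tree shown above.
EXPECTED = {
    "cityscapes": ["gtFine", "leftImg8bit"],
    "gtav": ["images", "labels"],
    "synthia": ["RAND_CITYSCAPES"],
}

def check_datasets(root="datasets"):
    """Return the list of expected dataset folders that are missing."""
    missing = []
    for name, subdirs in EXPECTED.items():
        for sub in subdirs:
            path = Path(root) / name / sub
            if not path.is_dir():
                missing.append(str(path))
    return missing
```

An empty return value means the symlinks point at correctly structured dataset roots.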

Generate the label statistics files for the GTAV/SYNTHIA datasets by running:

python datasets/generate_gtav_label_info.py -d datasets/gtav -o datasets/gtav/
python datasets/generate_synthia_label_info.py -d datasets/synthia -o datasets/synthia/
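The exact contents of the generated `*_label_info.p` files are defined by the scripts above; as a rough, hypothetical illustration of the idea (per-image class statistics pickled to disk, which is an assumption about the format, not the repository's exact layout):

```python
import pickle
import numpy as np

def build_label_info(label_maps):
    """Hypothetical sketch: map each label image to the pixel count
    of every class that appears in it."""
    info = {}
    for name, label in label_maps.items():
        classes, counts = np.unique(label, return_counts=True)
        info[name] = dict(zip(classes.tolist(), counts.tolist()))
    return info

# Round-trip through pickle, as the generation scripts presumably do.
fake_labels = {"00001.png": np.array([[0, 0, 1], [2, 2, 2]])}
blob = pickle.dumps(build_label_info(fake_labels))
restored = pickle.loads(blob)
```

Statistics like these let a sampler know in advance which classes each source image contains, without re-reading every label map.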

Train

We provide training scripts in scripts/ for a single GPU.

# training for GTAV to Cityscapes
sh gtav_to_cityscapes.sh

# training for SYNTHIA to Cityscapes
sh synthia_to_cityscapes.sh

Evaluate

python test.py -cfg configs/gtav/deeplabv3plus_r101_RA.yaml resume results/v3plus_gtav_ra_5.0_precent/model_iter040000.pth OUTPUT_DIR results/v3plus_gtav_ra_5.0_precent

Acknowledgements

This project is based on several open-source projects; we thank their authors for making the source code publicly available.

Contact

If you have any problems with our code, feel free to contact the authors or describe your problem in Issues.