BDL


Bidirectional Learning for Domain Adaptation of Semantic Segmentation (CVPR 2019)

A PyTorch implementation of BDL. If you use this code in your research, please consider citing:

@article{li2019bidirectional,
  title={Bidirectional Learning for Domain Adaptation of Semantic Segmentation},
  author={Li, Yunsheng and Yuan, Lu and Vasconcelos, Nuno},
  journal={arXiv preprint arXiv:1904.10620},
  year={2019}
}

Requirements

  • Hardware: PC with NVIDIA Titan GPU.
  • Software: Ubuntu 16.04, CUDA 9.2, Anaconda2, pytorch 0.4.0
  • Python packages:
    • conda install pytorch=0.4.0 torchvision cuda91 -y -c pytorch
    • pip install tensorboard tensorboardX
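
To confirm that the installation sees the GPU before training, a quick sanity check (not part of the original setup steps) is:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"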

Datasets

Train adaptive segmentation network in BDL

  • Training example (without self-supervised learning):
python BDL.py --snapshot-dir ./snapshots/gta2city \
              --init-weights /path/to/initial_weights \
              --num-steps-stop 80000 \
              --model DeepLab
  • Training example (with self-supervised learning):
    • Download the model SSL_step1 or SSL_step2 to generate pseudo labels for the Cityscapes dataset, then run:
python SSL.py --data-list-target /path/to/dataset/cityscapes_list/train.txt \
              --restore-from /path/to/SSL_step1_or_SSL_step2 \
              --model DeepLab \
              --save /path/to/cityscapes/cityscapes_ssl \
              --set train
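
For intuition, pseudo-label generation of this kind typically keeps only the high-confidence argmax predictions of the current model and marks every other pixel as ignored. The sketch below is a minimal illustration of that idea, not the exact logic of SSL.py; the model and loader names and the 0.9 threshold are assumptions.

import numpy as np
import torch
import torch.nn.functional as F

def generate_pseudo_labels(model, target_loader, threshold=0.9, ignore_label=255):
    # Keep per-pixel predictions whose softmax confidence exceeds `threshold`;
    # everything else becomes `ignore_label` so later losses skip those pixels.
    # (Illustrative sketch: loader format and threshold are assumptions.)
    model.eval()
    pseudo_labels = []
    with torch.no_grad():
        for images, _ in target_loader:              # target images, no ground truth used
            logits = model(images.cuda())            # (N, num_classes, H, W)
            probs = F.softmax(logits, dim=1)
            conf, pred = probs.max(dim=1)            # per-pixel confidence and class index
            pred[conf < threshold] = ignore_label
            pseudo_labels.append(pred.cpu().numpy().astype(np.uint8))
    return pseudo_labels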

With the pseudo labels, the adaptive segmentation model can be trained as:

python BDL.py --data-label-folder-target pseudo_label_folder_name \
              --snapshot-dir ./snapshots/gta2city_ssl \
              --init-weights /path/to/initial_weights \
              --num-steps-stop 120000 \
              --model DeepLab
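
Conceptually, this second round adds a target-domain segmentation loss computed against the pseudo labels on top of the usual source-domain loss (the full script also includes the adversarial alignment terms). A rough sketch of just the segmentation part, with all names assumed and ignored pixels marked as 255:

import torch.nn.functional as F

def seg_losses(model, src_img, src_lbl, tgt_img, tgt_pseudo_lbl, ignore_label=255):
    # Source images are supervised by ground-truth labels, target images by
    # the pseudo labels generated in the previous step; pixels marked with
    # `ignore_label` contribute nothing to either term.
    loss_src = F.cross_entropy(model(src_img), src_lbl, ignore_index=ignore_label)
    loss_tgt = F.cross_entropy(model(tgt_img), tgt_pseudo_lbl, ignore_index=ignore_label)
    return loss_src + loss_tgt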

Evaluation

The pre-trained model can be downloaded here: GTA5_deeplab. You can use the pre-trained model or your own model to run the evaluation as follows:

python evaluation.py --restore-from ./snapshots/gta2city \
                     --save /path/to/cityscapes/results
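
evaluation.py reports per-class IoU and mean IoU on the Cityscapes validation set. The standard way to compute this is to accumulate a confusion matrix over all valid pixels; the sketch below illustrates that computation (the 19-class count and function names are assumptions, not the script's exact code):

import numpy as np

def fast_hist(pred, label, num_classes=19):
    # Accumulate a confusion matrix over valid pixels only
    # (labels outside [0, num_classes) are ignored).
    mask = (label >= 0) & (label < num_classes)
    return np.bincount(num_classes * label[mask].astype(int) + pred[mask],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(hist):
    # Per-class IoU = TP / (TP + FP + FN); mIoU averages over classes.
    iou = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
    return float(np.nanmean(iou))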

Others

The different initial models can be downloaded here:

If you want to use BDL for the SYNTHIA dataset or with the VGG-FCN model, you can pass '--source synthia' or '--model VGG'. The models for SYNTHIA with DeepLab or VGG can be downloaded here to reproduce the results in the paper:

The model for GTA5 with VGG can be downloaded here to reproduce the results in the paper:

The other transferred images can be downloaded here:

Acknowledgment

This code is heavily borrowed from AdaptSegNet.