DAST_segmentation

The source code of DAST: Unsupervised Domain Adaptation in Semantic Segmentation Based on Discriminator Attention and Self-Training.

This is a PyTorch implementation.

Prerequisites

  • Python 3.6
  • GPU memory >= 11 GB
  • PyTorch 1.6.0
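
To confirm the prerequisites are met, a minimal check like the one below can be run. This is only a sketch, not part of the repository; it verifies the versions and GPU memory listed above.

import sys
import torch

# Expected versions from the list above
assert sys.version_info[:2] >= (3, 6), "Python 3.6+ is expected"
print("PyTorch:", torch.__version__)  # expected: 1.6.0

if torch.cuda.is_available():
    mem_gb = torch.cuda.get_device_properties(0).total_memory / 1024 ** 3
    print("GPU memory: %.1f GB" % mem_gb)  # >= 11 GB is needed for training
else:
    print("No CUDA device found; training requires a GPU")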

Getting started

The data and pretrained-weight folders are structured as follows:

├── data/
│   ├── Cityscapes/
│   │   ├── gtFine/
│   │   └── leftImg8bit/
│   └── GTA5/
│       ├── images/
│       └── labels/
└── model_weight/
    ├── DeepLab_resnet_pretrained.pth
    └── vgg16-00b39a1b-updated.pth
...
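
Before training, the layout can be sanity-checked with a small helper like this. It is only a sketch, not part of the repository; the paths follow the tree above.

import os

expected = [
    "data/Cityscapes/gtFine",
    "data/Cityscapes/leftImg8bit",
    "data/GTA5/images",
    "data/GTA5/labels",
    "model_weight/DeepLab_resnet_pretrained.pth",
    "model_weight/vgg16-00b39a1b-updated.pth",
]

missing = [p for p in expected if not os.path.exists(p)]
if missing:
    print("Missing paths:\n  " + "\n  ".join(missing))
else:
    print("data/ and model_weight/ look complete.")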

Train

  1. First train DA and choose the best weight, evaluated on our established validation data:
CUDA_VISIBLE_DEVICES=0 python DA_train.py --snapshot-dir ./snapshots/GTA2Cityscapes
  2. Then train DAST for several rounds, starting from the weight chosen above (a sketch of one self-training round follows these commands):
CUDA_VISIBLE_DEVICES=0 python DAST_train.py --snapshot-dir ./snapshots/GTA2Cityscapes
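
Each DAST round follows the usual self-training recipe: the current model produces confidence-thresholded pseudo-labels on the target (Cityscapes) images, which then serve as supervision in the next round. The sketch below illustrates that pseudo-labelling step only; the model, loader, threshold, and output directory are illustrative placeholders rather than the repository's actual interface.

import os
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

@torch.no_grad()
def generate_pseudo_labels(model, target_loader, out_dir, threshold=0.9, ignore_label=255):
    # One self-training step: keep only high-confidence predictions as pseudo-labels.
    os.makedirs(out_dir, exist_ok=True)
    model.eval().cuda()
    for images, names in target_loader:                 # names: target image file names
        probs = F.softmax(model(images.cuda()), dim=1)  # (N, C, H, W)
        conf, pred = probs.max(dim=1)                   # per-pixel confidence and class
        pred[conf < threshold] = ignore_label           # discard low-confidence pixels
        for label, name in zip(pred.cpu().numpy().astype(np.uint8), names):
            Image.fromarray(label).save(os.path.join(out_dir, name))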

Evaluate

CUDA_VISIBLE_DEVICES=0 python -u evaluate_bulk.py
CUDA_VISIBLE_DEVICES=0 python -u iou_bulk.py
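
Given the script names, evaluate_bulk.py produces the predictions and iou_bulk.py scores them against the Cityscapes ground truth. The sketch below shows the standard per-class IoU / mIoU computation from a confusion matrix (19 classes, ignore label 255, the usual Cityscapes protocol); it illustrates the metric, not the scripts' exact interface.

import numpy as np

def confusion_matrix(pred, gt, num_classes=19, ignore_label=255):
    # Accumulate a (num_classes x num_classes) confusion matrix over valid pixels.
    mask = gt != ignore_label
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def iou_from_confusion(cm):
    # Per-class IoU = TP / (TP + FP + FN); mIoU is their mean.
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return iou, iou.mean()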

Our pretrained model is available via Google Drive.

Acknowledgment

This code heavily borrows from the baselines AdaptSegNet and BDL.