Source code for DAST: Unsupervised Domain Adaptation in Semantic Segmentation Based on Discriminator Attention and Self-Training.
This is a PyTorch implementation.

Requirements:
- Python 3.6
- PyTorch 1.6.0
- GPU memory >= 11 GB
- Download the GTA5 dataset
- Download the SYNTHIA dataset
- Download the Cityscapes dataset
- Download the ImageNet-pretrained model
The data folder is structured as follows:

```
├── data/
│   ├── Cityscapes/
│   │   ├── gtFine/
│   │   ├── leftImg8bit/
│   ├── GTA5/
│   │   ├── images/
│   │   ├── labels/
│   └──
└── model_weight/
    ├── DeepLab_resnet_pretrained.pth
    ├── vgg16-00b39a1b-updated.pth
    ...
```
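Before launching training, it can save time to confirm the layout above is in place. The following is a minimal sketch (this script is not part of the repository; `ROOT` and the listed paths simply mirror the tree above):

```python
import os

# Hypothetical pre-flight check: verify the expected dataset/model layout.
# ROOT should point at the repository checkout; adjust as needed.
ROOT = "."
expected = [
    "data/Cityscapes/gtFine",
    "data/Cityscapes/leftImg8bit",
    "data/GTA5/images",
    "data/GTA5/labels",
    "model_weight/DeepLab_resnet_pretrained.pth",
    "model_weight/vgg16-00b39a1b-updated.pth",
]

# Collect any path that does not exist under ROOT.
missing = [p for p in expected if not os.path.exists(os.path.join(ROOT, p))]
if missing:
    print("Missing paths:\n  " + "\n  ".join(missing))
else:
    print("All expected dataset/model paths found.")
```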
- First, train the DA model and choose the best weights as evaluated on our validation data:

```
CUDA_VISIBLE_DEVICES=0 python DA_train.py --snapshot-dir ./snapshots/GTA2Cityscapes
```
- Then, train DAST for several rounds, starting from the weights obtained above:

```
CUDA_VISIBLE_DEVICES=0 python DAST_train.py --snapshot-dir ./snapshots/GTA2Cityscapes
```
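The multi-round self-training loop can be sketched as below. This is only an illustration: the round count, the `best_DA.pth`/`best.pth` checkpoint names, and the `--restore-from` flag are assumptions, not options confirmed by this repository (check `DAST_train.py` for its actual arguments). The training command itself is left commented out.

```shell
# Sketch of running DAST for several self-training rounds.
ROUNDS=3                                          # assumed round count
WEIGHTS=./snapshots/GTA2Cityscapes/best_DA.pth    # hypothetical best DA checkpoint
for r in $(seq 1 "$ROUNDS"); do
  SNAP=./snapshots/GTA2Cityscapes_round${r}
  echo "Round ${r}: restoring from ${WEIGHTS}, saving to ${SNAP}"
  # CUDA_VISIBLE_DEVICES=0 python DAST_train.py \
  #     --snapshot-dir "${SNAP}" --restore-from "${WEIGHTS}"
  WEIGHTS=${SNAP}/best.pth    # hypothetical: next round starts from this round's best
done
```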
- Evaluate the trained models:

```
CUDA_VISIBLE_DEVICES=0 python -u evaluate_bulk.py
CUDA_VISIBLE_DEVICES=0 python -u iou_bulk.py
```
Our pretrained models are available via Google Drive.
This code borrows heavily from the baselines AdaptSegNet and BDL.