Official implementation of "DSP: Dual Soft-Paste for Unsupervised Domain Adaptive Semantic Segmentation". Accepted by ACM Multimedia 2021.
Authors: Li Gao, Jing Zhang, Lefei Zhang, Dacheng Tao.
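For intuition, the core operation behind Dual Soft-Paste is to cut regions of selected (e.g. long-tailed) classes from a source image and blend them softly into another image. Below is a minimal, illustrative sketch of such a soft-paste mix; the function name, the blending weight `alpha`, and the tensor shapes are assumptions for the example, not the exact implementation in this repository.

```python
import torch

def soft_paste(src_img, src_lbl, tgt_img, tgt_lbl, paste_classes, alpha=0.9):
    """Illustrative soft-paste mix (not this repository's exact implementation).

    src_img/tgt_img: (C, H, W) float tensors; src_lbl/tgt_lbl: (H, W) long tensors.
    paste_classes: class ids to cut from the source image.
    alpha: soft blending weight applied inside the pasted region.
    """
    # Binary mask of the selected classes in the source label map.
    mask = torch.zeros_like(src_lbl, dtype=torch.bool)
    for c in paste_classes:
        mask |= (src_lbl == c)
    soft = alpha * mask.float()                           # (H, W) soft paste mask
    mixed_img = soft * src_img + (1.0 - soft) * tgt_img   # broadcast over channels
    mixed_lbl = torch.where(mask, src_lbl, tgt_lbl)       # pasted pixels keep source labels
    return mixed_img, mixed_lbl
```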
- CUDA/CUDNN
- Python 3.7
- PyTorch==1.7
- Packages found in requirements.txt
- Create a new conda environment
conda create -n dsp_env python=3.7
conda activate dsp_env
conda install pytorch=1.7 torchvision torchaudio cudatoolkit -c pytorch
pip install -r requirements.txt
- Download the code from GitHub and change into the project directory
git clone https://github.com/GaoLii/DSP/
cd DSP
- Prepare dataset
Download the Cityscapes, GTA5, and SYNTHIA datasets, then organize the folders as follows (a quick layout check is sketched after the tree):
├── ../../dataset/
│   ├── Cityscapes/
│   │   ├── gtFine/
│   │   └── leftImg8bit/
│   ├── GTA5/
│   │   ├── images/
│   │   └── labels/
│   ├── RAND_CITYSCAPES/
│   │   ├── GT/
│   │   └── RGB/
...
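As a quick sanity check (a sketch only; the dataset root `../../dataset` below is assumed to match the tree above), you can verify that the expected sub-folders exist before training:

```python
from pathlib import Path

# Assumed dataset root, mirroring the tree above; adjust if your data lives elsewhere.
root = Path('../../dataset')
expected = [
    'Cityscapes/gtFine', 'Cityscapes/leftImg8bit',
    'GTA5/images', 'GTA5/labels',
    'RAND_CITYSCAPES/GT', 'RAND_CITYSCAPES/RGB',
]
for rel in expected:
    print(f"{rel}: {'OK' if (root / rel).is_dir() else 'MISSING'}")
```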
Training and evaluation are performed on a single Tesla V100 GPU.
python train.py
python evaluateUDA.py --model-path checkpoint.pth
- [Pretrained model for GTA5->Cityscapes](https://pan.baidu.com/s/10adjjSXarJOvat-ibzfoLg) (extraction code: wv28).
- [Pretrained model for SYNTHIA->Cityscapes](https://pan.baidu.com/s/1FOXwfkihUZeWmEvh3RM15g) (extraction code: a9pj).
The downloaded models should be unzipped into the '../saved' folder.
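A quick way to confirm a downloaded checkpoint loads correctly before running evaluation (a sketch; the file name under `../saved` is an assumption):

```python
import torch

# Assumed checkpoint path; adjust to the file you actually unzipped into ../saved.
ckpt = torch.load('../saved/checkpoint.pth', map_location='cpu')
keys = list(ckpt.keys()) if isinstance(ckpt, dict) else []
print('checkpoint entries:', keys[:5])
```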
The code is largely based on DACS.
If you use this code in your research, please consider citing:
@inproceedings{Gao_2021,
  title     = {DSP: Dual Soft-Paste for Unsupervised Domain Adaptive Semantic Segmentation},
  author    = {Gao, Li and Zhang, Jing and Zhang, Lefei and Tao, Dacheng},
  booktitle = {Proceedings of the 29th ACM International Conference on Multimedia},
  publisher = {ACM},
  year      = {2021},
  month     = {Oct},
  url       = {https://arxiv.org/abs/2107.09600},
  doi       = {10.1145/3474085.3475186}
}