Code for our AAAI 2021 paper: Consistency Regularization with High-dimensional Non-adversarial Source-guided Perturbation for Unsupervised Domain Adaptation in Segmentation
Our Bidirectional Style-Induced Domain Adaptation (BiSIDA) employs consistency regularization to efficiently exploit information from the unlabeled target domain dataset, requiring only a simple neural style transfer model.
BiSIDA aligns domains by:
- transferring source images into the style of target images for supervised learning;
- transferring target images into the style of source images to perform high-dimensional perturbation on the unlabeled target images for unsupervised learning.
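Conceptually, the unsupervised branch averages predictions over several source-stylized versions of an unlabeled target image to form a pseudo-label, then penalizes disagreement with the prediction on the clean image. A minimal numpy sketch of that consistency step (the `stylize` and `segment` functions below are hypothetical stand-ins, not the networks in this repository):

```python
import numpy as np

rng = np.random.default_rng(0)

def stylize(img, alpha):
    """Hypothetical stand-in for style transfer: blend toward a random 'source style'."""
    style = rng.normal(0.5, 0.1, img.shape)
    return (1 - alpha) * img + alpha * style

def segment(img):
    """Hypothetical stand-in for the segmentation net: per-pixel softmax over 2 fake classes."""
    logits = np.stack([img, 1 - img], axis=-1)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

target_img = rng.random((4, 4))  # unlabeled target image

# Average predictions over several source-stylized perturbations -> pseudo-label.
probs = np.mean([segment(stylize(target_img, a)) for a in (0.2, 0.5, 0.8)], axis=0)
pseudo_label = probs.argmax(axis=-1)

# Consistency loss: cross-entropy of the clean prediction against the pseudo-label.
clean = segment(target_img)
ce = -np.log(np.take_along_axis(clean, pseudo_label[..., None], axis=-1) + 1e-8)
consistency_loss = ce.mean()
print(consistency_loss)
```

The averaging over multiple stylized copies is what makes the perturbation "high-dimensional": each copy shifts the whole image appearance rather than a few pixels.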
An example of our BiSIDA on the SYNTHIA-to-CityScapes benchmark experiment.
-
Download the pretrained VGG model required by both our style transfer network and the FCN, and put it into saved_models/.
The VGG initialization is available through this link.
-
Pretraining of our continuous style-induced image generator (AdaIN).
python adain/train/train_0_1.py
An example of our continuous style-induced image generator transferring an image from SYNTHIA into the style of an image from CityScapes, with alpha ranging from 0 to 1 in increments of 0.2.
Note: The pretrained style transfer network is available through this link and should be placed in saved_models/.
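AdaIN restyles content features to match per-channel style statistics, and the alpha parameter interpolates between the original features (alpha=0) and the fully restyled ones (alpha=1), which is what makes the generator continuous. A minimal numpy sketch of that interpolation (feature shapes and names are illustrative, not the repository's API):

```python
import numpy as np

def adain(content, style, alpha=1.0, eps=1e-5):
    """Adaptive instance normalization over per-channel (H, W) statistics.
    alpha in [0, 1] blends the original content features (alpha=0)
    with fully restyled features (alpha=1)."""
    c_mu, c_std = content.mean((1, 2), keepdims=True), content.std((1, 2), keepdims=True)
    s_mu, s_std = style.mean((1, 2), keepdims=True), style.std((1, 2), keepdims=True)
    restyled = s_std * (content - c_mu) / (c_std + eps) + s_mu
    return alpha * restyled + (1 - alpha) * content

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, (3, 8, 8))  # C x H x W content features
style = rng.normal(2.0, 3.0, (3, 8, 8))    # C x H x W style features

# Sweeping alpha from 0 to 1 in steps of 0.2 gives progressively stronger stylization.
for alpha in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    out = adain(content, style, alpha)
    print(alpha, float(out.mean()))
```

At alpha=1 the per-channel means of the output exactly match those of the style features, which is the property the figure referenced above illustrates on real images.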
-
Experiment on the SYNTHIA-to-CityScapes benchmark
python train/train_synthia_vgg/train_synthia_vgg_experiment.py
-
Experiment on the GTAV-to-CityScapes benchmark
python train/train_gta_vgg/train_gta_vgg_experiment.py
Citation
@article{Wang_Yang_Betke_2021,
title={Consistency Regularization with High-dimensional Non-adversarial Source-guided Perturbation for Unsupervised Domain Adaptation in Segmentation},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
author={Wang, Kaihong and Yang, Chenhongyi and Betke, Margrit},
year={2021},
}
Acknowledgment
Code is borrowed from BDL, self-ensemble-visual-domain-adapt, and fcn.