vfm-uda

Exploring the Benefits of Vision Foundation Models for Unsupervised Domain Adaptation (CVPR 2024 Second Workshop on Foundation Models)

Authors: Brunó B. Englert, Fabrizio J. Piva, Tommie Kerssies, Daan de Geus, Gijs Dubbelman
Affiliation: Eindhoven University of Technology
Publication: CVPR 2024 Workshop Proceedings for the Second Workshop on Foundation Models
Paper: arXiv
Code: GitHub

Abstract

Achieving robust generalization across diverse data domains remains a significant challenge in computer vision. This challenge is important in safety-critical applications, where deep-neural-network-based systems must perform reliably under various environmental conditions not seen during training. Our study investigates whether the generalization capabilities of Vision Foundation Models (VFMs) and Unsupervised Domain Adaptation (UDA) methods for the semantic segmentation task are complementary. Results show that combining VFMs with UDA has two main benefits: (a) it allows for better UDA performance while maintaining the out-of-distribution performance of VFMs, and (b) it makes certain time-consuming UDA components redundant, thus enabling significant inference speedups. Specifically, with equivalent model sizes, the resulting VFM-UDA method achieves an 8.4x speed increase over the prior non-VFM state of the art, while also improving performance by +1.2 mIoU in the UDA setting and by +6.1 mIoU in terms of out-of-distribution generalization. Moreover, when we use a VFM with 3.6x more parameters, the VFM-UDA approach maintains a 3.3x speed up, while improving the UDA performance by +3.1 mIoU and the out-of-distribution performance by +10.3 mIoU. These results underscore the significant benefits of combining VFMs with UDA, setting new standards and baselines for Unsupervised Domain Adaptation in semantic segmentation.

Getting started

  1. Create a Weights & Biases (W&B) account.
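
    After creating the account, authenticate once on your machine. A minimal sketch, assuming the `wandb` command-line tool is available after installing the requirements:

      wandb login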

  2. Download the datasets.

    All the zipped data should be placed under one directory; no unzipping is required.
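
    For illustration, a hypothetical layout (the Cityscapes archive names below are the standard ones from the official download page; your exact set of zips depends on which datasets you train and evaluate on):

      /data
      ├── leftImg8bit_trainvaltest.zip   # Cityscapes images
      ├── gtFine_trainvaltest.zip        # Cityscapes annotations
      └── ...                            # remaining dataset zips, left unextracted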

  3. Set up the environment.

    conda create -n fuda python=3.10 && conda activate fuda
  4. Install required packages.

    pip install -r requirements.txt
  5. Train the VFM-UDA base model.

    python main.py fit -c uda_vit_vanilla.yaml --root /data --trainer.devices [0]

    (replace /data with the folder where you stored the datasets)
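
    The entry point follows the PyTorch Lightning CLI pattern (`fit -c <config>`), so standard trainer options can be overridden from the command line without editing the config file. A sketch, assuming Lightning's usual `--trainer.*` overrides (device IDs are hypothetical):

      # train on two GPUs instead of one
      python main.py fit -c uda_vit_vanilla.yaml --root /data --trainer.devices [0,1]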

Citation

@inproceedings{englert2024exploring,
  author={Englert, Brunó B. and Piva, Fabrizio J. and Kerssies, Tommie and de Geus, Daan and Dubbelman, Gijs},
  title={Exploring the Benefits of Vision Foundation Models for Unsupervised Domain Adaptation},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2024},
}

Acknowledgement

We use some code from: