vfm-uda


Exploring the Benefits of Vision Foundation Models for Unsupervised Domain Adaptation (CVPR 2024 Second Workshop on Foundation Models)

Authors: Bruno B. Englert, Fabrizio J. Piva, Tommie Kerssies, Daan de Geus, Gijs Dubbelman
Affiliation: Eindhoven University of Technology
Publication: CVPR 2024 Workshop Proceedings for the Second Workshop on Foundation Models
Paper: arXiv
Code: GitHub

Abstract

Achieving robust generalization across diverse data domains remains a significant challenge in computer vision. This challenge is important in safety-critical applications, where deep-neural-network-based systems must perform reliably under various environmental conditions not seen during training. Our study investigates whether the generalization capabilities of Vision Foundation Models (VFMs) and Unsupervised Domain Adaptation (UDA) methods for the semantic segmentation task are complementary. Results show that combining VFMs with UDA has two main benefits: (a) it allows for better UDA performance while maintaining the out-of-distribution performance of VFMs, and (b) it makes certain time-consuming UDA components redundant, thus enabling significant inference speedups. Specifically, with equivalent model sizes, the resulting VFM-UDA method achieves an 8.4x speed increase over the prior non-VFM state of the art, while also improving performance by +1.2 mIoU in the UDA setting and by +6.1 mIoU in terms of out-of-distribution generalization. Moreover, when we use a VFM with 3.6x more parameters, the VFM-UDA approach maintains a 3.3x speed up, while improving the UDA performance by +3.1 mIoU and the out-of-distribution performance by +10.3 mIoU. These results underscore the significant benefits of combining VFMs with UDA, setting new standards and baselines for Unsupervised Domain Adaptation in semantic segmentation.

Getting started

  1. Create a Weights & Biases (W&B) account.
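
    If training metrics are logged to your W&B account (which step 1 suggests), you will likely need to authenticate the wandb client once on your machine, using the standard wandb CLI login command:

    wandb login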

  2. Download datasets.

All the zipped data should be placed under one directory. No unzipping is required.
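
For example (the paths below are illustrative; use wherever you downloaded the archives):

    mkdir -p /data
    mv ~/Downloads/*.zip /data/   # keep the archives zipped, as noted above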

  3. Environment setup.

    conda create -n fuda python=3.10 && conda activate fuda
  4. Install required packages.

    pip install -r requirements.txt
  5. Train the VFM-UDA base model.

    python main.py fit -c uda_vit_vanilla.yaml --root /data --trainer.devices [0]

    (replace /data with the folder where you stored the datasets)
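
    The command-line interface appears to follow PyTorch Lightning's LightningCLI conventions (the fit -c ... invocation above), so standard trainer options can be overridden in the same way; for example, to train on two GPUs (device indices are illustrative):

    python main.py fit -c uda_vit_vanilla.yaml --root /data --trainer.devices [0,1]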

  6. Reproducibility.

There are small variations in performance between training runs due to the stochasticity of the training process, particularly for UDA techniques; results may therefore differ slightly depending on the random seed.
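
If the CLI is indeed LightningCLI-based, a global random seed can usually be pinned per run with the standard --seed_everything option (the seed value below is arbitrary):

    python main.py fit -c uda_vit_vanilla.yaml --root /data --trainer.devices [0] --seed_everything 42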

Models

Method    Backbone  Pre-training  Cityscapes (mIoU)  WildDash2 (mIoU)  Model
VFM-UDA   ViT-B     DINOv2        77.1               60.8              model
VFM-UDA   ViT-L     DINOv2        TBA                TBA               TBA

Note: these models are re-trained, so the results differ slightly from those reported in the paper.

Citation

@inproceedings{englert2024exploring,
  author={{Englert, Brunó B.} and {Piva, Fabrizio J.} and {Kerssies, Tommie} and {de Geus, Daan} and {Dubbelman, Gijs}},
  title={Exploring the Benefits of Vision Foundation Models for Unsupervised Domain Adaptation},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2024},
}

Acknowledgement

We use some code from: