DivAlign

Improving Single Domain-Generalized Object Detection: A Focus on Diversification and Alignment [CVPR-2024]



¹Mohamed bin Zayed University of AI, ²Information Technology University of Punjab, ³Mercedes-Benz Tech Innovation, ⁴Karlsruhe Institute of Technology



📢 Latest Updates

  • Jun-15-24: We open-source the code and models. 🔥🔥
  • Jun-10-24: The DivAlign paper is released on arXiv. 🔥🔥
  • Feb-27-24: DivAlign has been accepted to CVPR-24 🎉.

Overview

In this work, we tackle the problem of domain generalization for object detection, specifically focusing on the scenario where only a single source domain is available. We propose an effective approach that involves two key steps: diversifying the source domain and aligning detections based on class prediction confidence and localization. Firstly, we demonstrate that by carefully selecting a set of augmentations, a base detector can outperform existing methods for single domain generalization by a good margin. This highlights the importance of domain diversification in improving the performance of object detectors. Secondly, we introduce a method to align detections from multiple views, considering both classification and localization outputs. This alignment procedure leads to better generalized and well-calibrated object detector models, which are crucial for accurate decision-making in safety-critical applications. Our approach is detector-agnostic and can be seamlessly applied to both single-stage and two-stage detectors. To validate the effectiveness of our proposed methods, we conduct extensive experiments and ablations on challenging domain-shift scenarios. The results consistently demonstrate the superiority of our approach compared to existing methods.
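To make the diversification step concrete, here is a minimal sketch of sampling box-preserving photometric augmentations to create diversified views of a source image. The augmentation pool below is an illustrative assumption, not the exact set used in the paper.

```python
# Minimal sketch of source-domain diversification (illustrative only:
# this augmentation pool is an assumption, not the paper's exact list).
import random
from torchvision import transforms as T

# Photometric augmentations only, so bounding-box annotations stay valid.
AUG_POOL = [
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    T.RandomGrayscale(p=1.0),
    T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    T.RandomPosterize(bits=4),
    T.RandomSolarize(threshold=128),
]

def diversify(img, k=2):
    """Return a diversified view by applying k randomly chosen augmentations."""
    for aug in random.sample(AUG_POOL, k):
        img = aug(img)
    return img
```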

Installation

Our code is based on Mask R-CNN Benchmark.
Check INSTALL.md for installation instructions.

Datasets

Download the Diverse Weather and Cross-Domain datasets and place them in a parent folder with the structure shown below.

|-VOC
   |--VOC2007
      |---Annotations
      |---ImageSets
      |---JPEGImages
   |--VOC2012
      |---Annotations
      |---ImageSets
      |---JPEGImages
|-clipart
   |--Annotations
   |--ImageSets
   |--JPEGImages
|-comic
   |--Annotations
   |--ImageSets
   |--JPEGImages
|-watercolor
   |--Annotations
   |--ImageSets
   |--JPEGImages
|-daytime_clear
   |--VOC2007
      |---Annotations
      |---ImageSets
      |---JPEGImages
|-daytime_foggy
   |--VOC2007
      |---Annotations
      |---ImageSets
      |---JPEGImages
|-dusk_rainy
   |--VOC2007
      |---Annotations
      |---ImageSets
      |---JPEGImages
|-night_rainy
   |--VOC2007
      |---Annotations
      |---ImageSets
      |---JPEGImages
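If your checkout resolves dataset names through maskrcnn-benchmark's paths_catalog.py, pointing the catalog at this parent folder looks roughly like the sketch below. The dictionary keys and relative paths here are illustrative assumptions; use the names that the provided config files actually reference.

```python
# Sketch of dataset registration in
# maskrcnn_benchmark/config/paths_catalog.py (keys/paths are assumptions).
class DatasetCatalog(object):
    DATA_DIR = "/path/to/parent-folder"  # root of the structure above
    DATASETS = {
        "clipart_voc_test": {"data_dir": "clipart", "split": "test"},
        "comic_voc_test": {"data_dir": "comic", "split": "test"},
        "daytime_clear_voc_train": {"data_dir": "daytime_clear/VOC2007", "split": "train"},
        "daytime_foggy_voc_test": {"data_dir": "daytime_foggy/VOC2007", "split": "test"},
    }
```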

Training

We train our models on 8 GPUs.

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=$((RANDOM + 10000)) tools/train_net.py --config-file "configs/pascal_voc/e2e_faster_rcnn_R_101_C4_1x_8_gpu_voc.yaml"

or, for the diverse-weather benchmark:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=$((RANDOM + 10000)) tools/train_net.py --config-file "configs/pascal_voc/e2e_faster_rcnn_R_101_C4_1x_8_gpu_dc.yaml"

Evaluation

python tools/test_net.py --config-file "configs/pascal_voc/e2e_faster_rcnn_R_101_C4_1x_8_gpu_voc.yaml" --ckpt models/voc-lcal1-lral1/model_final.pth
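To evaluate the same checkpoint on several target domains, one option is to override DATASETS.TEST per run, since the stock maskrcnn-benchmark test_net.py merges trailing KEY VALUE pairs into the config. The split names below are assumptions and must match the entries registered in paths_catalog.py.

```python
# Sketch: evaluate one checkpoint on multiple target-domain splits
# (split names are assumptions; adapt to your registered datasets).
import subprocess

CONFIG = "configs/pascal_voc/e2e_faster_rcnn_R_101_C4_1x_8_gpu_voc.yaml"
CKPT = "models/voc-lcal1-lral1/model_final.pth"

for split in ("clipart_voc_test", "watercolor_voc_test", "comic_voc_test"):
    subprocess.run(
        ["python", "tools/test_net.py",
         "--config-file", CONFIG, "--ckpt", CKPT,
         "DATASETS.TEST", f"('{split}',)"],
        check=True,
    )
```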

👁️💬 Architecture

At the core is a baseline detector; here a two-stage detector (Faster R-CNN) is depicted, comprising a backbone, a region proposal network (RPN), and ROI alignment (RA). To improve the single-domain generalization of the baseline detector, we diversify the single source domain and align the diversified views by minimizing alignment losses at both the classification and regression outputs.

DivAlign Overview
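As a rough illustration of the alignment idea, the sketch below aligns the classification and regression outputs of the original and diversified views. The KL/smooth-L1 choices, tensor names, and weighting are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of aligning detections across two views of the same
# image (NOT the paper's exact losses; shapes and names are assumptions).
import torch.nn.functional as F

def classification_alignment_loss(logits_clean, logits_aug):
    """Pull the class posteriors of matched proposals from the original
    and diversified views toward each other."""
    log_p_clean = F.log_softmax(logits_clean, dim=-1)
    p_aug = F.softmax(logits_aug, dim=-1)
    return F.kl_div(log_p_clean, p_aug, reduction="batchmean")

def regression_alignment_loss(deltas_clean, deltas_aug):
    """Align box-regression outputs of the two views."""
    return F.smooth_l1_loss(deltas_clean, deltas_aug)

# total_loss = detection_loss \
#     + lambda_cal * classification_alignment_loss(logits_c, logits_a) \
#     + lambda_ral * regression_alignment_loss(deltas_c, deltas_a)
```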


🔍 Quantitative Results

Table: Performance comparison with the baseline and ablations, mAP@0.5 (%) reported. The model is trained on Pascal VOC and tested on Clipart1k, Watercolor2k, and Comic2k.

| Method | VOC | Clipart | Watercolor | Comic |
|---|---|---|---|---|
| Faster R-CNN | 81.8 | 25.7 | 44.5 | 18.9 |
| NP | 79.2 | 35.4 | 53.3 | 28.9 |
| Diversification (div.) | 82.1 | 34.2 | 53.0 | 24.2 |
| div. + Lcal | 82.1 | 36.2 | 53.9 | 28.7 |
| div. + Lral | 80.7 | 35.0 | 53.8 | 28.7 |
| div. + Lcal + Lral (Ours) | 80.1 | 38.9 | 57.4 | 33.2 |

Table: Results, mAP@0.5 (%), in the multi-weather scenario: the model is trained on Daytime-Sunny (DS) and tested on Night-Clear (NC), Night-Rainy (NR), Dusk-Rainy (DR), and Daytime-Foggy (DF).

| Method | DS | NC | DR | NR | DF |
|---|---|---|---|---|---|
| Faster R-CNN | 51.8 | 38.9 | 30.0 | 15.7 | 33.1 |
| SW | 50.6 | 33.4 | 26.3 | 13.7 | 30.8 |
| IBN-Net | 49.7 | 32.1 | 26.1 | 14.3 | 29.6 |
| IterNorm | 43.9 | 29.6 | 22.8 | 12.6 | 28.4 |
| ISW | 51.3 | 33.2 | 25.9 | 14.1 | 31.8 |
| Wu et al. | 56.1 | 36.6 | 28.2 | 16.6 | 33.5 |
| Vidit et al. | 51.3 | 36.9 | 32.3 | 18.7 | 38.5 |
| Diversification | 50.6 | 39.4 | 37.0 | 22.0 | 35.6 |
| Ours | 52.8 | 42.5 | 38.1 | 24.1 | 37.2 |

Table: Comparison of calibration performance using the D-ECE metric (%) on real-to-artistic shifts (Clipart, Watercolor, Comic) and urban-scene detection (NR, DR, NC, DF).

| Method | Clipart | Watercolor | Comic | NR | DR | NC | DF |
|---|---|---|---|---|---|---|---|
| Faster R-CNN | 11.9 | 18.5 | 15.4 | 31.5 | 29.3 | 27.9 | 25.8 |
| Diversification (div.) | 14.5 | 21.4 | 17.4 | 33.0 | 30.2 | 28.9 | 25.7 |
| Ours | 10.7 | 14.4 | 14.3 | 29.3 | 24.9 | 15.8 | 20.6 |

Table: Performance comparison with a single-stage baseline, mAP@0.5 (%) reported. The model is trained on Pascal VOC and tested on Clipart1k, Watercolor2k, and Comic2k.

| Method | VOC | Clipart | Watercolor | Comic |
|---|---|---|---|---|
| FCOS | 78.1 | 24.4 | 44.3 | 15.4 |
| Diversification (div.) | 79.6 | 31.7 | 48.8 | 25.2 |
| div. + Lcal | 80.1 | 35.4 | 52.6 | 29.4 |
| div. + Lral | 77.5 | 29.8 | 50.3 | 24.0 |
| div. + Lcal + Lral (Ours) | 77.5 | 37.4 | 55.0 | 31.2 |

📊 Qualitative Results

Qualitative results of the baseline (Faster R-CNN), diversification only, and our method.


---

📜 Citation

@inproceedings{danish2024improving,
  title={Improving Single Domain-Generalized Object Detection: A Focus on Diversification and Alignment},
  author={Danish, Muhammad Sohail and Khan, Muhammad Haris and Munir, Muhammad Akhtar and Sarfraz, M Saquib and Ali, Mohsen},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={17732--17742},
  year={2024}
}