SS-OWFormer: Semi-Supervised Open-World Object Detection (AAAI 2024)

Sahal Shaji Mullappilly, Abhishek Singh Gehlot, Rao Muhammad Anwer, Fahad Shahbaz Khan, Hisham Cholakkal.

Mohamed bin Zayed University of Artificial Intelligence, UAE

🚀 News


  • Dec-9: Accepted to AAAI 2024 (Main Track)

Introduction

The conventional open-world object detection (OWOD) setting first distinguishes known from unknown classes and then incrementally learns the unknown objects once their labels are introduced in subsequent tasks. However, the current OWOD formulation relies heavily on an external human oracle for knowledge input during the incremental learning stages. Such run-time reliance makes the formulation less realistic for real-world deployment. To address this, we introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD), which reduces annotation cost by casting the incremental learning stages of OWOD in a semi-supervised manner. We demonstrate that the performance of the state-of-the-art OWOD detector deteriorates dramatically in the proposed SS-OWOD setting.

We therefore introduce a novel SS-OWOD detector, named SS-OWFormer, that utilizes a feature-alignment scheme to better align the object-query representations between the original and augmented images, thereby leveraging the large amount of unlabeled data alongside the few labeled samples. We further introduce a pseudo-labeling scheme for unknown detection that exploits the inherent capability of decoder object queries to capture object-specific information.

On the COCO dataset, our SS-OWFormer using only 50% of the labeled data achieves detection performance on par with the state-of-the-art (SOTA) OWOD detector trained on 100% of the labeled data. Furthermore, SS-OWFormer achieves an absolute gain of 4.8% in unknown recall over the SOTA OWOD detector. Lastly, we demonstrate the effectiveness of our SS-OWOD problem setting and approach for remote sensing object detection, proposing carefully curated splits and baseline performance evaluations. Experiments on four datasets, MS COCO, PASCAL VOC, Objects365, and DOTA, demonstrate the effectiveness of our approach.
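For illustration, below is a minimal sketch of what such an object-query feature alignment could look like in PyTorch. The function name (query_alignment_loss), the tensor shapes, and the cosine-similarity objective are our assumptions for exposition only; the actual matching and loss used by SS-OWFormer are defined in the paper and this repository's code.

import torch.nn.functional as F

def query_alignment_loss(q_orig, q_aug):
    # q_orig, q_aug: decoder object-query embeddings of assumed shape
    # (num_queries, embed_dim), taken from the original image and its
    # augmented view. Hypothetical names; the repo's internals may differ.
    q_orig = F.normalize(q_orig, dim=-1)  # unit-normalize each query vector
    q_aug = F.normalize(q_aug, dim=-1)
    # Encourage each paired query to agree across views: 1 - cosine similarity.
    return (1.0 - (q_orig * q_aug).sum(dim=-1)).mean()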

Getting Started

Installation

conda create -n ssowod python=3.7 pip
conda activate ssowod
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
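As an optional sanity check (not part of the original setup instructions), you can confirm that the expected PyTorch build is active and that CUDA is visible before compiling the custom operators:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# expected output on a working CUDA 10.2 machine: 1.8.0 True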

Compiling CUDA operators

cd ./models/ops
sh ./make.sh
# unit test (should see all checking is True)
python test.py

Dataset Preparation

The original splits and the semi-supervised splits are provided in the data/OWOD/VOC2007/ImageSets/ folder. The remaining data can be downloaded from this link.

The files should be organized in the following structure:

SS-OWFormer/
└── data/
    └── OWOD/
        └── VOC2007/
            ├── JPEGImages
            ├── ImageSets
            └── Annotations
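The short, hypothetical script below can be used to verify the layout after extraction; the root path and the printed counts are illustrative assumptions, not part of the official tooling:

import os

root = "data/OWOD/VOC2007"  # assumed dataset root; adjust if yours differs
for sub in ("JPEGImages", "ImageSets", "Annotations"):
    path = os.path.join(root, sub)
    # Print the number of entries in each expected folder, or flag it as missing.
    print(path, len(os.listdir(path)) if os.path.isdir(path) else "MISSING")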

Experimental Results

SS-OWOD Results on OWOD Splits

| Method | Task 2 U-Recall | Task 2 mAP | Task 3 U-Recall | Task 3 mAP | Task 4 mAP |
|---|---|---|---|---|---|
| ORE-EBUI | 2.9 | 39.4 | 3.9 | 29.7 | 25.3 |
| OW-DETR | 6.2 | 42.9 | 5.7 | 30.8 | 27.8 |
| OW-DETR (50%) | 6.94 | 34.91 | 7.64 | 24.85 | 19.49 |
| SS-OWFormer (50%) | 10.56 | 39.2 | 13.16 | 30.85 | 25.35 |
| OW-DETR (25%) | 5.03 | 32.42 | 6.94 | 23.72 | 18.77 |
| SS-OWFormer (25%) | 10.47 | 36.68 | 12.22 | 27.87 | 22.36 |
| OW-DETR (10%) | 4.83 | 30.08 | 8.24 | 22.48 | 17.11 |
| SS-OWFormer (10%) | 10.19 | 35.02 | 12.13 | 26.18 | 20.96 |

SS-OWOD Results on Satellite OWOD Splits

| Model | Evaluation | mAP | U-Recall |
|---|---|---|---|
| Baseline | Task-1 | 64.9 | 2.5 |
| Baseline | Task-2 | 68.1 | - |
| SS-OWFormer | Task-1 | 66.7 | 7.6 |
| SS-OWFormer | Task-2 | 70.9 | - |

Qualitative Examples

Citation

@misc{mullappilly2024semisupervised,
      title={Semi-supervised Open-World Object Detection}, 
      author={Sahal Shaji Mullappilly and Abhishek Singh Gehlot and Rao Muhammad Anwer and Fahad Shahbaz Khan and Hisham Cholakkal},
      year={2024},
      eprint={2402.16013},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement

We are thankful to ORE, MMRotate, and OW-DETR for releasing their models and code as open-source contributions.