UCOS-DA

Unsupervised camouflaged object segmentation as domain adaptation


Authors: Yi Zhang, Chengyi Wu


Introduction

In this work, we investigate a new task, namely unsupervised camouflaged object segmentation (UCOS), where the target objects share a common, rarely-seen attribute: camouflage. Unsurprisingly, we find that state-of-the-art unsupervised models struggle to adapt to UCOS, due to the domain gap between the properties of generic and camouflaged objects. To this end, we formulate UCOS as a source-free unsupervised domain adaptation task (UCOS-DA), where both source and target labels are absent during the entire training process. Specifically, we define a source model consisting of self-supervised vision transformers pre-trained on ImageNet. The target domain, in turn, includes a simple linear layer (i.e., our target model) and unlabeled camouflaged objects. We then design a pipeline for foreground-background-contrastive self-adversarial domain adaptation to achieve robust UCOS. As a result, our baseline model achieves superior segmentation performance compared with competing unsupervised models on the UCOS benchmark, while its training set is only one tenth the scale of that of the supervised COS counterpart.
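As a rough illustration (not the authors' implementation), the target model described above amounts to a single linear layer that classifies frozen backbone patch features into foreground vs. background. The sketch below uses random features as a stand-in for the self-supervised ViT embeddings; the feature dimensions (a 14x14 patch grid, 384-dim features) are hypothetical ViT-S/16-style sizes, not taken from the paper:

```python
import numpy as np

def linear_head_segment(patch_feats, W, b):
    """Classify each patch feature as foreground (1) or background (0).

    patch_feats: (H, Wp, D) array of frozen backbone patch features.
    W: (D,) weight vector of the linear head; b: scalar bias.
    Returns a binary (H, Wp) patch-level mask.
    """
    logits = patch_feats @ W + b           # (H, Wp) foreground scores
    return (logits > 0).astype(np.uint8)   # threshold at zero

# Toy example with random stand-in features
rng = np.random.default_rng(0)
feats = rng.standard_normal((14, 14, 384))  # fake ViT patch features
W = rng.standard_normal(384)                # untrained linear head
mask = linear_head_segment(feats, W, b=0.0)
print(mask.shape)  # (14, 14)
```

In the actual pipeline the backbone stays frozen and only this linear head is trained on the unlabeled target domain; the patch-level mask would then be upsampled to image resolution.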


Benchmark Results


Figure 1: Comparison of our UCOS-DA and state-of-the-art unsupervised methods on salient object segmentation benchmark datasets. The best and the second best results of each row are highlighted.


Figure 2: Comparison of our UCOS-DA and state-of-the-art unsupervised methods on camouflaged object segmentation benchmark datasets. The best and the second best results of each row are highlighted.


Figure 3: Visual samples of our baseline model (UCOS-DA) and all competing models.

The whole UCOS benchmark results can be downloaded at Google Drive.

Please refer to eval for the evaluation code.
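For orientation, two of the standard metrics used on COS/SOD benchmarks are mean absolute error (MAE) and the weighted F-measure. The minimal NumPy sketch below follows the common conventions from the salient/camouflaged object segmentation literature (e.g., beta squared fixed at 0.3); it is not necessarily identical to the code in eval:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted map and a binary
    ground-truth mask, both valued in [0, 1]."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """F-measure at a fixed binarization threshold.
    beta2 = 0.3 is the usual convention in SOD/COS papers."""
    binary = pred >= thresh
    tp = np.logical_and(binary, gt).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    return (1 + beta2) * precision * recall / denom if denom > 0 else 0.0

# Sanity check: a perfect prediction scores MAE 0 and F-measure 1
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
print(mae(gt.astype(float), gt), f_measure(gt.astype(float), gt))  # 0.0 1.0
```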


Baseline Model Implementation

Please refer to src for the code of our baseline model.

The results of our baseline model on six benchmark datasets can be downloaded at Google Drive.

The pre-trained model can be downloaded at Google Drive.


Citation

@InProceedings{Zhang_2023_ICCV,
   author    = {Zhang, Yi and Wu, Chengyi},
   title     = {Unsupervised Camouflaged Object Segmentation as Domain Adaptation},
   booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
   month     = {October},
   year      = {2023},
   pages     = {4334--4344}
}