
[ICLR 2023] The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition

Towards UOSR (Unified Open-set Recognition)

This is the codebase of: "The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition",
Jun Cen, Di Luan, Shiwei Zhang, Yixuan Pei, Yingya Zhang, Deli Zhao, Shaojie Shen, Qifeng Chen.
In International Conference on Learning Representations (ICLR), 2023.

Table of Contents

  1. Overview
  2. Datasets
  3. UOSR-Evaluation
  4. UOSR-Training
  5. Few-shot-UOSR
  6. Citation

Overview

We deeply analyze the Unified Open-set Recognition (UOSR) task under different training and evaluation settings. UOSR aims to reject not only unknown samples but also known but wrongly classified samples. Specifically, we first evaluate the UOSR performance of existing OSR methods and demonstrate a significant finding: the uncertainty distribution of almost all methods designed for OSR is actually closer to the expectation of UOSR than of OSR. Second, we analyze how two training settings of OSR (i.e., pre-training and outlier exposure) affect UOSR. Finally, we formulate the Few-shot Unified Open-set Recognition setting, where only one or five samples per unknown class are available during evaluation to help identify unknown samples. Our proposed FS-KNNS method for few-shot UOSR achieves state-of-the-art performance under all settings.
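Concretely, OSR and UOSR differ only in which samples count as "to be rejected": OSR rejects unknowns, while UOSR also rejects wrongly classified knowns. The sketch below illustrates the two AUROC metrics over per-sample uncertainty scores; the function names are illustrative, not this repo's API:

```python
import numpy as np

def auroc(unc_accept, unc_reject):
    """AUROC via pairwise comparison: the probability that a to-be-rejected
    sample gets a higher uncertainty score than an accepted one."""
    unc_accept = np.asarray(unc_accept, dtype=float)
    unc_reject = np.asarray(unc_reject, dtype=float)
    greater = (unc_reject[:, None] > unc_accept[None, :]).sum()
    ties = (unc_reject[:, None] == unc_accept[None, :]).sum()
    return (greater + 0.5 * ties) / (len(unc_reject) * len(unc_accept))

def osr_uosr_auroc(unc_correct, unc_wrong, unc_unknown):
    """unc_correct / unc_wrong: uncertainties of correctly / wrongly
    classified known samples; unc_unknown: uncertainties of unknown samples."""
    # OSR: only unknown samples should be rejected
    osr = auroc(np.concatenate([unc_correct, unc_wrong]), unc_unknown)
    # UOSR: unknown AND wrongly classified known samples should be rejected
    uosr = auroc(unc_correct, np.concatenate([unc_wrong, unc_unknown]))
    return osr, uosr
```

If wrongly classified knowns tend to receive high uncertainty, as the paper observes for most existing OSR methods, the UOSR AUROC computed from the same scores will exceed the OSR AUROC.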

Datasets

This repo uses standard image datasets: CIFAR-100 for closed-set training, and the TinyImageNet-resize and LSUN-resize test sets as two different unknown datasets. Please refer to ODIN to download the out-of-distribution datasets. For outlier data, we use the cleaned ("debiased") 300K Random Images dataset. More details can be found in Outlier Exposure.

UOSR-Evaluation

./UOSR_eval contains all uncertainty score result files needed to reproduce Tables 2, 3, 5, 6, 7, and 19 in the manuscript. This codebase works like an evaluation server that calculates UOSR performance from the uncertainty score result files. Simply follow README_eval.md in ./UOSR_eval to get all table results directly in the terminal.

UOSR-Training

./UOSR_train provides the code for training models with the methods mentioned in the manuscript. Please refer to README_train.md in ./UOSR_train for details.

Few-shot-UOSR

./UOSR_few_shot provides the code for conducting the few-shot UOSR evaluation. Please refer to README_few_shot.md in ./UOSR_few_shot for details.
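For intuition only, a few-shot unknown score can be built from the similarity between a test feature and the handful of labeled unknown exemplars. The sketch below is a generic nearest-neighbour scorer, not the paper's FS-KNNS (see README_few_shot.md for the actual method); all names are illustrative:

```python
import numpy as np

def knn_unknown_score(feat, support_unknown, k=1):
    """Mean cosine similarity of a test feature to its k most similar
    few-shot unknown exemplars; a higher score suggests 'unknown'."""
    f = np.asarray(feat, dtype=float)
    s = np.asarray(support_unknown, dtype=float)
    f = f / np.linalg.norm(f)
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    sims = s @ f  # cosine similarity to each unknown exemplar
    return float(np.sort(sims)[-k:].mean())
```

Such a similarity score could then be combined with the closed-set softmax confidence to decide whether to reject a test sample as unknown.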

Citation

If you find the code useful in your research, please cite:

@inproceedings{jun2023devil,
  title={The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition},
  author={Jun Cen and Di Luan and Shiwei Zhang and Yixuan Pei and Yingya Zhang and Deli Zhao and Shaojie Shen and Qifeng Chen},
  booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
  year={2023},
}

License

See Apache-2.0 License

Acknowledgement

This repo contains modified code from:

We sincerely thank the owners of all these great repos!