This is the codebase for: "The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition",
Jun Cen, Di Luan, Shiwei Zhang, Yixuan Pei, Yingya Zhang, Deli Zhao, Shaojie Shen, Qifeng Chen.
In International Conference on Learning Representations (ICLR), 2023.
We deeply analyze the Unified Open-set Recognition (UOSR) task under different training and evaluation settings. UOSR has been proposed to reject not only unknown samples but also known but wrongly classified samples. Specifically, we first evaluate the UOSR performance of existing OSR methods and demonstrate a significant finding: the uncertainty distribution of almost all existing methods designed for OSR is actually closer to the expectation of UOSR than of OSR. Second, we analyze how two training settings of OSR (i.e., pre-training and outlier exposure) affect UOSR. Finally, we formulate the Few-shot Unified Open-set Recognition setting, where only one or five samples per unknown class are available during evaluation to help identify unknown samples. Our proposed FS-KNNS method for few-shot UOSR achieves state-of-the-art performance under all settings.
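For concreteness, here is a minimal sketch of the UOSR evaluation criterion described above. The function name, signature, and use of AUROC as the metric are illustrative; see `./UOSR_eval` for the actual evaluation code.

```python
# Minimal sketch of the UOSR criterion (illustrative, not this repo's code).
import numpy as np
from sklearn.metrics import roc_auc_score

def uosr_auroc(uncertainty, is_known, is_correct):
    """AUROC for Unified Open-set Recognition.

    Unlike OSR, which only rejects unknown samples, UOSR treats both
    unknown samples AND known-but-wrongly-classified samples as the
    positive (reject) class; only known, correctly classified samples
    should be accepted.

    uncertainty: (N,) float array, higher = more likely to be rejected
    is_known:    (N,) bool array, True if the sample is from a closed-set class
    is_correct:  (N,) bool array, True if the prediction was correct
                 (only meaningful where is_known is True)
    """
    reject = ~(is_known & is_correct)  # unknown OR misclassified-known
    return roc_auc_score(reject.astype(int), uncertainty)
```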
This repo uses standard image datasets, i.e., CIFAR-100 for closed-set training, and the TinyImageNet-resize and LSUN-resize test sets as two different unknown sets. Please refer to ODIN to download the out-of-distribution datasets. For outlier data, we choose the cleaned ("debiased") 300K Random Images dataset. More details can be found in Outlier Exposure.
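As a hedged sketch, the in-distribution and unknown test data could be wired up with torchvision as below; the folder names for the unknown sets are assumptions about where the ODIN downloads are unpacked, not paths this repo mandates.

```python
# Illustrative data setup; folder names for the unknown sets are assumed.
import torchvision.datasets as dsets
import torchvision.transforms as T

tf = T.Compose([T.ToTensor()])

# Closed-set training and known test data.
cifar_train = dsets.CIFAR100(root="./data", train=True, download=True, transform=tf)
cifar_test = dsets.CIFAR100(root="./data", train=False, download=True, transform=tf)

# Two different unknown test sets, downloaded per the ODIN instructions.
# ImageFolder expects one level of class subdirectories, so point the root
# one level above the unpacked image folder if needed.
tiny_unknown = dsets.ImageFolder("./data/TinyImagenet_resize", transform=tf)
lsun_unknown = dsets.ImageFolder("./data/LSUN_resize", transform=tf)
```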
`./UOSR_eval` contains all uncertainty score result files needed to reproduce Tables 2, 3, 5, 6, 7, and 19 in the manuscript. This codebase acts like an evaluation server that calculates the UOSR performance from the uncertainty score result files. Simply follow the README_eval.md in `./UOSR_eval` to get all table results directly in the terminal.
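To illustrate what such an evaluation boils down to, the snippet below loads hypothetical saved scores and computes the UOSR AUROC. The file name and `.npz` keys are invented for the example and do not match the actual result-file format in `./UOSR_eval`.

```python
# Hypothetical consumption of a saved uncertainty score file.
import numpy as np
from sklearn.metrics import roc_auc_score

data = np.load("results/softmax_cifar100_tiny.npz")  # invented file name
uncertainty = data["uncertainty"]                    # (N,) uncertainty scores
is_known = data["is_known"].astype(bool)             # in-distribution flag
is_correct = data["is_correct"].astype(bool)         # closed-set correctness

reject = ~(is_known & is_correct)                    # UOSR positive class
print("UOSR AUROC:", roc_auc_score(reject.astype(int), uncertainty))
```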
`./UOSR_train` provides the code for training models with the methods mentioned in the manuscript. Readers may refer to the README_train.md in `./UOSR_train` for details.
`./UOSR_few_shot` provides the code for conducting the few-shot UOSR evaluation. Readers may refer to the README_few_shot.md in `./UOSR_few_shot` for details.
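The snippet below sketches the KNN-style scoring idea behind few-shot UOSR, assuming features have already been extracted for the test samples and the few-shot unknown exemplars. The exact FS-KNNS fusion rule is defined in the manuscript and implemented in `./UOSR_few_shot`; the `alpha` mixing weight and function names here are invented placeholders.

```python
# Sketch of a KNN-style few-shot uncertainty score (illustrative only).
import numpy as np

def knn_unknown_similarity(test_feats, shot_feats, k=1):
    """Mean cosine similarity of each test feature to its k most similar
    few-shot unknown exemplars (1 or 5 per unknown class in this setting)."""
    a = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    b = shot_feats / np.linalg.norm(shot_feats, axis=1, keepdims=True)
    sims = a @ b.T                        # (N_test, N_shots)
    topk = np.sort(sims, axis=1)[:, -k:]  # k most similar exemplars
    return topk.mean(axis=1)              # higher = closer to the unknowns

def fused_uncertainty(softmax_probs, test_feats, shot_feats, alpha=0.5, k=1):
    """Combine closed-set uncertainty (1 - max softmax probability) with
    similarity to the unknown exemplars; alpha is an assumed mixing weight."""
    msp_uncertainty = 1.0 - softmax_probs.max(axis=1)
    knn_score = knn_unknown_similarity(test_feats, shot_feats, k=k)
    return alpha * msp_uncertainty + (1.0 - alpha) * knn_score
```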
If you find the code useful in your research, please cite:
```
@inproceedings{
  jun2023devil,
  title={The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition},
  author={Jun Cen and Di Luan and Shiwei Zhang and Yixuan Pei and Yingya Zhang and Deli Zhao and Shaojie Shen and Qifeng Chen},
  booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
  year={2023},
}
```
This repo contains modified code from:
- Learning Confidence Estimates for Neural Networks: for the implementation of the baseline method LC.
- Out-of-Distribution Detector for Neural Networks: for the implementation of the baseline method ODIN.
- Learning Placeholders for Open-Set Recognition: for the implementation of the baseline method PROSER.
- Deep Anomaly Detection with Outlier Exposure: for the implementation of the baseline method OE.
- Evidential Deep Learning for Open Set Action Recognition: for the implementation of methods in the video domain.
We sincerely thank the owners of all these great repos!