Welcome to the official repository for reproducing the results and accessing the dataset associated with our Out-of-Distribution (OOD) Detection Benchmark. This README will guide you through setting up the environment, installing the required packages, and running the code to replicate the experiments.
For more details on our evaluation framework, you can refer to our paper titled "Towards Realistic Out-of-Distribution Detection: A Novel Evaluation Framework for Improving Generalization in OOD Detection".
Before you begin, ensure you have the necessary packages installed by running the following commands:
pip install faiss-gpu
pip install git+https://github.com/rwightman/pytorch-image-models
pip install Wand
apt-get install libmagickwand-dev
You also need PyTorch, scikit-learn, SciPy, and scikit-image installed. Visit their respective websites for installation commands appropriate to your operating system.
This codebase has been extensively tested on Unix-based systems. For a hassle-free experience, you can also run the code on Google Colab, where most of the required packages come pre-installed; you only need to install the packages listed above. (Successfully tested on Colab Pro+ as well.)
Download the ImageNet-30 dataset from the link provided in the paper "Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty" (Hendrycks et al.).
The CIFAR-10 and CIFAR-100 datasets will be automatically downloaded using PyTorch's data loader.
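For reference, the automatic download works roughly as sketched below. This is a minimal illustration using torchvision; the data root and transform are placeholders and may differ from what the scripts actually use.

# Minimal sketch of the automatic CIFAR download (hypothetical root/transform).
from torchvision import datasets, transforms

transform = transforms.ToTensor()
# download=True fetches the archives on first use and reuses them afterwards.
cifar10_test = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)
cifar100_test = datasets.CIFAR100(root="./data", train=False, download=True, transform=transform)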
Run the following commands to create modified datasets:
python create_CIFAR-10-R.py
python create_CIFAR-100-R.py
python create_ImageNet-30-R.py
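These scripts produce the modified ("-R") versions of the datasets. As a rough, hypothetical illustration of the kind of image modification involved (the actual scripts define their own transformations, parameters, and file layout), Wand/ImageMagick can be used like this:

# Hypothetical example of applying ImageMagick-style distortions with Wand.
# The specific operations, parameters, and paths below are placeholders.
from wand.image import Image

with Image(filename="example_input.png") as img:
    img.gaussian_blur(radius=0, sigma=2)  # blur distortion
    img.swirl(degree=45)                  # geometric distortion
    img.save(filename="example_output.png")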
Extract features from the modified datasets with these commands:
python extract_features_CIFAR-10-R.py
python extract_features_CIFAR-100-R.py
python extract_features_ImageNet-30-R.py
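The extraction scripts save features for each dataset, presumably from a pretrained backbone (timm is among the dependencies). As a simplified sketch of the general idea, where the backbone name, batch size, and device are assumptions rather than the exact setup used by the scripts:

# Sketch of feature extraction with a timm backbone; model name and settings are placeholders.
import torch
import timm
from torch.utils.data import DataLoader

model = timm.create_model("resnet50", pretrained=True, num_classes=0)  # num_classes=0 returns pooled features
model.eval()

@torch.no_grad()
def extract_features(dataset, device="cuda", batch_size=128):
    model.to(device)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    feats = [model(images.to(device)).cpu() for images, _ in loader]
    return torch.cat(feats)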
To calculate OOD detection performance on different datasets, run:
python calculate_OOD_performance_CIFAR-10-R.py
python calculate_OOD_performance_CIFAR-100-R.py
python calculate_OOD_performance_ImageNet-30-R.py
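A common way such performance numbers are summarized is AUROC over per-sample OOD scores. The snippet below is only an illustrative example with synthetic scores; the actual scripts compute their own scores and may report additional metrics.

# Illustrative AUROC computation on synthetic OOD scores (not the repository's actual pipeline).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
scores_in = rng.normal(0.0, 1.0, size=1000)   # in-distribution samples (label 0)
scores_out = rng.normal(2.0, 1.0, size=1000)  # OOD samples (label 1); higher score = more OOD

labels = np.concatenate([np.zeros_like(scores_in), np.ones_like(scores_out)])
scores = np.concatenate([scores_in, scores_out])
print("AUROC:", roc_auc_score(labels, scores))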
To see the results before adaptation is applied, comment out the code sections marked with "############ adaptation".
If you find our work or dataset helpful in your research, please consider citing:
@article{khazaie2022out,
  title={Towards Realistic Out-of-Distribution Detection: A Novel Evaluation Framework for Improving Generalization in OOD Detection},
  author={Khazaie, Vahid Reza and Wong, Anthony and Sabokrou, Mohammad},
  journal={arXiv preprint arXiv:2211.10892},
  year={2022}
}