A PyTorch-based library to accelerate research in Out-of-Distribution (OOD) Detection, as well as related
fields such as Open-Set Recognition, Novelty Detection, Confidence Estimation and Anomaly Detection
based on Deep Neural Networks.
This library provides
Objective/Loss Functions
Out-of-Distribution Detection Methods
Datasets
Neural Network Architectures as well as pretrained weights
Useful Utilities
and is designed to integrate seamlessly with frameworks that scale model training,
such as pytorch-lightning.
Installation
The package can be installed via PyPI:
pip install pytorch-ood
Dependencies
torch
torchvision
scipy
torchmetrics
Optional Dependencies
libmr for the OpenMax detector [1]. libmr is currently broken and unlikely to be repaired, so it is not installed automatically; you will have to install cython first and then install libmr manually.
Quick Start
Load a model pre-trained on CIFAR-10 with the Energy-Bounded Learning Loss [6], and predict on some dataset data_loader using
Energy-based Out-of-Distribution Detection [6], calculating common OOD detection metrics:
Citation
pytorch-ood was presented at a CVPR workshop in 2022.
If you use it in a scientific publication, please consider citing:
@InProceedings{kirchheim2022pytorch,
author = {Kirchheim, Konstantin and Filax, Marco and Ortmeier, Frank},
title = {PyTorch-OOD: A Library for Out-of-Distribution Detection Based on PyTorch},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2022},
pages = {4351-4360}
}
Contributing
We encourage everyone to contribute to this project by adding implementations of OOD detection methods, datasets, etc.,
or by checking the existing implementations for bugs.
License
The code is licensed under Apache 2.0. We have taken care to make sure any third party code included or adapted has compatible (permissive) licenses such as MIT, BSD, etc.
The legal implications of using pre-trained models in commercial services are, to our knowledge, not fully understood.
References
[1] Bendale, A., & Boult, T. E. (2016). Towards open set deep networks. CVPR.
[6] Liu, W., Wang, X., Owens, J., & Li, Y. (2020). Energy-based out-of-distribution detection. NeurIPS.
Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML.
Mu, N., & Gilmer, J. (2019). MNIST-C: A robustness benchmark for computer vision. ICLR Workshop.
[17] Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., & Song, D. (2022). Scaling out-of-distribution detection for real-world settings. ICML.
Torralba, A., Fergus, R., & Freeman, W. T. (2007). 80 million tiny images: a large dataset for non-parametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Elliott, D., Frank, S., Sima'an, K., & Specia, L. (2016). Multi30k: Multilingual English-German image descriptions. Proceedings of the 5th Workshop on Vision and Language.