Out-of-Distribution (OOD) Detection with Deep Neural Networks based on PyTorch.
The library provides:
- Out-of-Distribution Detection Methods
- Loss Functions
- Datasets
- Neural Network Architectures as well as pretrained weights
- Useful Utilities
The library is designed to be compatible with frameworks such as
pytorch-lightning and pytorch-segmentation-models; a brief sketch of how these
components map to the library's subpackages follows below.
The library also covers some methods from closely related fields such as Open-Set Recognition, Novelty Detection,
Confidence Estimation and Anomaly Detection.
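For orientation, the sketch below imports one representative component from each of the areas listed above. The subpackage layout (pytorch_ood.detector, pytorch_ood.loss, pytorch_ood.dataset, pytorch_ood.model, pytorch_ood.utils) and the specific class names are assumptions based on the documented API and should be checked against the current documentation.

```python
# Orientation sketch (class names assumed from the pytorch-ood docs; verify against your version).
from pytorch_ood.detector import EnergyBased         # OOD detection methods
from pytorch_ood.loss import EnergyRegularizedLoss   # loss functions
from pytorch_ood.dataset.img import Textures         # datasets (here: an image OOD benchmark set)
from pytorch_ood.model import WideResNet             # architectures with pretrained weights
from pytorch_ood.utils import OODMetrics, ToUnknown  # utilities (metrics, target relabeling)
```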
NOTE: An important convention adopted in pytorch-ood is that OOD detectors predict outlier scores
that should be larger for outliers than for inliers.
If the scores predicted by a detector do not match the formulas in the corresponding publication,
we may have multiplied the scores by negative one to comply with this convention.
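In practice, this means a single decision rule works across all detectors: scores above a threshold are flagged as OOD. The snippet below is only a minimal illustration of the convention; the score values and the threshold are made up, and a real threshold would be calibrated on validation data.

```python
import torch

# Hypothetical outlier scores as a pytorch-ood detector would produce them:
# larger values indicate "more likely out-of-distribution".
scores = torch.tensor([-3.2, -0.1, 4.7])

# Placeholder threshold; in practice, calibrate it on validation data,
# e.g. for a target false-positive rate on in-distribution samples.
threshold = 0.0

is_ood = scores > threshold  # tensor([False, False, True])
```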
⏳ Quick Start
Load a model pre-trained on CIFAR-10 with the Energy-Bounded Learning Loss [6], predict on some dataset data_loader using
Energy-based Out-of-Distribution Detection [6], and calculate common OOD detection metrics:
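(The sketch below assumes the EnergyBased detector, the WideResNet model, and the OODMetrics utility from pytorch_ood; the pretrained-weight key "er-cifar10-tune" is an assumption and should be checked against the documentation.)

```python
import torch

from pytorch_ood.detector import EnergyBased
from pytorch_ood.model import WideResNet
from pytorch_ood.utils import OODMetrics

device = "cuda" if torch.cuda.is_available() else "cpu"

# WideResNet trained on CIFAR-10 with the Energy-Bounded Learning Loss;
# the weight key is an assumption, see the docs for available identifiers.
model = WideResNet(num_classes=10, pretrained="er-cifar10-tune").eval().to(device)

# Energy-based OOD detector wrapping the classifier logits
detector = EnergyBased(model)

# Accumulates common OOD metrics such as AUROC and FPR95;
# samples with target labels < 0 are treated as OOD.
metrics = OODMetrics()

with torch.no_grad():
    for x, y in data_loader:  # data_loader yields (inputs, targets)
        metrics.update(detector(x.to(device)), y)

print(metrics.compute())
```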
Texts from different newsgroups, as used by Hendrycks et al. in the OOD baseline paper.
🤝 Contributing
We encourage everyone to contribute to this project by adding implementations of OOD detection methods, datasets, etc.,
or by checking the existing implementations for bugs.
📝 Citing
pytorch-ood was presented at a CVPR Workshop in 2022.
If you use it in a scientific publication, please consider citing:
@InProceedings{kirchheim2022pytorch,
author = {Kirchheim, Konstantin and Filax, Marco and Ortmeier, Frank},
title = {PyTorch-OOD: A Library for Out-of-Distribution Detection Based on PyTorch},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2022},
pages = {4351-4360}
}
🛡️ License
The code is licensed under Apache 2.0. We have taken care to make sure any third party code included or adapted has compatible (permissive) licenses such as MIT, BSD, etc.
The legal implications of using pre-trained models in commercial services are, to our knowledge, not fully understood.
📚 References
Mu, N., & Gilmer, J. (2019). MNIST-C: A robustness benchmark for computer vision. ICLR Workshop.
Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., & Song, D. (2022). Scaling out-of-distribution detection for real-world settings. ICML.
Torralba, A., Fergus, R., & Freeman, W. T. (2008). 80 million tiny images: A large dataset for non-parametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Elliott, D., Frank, S., Sima'an, K., & Specia, L. (2016). Multi30k: Multilingual English-German image descriptions. Proceedings of the 5th Workshop on Vision and Language.
Bergmann, P., Batzner, K., et al. (2021). The MVTec Anomaly Detection Dataset: A comprehensive real-world dataset for unsupervised anomaly detection. IJCV.