Machine learning models traditionally assume that training and test data are independently and identically distributed. In real-world applications, however, the test distribution often differs from the training distribution. This problem, known as out-of-distribution (OOD) generalization, challenges conventional models. Invariant Risk Minimization (IRM) has emerged as a solution that aims to identify features that are invariant across different environments in order to enhance OOD robustness. However, IRM's complexity, particularly its bi-level optimization, has led to the development of various approximate methods. Our study investigates these approximate IRM techniques, using the consistency and variance of calibration across environments as metrics for the invariance that IRM aims for. Calibration, which measures the reliability of model predictions, serves as an indicator of whether models effectively capture environment-invariant features, by showing how uniformly over-confident the model remains across varied environments. Through a comparative analysis of datasets with distributional shifts, we observe that Information Bottleneck-based IRM achieves consistent calibration across different environments. This observation suggests that information compression techniques, such as IB, are potentially effective in achieving model invariance. Furthermore, our empirical evidence indicates that models exhibiting consistent calibration across environments are also well calibrated. This demonstrates that invariance and cross-environment calibration are empirically equivalent. Finally, we underscore the necessity of a systematic approach to evaluating OOD generalization: one that moves beyond traditional metrics, such as accuracy and F1 scores, which fail to account for a model's degree of over-confidence, and instead focuses on the nuanced interplay between accuracy, calibration, and model invariance.
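For context on the approximate methods studied here: the most widely used variant, IRMv1 (Arjovsky et al., 2019), replaces the intractable bi-level objective with a gradient penalty computed against a fixed scalar "dummy" classifier. The PyTorch sketch below illustrates that penalty; `model` and the `envs` list are illustrative placeholders, not this repository's API.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, labels):
    # IRMv1 penalty: squared gradient of the risk with respect to a fixed
    # scalar "dummy" classifier w = 1.0. A large gradient means the optimal
    # classifier on top of the features differs in this environment.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

def irm_objective(model, envs, penalty_weight=1e4):
    # envs: list of (inputs, labels) pairs, one per training environment.
    erm_risk, penalty = 0.0, 0.0
    for x, y in envs:
        logits = model(x)
        erm_risk = erm_risk + F.cross_entropy(logits, y)
        penalty = penalty + irmv1_penalty(logits, y)
    return (erm_risk + penalty_weight * penalty) / len(envs)
```

IB-IRM, the variant that calibrates most consistently in our comparison, additionally compresses the learned representation (commonly implemented as a penalty on the feature variance) on top of an objective of this form.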
Datasets to download:
- MNIST (used to build ColoredMNIST and RotatedMNIST; a ColoredMNIST construction sketch follows the command below)
- PACS
- VLCS

Download all of them with the DomainBed script:

```bash
python3 -m domainbed.scripts.download \
    --data_dir=./domainbed/data
```
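For orientation, ColoredMNIST converts MNIST into several environments by correlating digit color with a noisy binary label at an environment-specific strength, so color is a spurious feature that invariant methods should ignore. The sketch below follows the standard recipe from the IRM paper; the function and argument names are illustrative assumptions, not the DomainBed implementation.

```python
import torch

def make_colored_mnist_env(images, digits, color_flip_prob, label_noise=0.25):
    # Illustrative sketch, not DomainBed code.
    # Binary label: digit < 5 -> 0, digit >= 5 -> 1, flipped with prob `label_noise`.
    labels = (digits >= 5).float()
    labels = torch.where(torch.rand_like(labels) < label_noise, 1 - labels, labels)
    # Color correlates with the noisy label; the correlation strength
    # (1 - color_flip_prob) is what varies across environments.
    colors = torch.where(torch.rand_like(labels) < color_flip_prob, 1 - labels, labels)
    # Duplicate into two channels (red/green) and zero out the "wrong" one.
    x = torch.stack([images, images], dim=1).float()  # (N, 2, 28, 28)
    x[colors == 0, 0] = 0
    x[colors == 1, 1] = 0
    return x, labels.long()

# Typical setup: two training environments with strong but differing
# color-label correlation, and a test environment where it reverses.
# train_envs = [make_colored_mnist_env(imgs, digs, 0.1),
#               make_colored_mnist_env(imgs, digs, 0.2)]
# test_env   =  make_colored_mnist_env(imgs, digs, 0.9)
```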
Evaluation metrics (a sketch of the calibration metrics follows this list):
- Accuracy (ACC)
- Expected Calibration Error (ECE)
- Adaptive Calibration Error (ACE)
- Negative Log-Likelihood (NLL)
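As a rough illustration of how the calibration metrics are computed, here is a minimal NumPy sketch of binned ECE and of the cross-environment consistency measure described in the abstract. The equal-width bins and variance-of-ECE aggregation are assumptions for illustration; the repository's exact binning and aggregation may differ.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    # ECE with equal-width confidence bins: weighted average of
    # |accuracy - mean confidence| per bin.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # mask.mean() is the bin weight n_b / N
    return ece

def calibration_consistency(per_env_eces):
    # Variance of per-environment ECE: low variance means the model is
    # (mis)calibrated uniformly across environments, the proxy for
    # invariance used in the paper.
    return np.var(per_env_eces)

# Example: per-environment ECEs from held-out splits of each environment.
# eces = [expected_calibration_error(conf_e, corr_e) for conf_e, corr_e in envs]
# consistency = calibration_consistency(eces)
```

ACE differs from ECE mainly in replacing the equal-width bins with equal-mass bins, so that every bin contributes the same number of samples to the error.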
Accepted at TMLR 2024 (OpenReview). If you find this work useful, please cite:
```bibtex
@misc{yoshida2024understanding,
  title={Towards Understanding Variants of Invariant Risk Minimization through the Lens of Calibration},
  author={Kotaro Yoshida and Hiroki Naganuma},
  year={2024},
  eprint={2401.17541},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```