What Is Missing in IRM Training and Evaluation? Challenges and Solutions

ICLR

Authors

  • Yihua Zhang
  • Pranay Sharma
  • Parikshit Ram
  • Mingyi Hong
  • Kush Varshney
  • Sijia Liu

Published on

05/05/2023


Invariant risk minimization (IRM) has received increasing attention as a way to learn environment-agnostic data representations and predictions, and as a principled approach to preventing models from learning spurious correlations and to improving out-of-distribution generalization. Yet recent work has found that the optimality of the originally proposed IRM formulation (IRMv1) may be compromised in practice, or even impossible to achieve in some scenarios. A series of advanced IRM algorithms has therefore been developed that show practical improvements over IRMv1. In this work, we revisit these recent IRM advancements and identify and resolve three practical limitations in IRM training and evaluation. First, we find that the effect of batch size during training has been chronically overlooked in previous studies, leaving room for further improvement. We propose small-batch training and show its advantages over a set of large-batch optimization techniques. Second, we find that improper selection of evaluation environments can give a false sense of invariance. To mitigate this effect, we leverage diversified test-time environments to precisely characterize the invariance that IRM achieves in practice. Third, we revisit the proposal of Ahuja et al. (2020) to cast IRM as an ensemble game and identify a limitation that arises when a single invariant predictor, rather than an ensemble of individual predictors, is desired. We propose a new IRM variant that addresses this limitation through a novel view of ensemble IRM games as consensus-constrained bilevel optimization. Lastly, we conduct extensive experiments (covering 7 existing IRM variants and 7 datasets) to demonstrate the practical significance of revisiting IRM training and evaluation in a principled manner.
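
For context, the IRMv1 objective that the abstract refers to (introduced in Arjovsky et al.'s original IRM paper) augments each environment's empirical risk with a penalty: the squared norm of that risk's gradient with respect to a frozen scalar classifier scale fixed at 1.0. The sketch below is a minimal PyTorch-style illustration of that penalty, not code from this paper; the function name, the cross-entropy loss, and the variable names are illustrative assumptions.

import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    # Hypothetical helper: IRMv1's invariance penalty for one environment.
    # "scale" is the fixed dummy classifier w = 1.0; the penalty is the
    # squared norm of the risk gradient taken with respect to it.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

# The overall IRMv1 training loss then sums, over environments e,
# risk_e + lambda * irmv1_penalty(logits_e, y_e).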

Please cite our work using the BibTeX below.

@inproceedings{zhang2023what,
  title={What Is Missing in {IRM} Training and Evaluation? Challenges and Solutions},
  author={Yihua Zhang and Pranay Sharma and Parikshit Ram and Mingyi Hong and Kush R. Varshney and Sijia Liu},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=MjsDeTcDEy}
}