Research

Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors

AAAI

Authors

Jayaraman J. Thiagarajan, Bindya Venkatesh, Prasanna Sattigeri, Peer-Timo Bremer

Published on

09/09/2019

Categories

AAAI Machine Learning

With the rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify their inherent uncertainties. While identifying all sources that account for the stochasticity of models is challenging, it is common to augment predictions with confidence intervals to convey the expected variations in a model's behavior. We require prediction intervals to be well calibrated, to reflect the true uncertainties, and to be sharp. However, existing techniques for obtaining prediction intervals are known to produce unsatisfactory results on at least one of these criteria. To address this challenge, we develop a novel approach for building calibrated estimators. More specifically, we use separate models for prediction and interval estimation, and pose a bi-level optimization problem that allows the former to leverage estimates from the latter through an uncertainty matching strategy. Using experiments in regression, time-series forecasting, and object localization, we show that our approach achieves significant improvements over existing uncertainty quantification methods, in terms of both model fidelity and calibration error.
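
The following is a minimal, illustrative PyTorch sketch of the high-level idea described above: a primary predictor and an auxiliary interval predictor trained in an alternating, bi-level style, where the intervals are fit to cover the targets and the predictor receives an additional "uncertainty matching" penalty tying its residuals to the interval widths. The network sizes, loss forms, and weights here are assumptions for illustration, not the paper's exact formulation.

import torch
import torch.nn as nn

class Predictor(nn.Module):          # f(x) -> point estimate
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class IntervalPredictor(nn.Module):  # g(x) -> (lower, upper) interval bounds
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, 2))
    def forward(self, x):
        lo, hi = self.net(x).chunk(2, dim=-1)
        return lo.squeeze(-1), hi.squeeze(-1)

def interval_loss(lo, hi, y, coverage_weight=1.0):
    # Illustrative objective: intervals should cover y while staying sharp (narrow).
    sharpness = (hi - lo).abs().mean()
    below = torch.relu(lo - y)       # penalty when y falls under the lower bound
    above = torch.relu(y - hi)       # penalty when y exceeds the upper bound
    return sharpness + coverage_weight * (below + above).mean()

def matching_loss(pred, lo, hi, y):
    # "Uncertainty matching" (sketch): the predictor's absolute residual should
    # track the half-width of the frozen interval estimate.
    half_width = 0.5 * (hi - lo).detach()
    return ((pred - y).abs() - half_width).pow(2).mean()

def train_step(f, g, opt_f, opt_g, x, y, lam=0.1):
    # Inner step: fit the auxiliary interval predictor to the current data.
    lo, hi = g(x)
    g_loss = interval_loss(lo, hi, y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Outer step: fit the predictor with the task loss plus the matching term,
    # using the (frozen) intervals as the uncertainty target.
    pred = f(x)
    with torch.no_grad():
        lo, hi = g(x)
    f_loss = (pred - y).pow(2).mean() + lam * matching_loss(pred, lo, hi, y)
    opt_f.zero_grad(); f_loss.backward(); opt_f.step()
    return f_loss.item(), g_loss.item()

In this sketch the two models are updated alternately each batch; the actual bi-level optimization and loss definitions used in the paper may differ.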

This paper has been published at AAAI 2020.

Please cite our work using the BibTeX below.

@misc{thiagarajan2019building,
      title={Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors}, 
      author={Jayaraman J. Thiagarajan and Bindya Venkatesh and Prasanna Sattigeri and Peer-Timo Bremer},
      year={2019},
      eprint={1909.04079},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}