Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors
Authors
- Jayaraman J. Thiagarajan
- Bindya Venkatesh
- Prasanna Sattigeri
- Peer-Timo Bremer
Published on
09/09/2019
With the rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify their inherent uncertainties. While identifying all sources that account for the stochasticity of models is challenging, it is common to augment predictions with confidence intervals that convey the expected variations in a model’s behavior. We require prediction intervals to be well-calibrated, to reflect the true uncertainties, and to be sharp. However, existing techniques for obtaining prediction intervals are known to produce unsatisfactory results in at least one of these criteria. To address this challenge, we develop a novel approach for building calibrated estimators. More specifically, we use separate models for prediction and interval estimation, and pose a bi-level optimization problem that allows the former to leverage estimates from the latter through an uncertainty matching strategy. Using experiments in regression, time-series forecasting, and object localization, we show that our approach achieves significant improvements over existing uncertainty quantification methods, both in terms of model fidelity and calibration error.
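To give a concrete sense of the two-model setup described above, the sketch below illustrates one possible realization, not the authors' implementation: a primary regressor f that outputs a mean and an uncertainty estimate, and an auxiliary interval predictor g trained with a quantile (pinball) loss, updated in alternation. The network sizes, losses, trade-off weight, and the specific form of the matching penalty are all hypothetical choices made for exposition.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

f = mlp(1, 2)  # primary model: predicted mean and (pre-softplus) scale
g = mlp(1, 2)  # auxiliary interval predictor: lower and upper bounds
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

def pinball(residual, tau):
    # Quantile (pinball) loss; residual = y - predicted quantile.
    return torch.maximum(tau * residual, (tau - 1.0) * residual).mean()

# Toy data: y = sin(x) with heteroscedastic noise.
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + 0.2 * torch.abs(x) * torch.randn_like(x)

for step in range(2000):
    # (1) Fit the auxiliary model g to the 5th/95th percentiles of y.
    bounds = g(x)
    lo, hi = bounds[:, :1], bounds[:, 1:]
    loss_g = pinball(y - lo, 0.05) + pinball(y - hi, 0.95)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # (2) Fit the primary model f: Gaussian NLL on the data plus a penalty
    # that pulls f's own uncertainty toward the half-width of g's interval.
    with torch.no_grad():
        bounds = g(x)
        half_width = 0.5 * (bounds[:, 1:] - bounds[:, :1])
    out = f(x)
    mu, sigma = out[:, :1], F.softplus(out[:, 1:]) + 1e-3
    nll = (0.5 * ((y - mu) / sigma) ** 2 + torch.log(sigma)).mean()
    match = ((sigma - half_width) ** 2).mean()
    loss_f = nll + 0.5 * match  # 0.5 is an arbitrary trade-off weight
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

Here the penalty on the gap between f's predicted scale and the half-width of g's interval is one simple way to express an uncertainty matching objective; the paper's actual bi-level formulation and losses differ in detail.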
Please cite our work using the BibTeX below.
@misc{thiagarajan2019building,
      title={Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors},
      author={Jayaraman J. Thiagarajan and Bindya Venkatesh and Prasanna Sattigeri and Peer-Timo Bremer},
      year={2019},
      eprint={1909.04079},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}