
An Exact Characterization of the Generalization Error for the Gibbs Algorithm


Authors

  • Gholamali Aminian
  • Yuheng Bu
  • Laura Toni
  • Miguel R. D. Rodrigues
  • Gregory Wornell

Published on

12/14/2021

Categories

NeurIPS

Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm. However, existing bounds are often loose and lack guarantees; as a result, they may fail to characterize the exact generalization ability of a learning algorithm. Our main contribution is an exact characterization of the expected generalization error of the well-known Gibbs algorithm (a.k.a. the Gibbs posterior) in terms of the symmetrized KL information between the input training samples and the output hypothesis. Our result can be applied to tighten existing expected generalization error and PAC-Bayesian bounds. Our approach is versatile: it also characterizes the generalization error of the Gibbs algorithm with a data-dependent regularizer, as well as that of the Gibbs algorithm in the asymptotic regime, where it converges to the empirical risk minimization algorithm. Of particular relevance, our results highlight the role the symmetrized KL information plays in controlling the generalization error of the Gibbs algorithm.
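
For reference, a brief LaTeX sketch of the main identity the abstract describes, with notation following the paper (γ is the inverse temperature, π the prior, L_E the empirical risk, S the training samples, and W the output hypothesis); the precise statement and its conditions are in the paper itself:

P^{\gamma}_{W|S}(w \mid s) = \frac{\pi(w)\, e^{-\gamma L_{E}(w, s)}}{\int \pi(w')\, e^{-\gamma L_{E}(w', s)}\, \mathrm{d}w'},
\qquad
\overline{\mathrm{gen}}\big(P^{\gamma}_{W|S}, P_{S}\big) = \frac{I_{\mathrm{SKL}}(S; W)}{\gamma},

where the symmetrized KL information I_{\mathrm{SKL}}(S; W) = I(S; W) + L(S; W) is the sum of the mutual information and the lautum information between S and W.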

Please cite our work using the BibTeX below.

@inproceedings{aminian2021an,
  title={An Exact Characterization of the Generalization Error for the Gibbs Algorithm},
  author={Gholamali Aminian and Yuheng Bu and Laura Toni and Miguel R. D. Rodrigues and Gregory Wornell},
  booktitle={Advances in Neural Information Processing Systems},
  editor={A. Beygelzimer and Y. Dauphin and P. Liang and J. Wortman Vaughan},
  year={2021},
  url={https://openreview.net/forum?id=XnIYa2OG2sr}
}