Approximate Cross-Validation for Structured Models

NeurIPS

Authors

Soumya Ghosh, William T. Stephenson, Tin D. Nguyen, Sameer K. Deshpande, Tamara Broderick
Published on 12/24/2020

Many modern data analyses benefit from explicitly modeling dependence structure in data — such as measurements across time or space, ordered words in a sentence, or genes in a genome. Cross-validation is the gold standard to evaluate these analyses but can be prohibitively slow due to the need to re-run already-expensive learning algorithms many times. Previous work has shown approximate cross-validation (ACV) methods provide a fast and provably accurate alternative in the setting of empirical risk minimization. But this existing ACV work is restricted to simpler models by the assumptions that (i) data are independent and (ii) an exact initial model fit is available. In structured data analyses, (i) is always untrue, and (ii) is often untrue. In the present work, we address (i) by extending ACV to models with dependence structure. To address (ii), we verify — both theoretically and empirically — that ACV quality deteriorates smoothly with noise in the initial fit. We demonstrate the accuracy and computational benefits of our proposed methods on a diverse set of real-world applications.
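To make the starting point concrete, here is a minimal sketch of the classic ACV idea in the empirical-risk-minimization setting that this paper extends: rather than re-running the solver once per held-out point, each leave-one-out fit is approximated by a single Newton step from the full-data optimum. This is an illustrative toy (L2-regularized logistic regression in plain NumPy/SciPy), not the authors' released code, and it assumes independent data and an exact initial fit, which are precisely the assumptions the paper relaxes.

```python
# Hedged sketch: one-Newton-step approximate leave-one-out CV for
# L2-regularized logistic regression. Assumes i.i.d. data and an
# exact full-data optimum -- the baseline ACV setting.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1.0
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = np.where(X @ theta_true + 0.5 * rng.normal(size=n) > 0, 1.0, -1.0)

def objective(theta, X, y):
    # L2-regularized logistic loss, labels y in {-1, +1}
    m = y * (X @ theta)
    return np.sum(np.logaddexp(0.0, -m)) + 0.5 * lam * theta @ theta

def fit(X, y):
    return minimize(objective, np.zeros(d), args=(X, y), method="BFGS").x

theta_hat = fit(X, y)  # the (assumed exact) full-data fit

# Per-point gradients and the full-data Hessian at theta_hat.
m = y * (X @ theta_hat)
s = 1.0 / (1.0 + np.exp(-m))               # sigmoid(m)
grads = -(y * (1.0 - s))[:, None] * X      # d loss_i / d theta
H = X.T @ (X * (s * (1.0 - s))[:, None]) + lam * np.eye(d)

# ACV: theta_{-i} ~ theta_hat + H^{-1} grad_i (one Newton step per fold),
# since dropping point i removes grad_i from the stationarity condition.
acv_thetas = theta_hat + np.linalg.solve(H, grads.T).T

# Sanity check against a few exact leave-one-out refits.
for i in range(3):
    mask = np.arange(n) != i
    exact = fit(X[mask], y[mask])
    print(i, np.max(np.abs(acv_thetas[i] - exact)))
```

The cost is one Hessian factorization shared across all n folds instead of n full refits. The paper's contribution is to carry this style of approximation over to models with dependence structure, where the per-point decomposition above no longer holds, and to show that accuracy degrades gracefully when theta_hat is only an approximate optimum.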

This paper was presented as a poster at the 2020 Neural Information Processing Systems (NeurIPS) conference.

Please cite our work using the BibTeX below.

@misc{ghosh2020approximate,
      title={Approximate Cross-Validation for Structured Models}, 
      author={Soumya Ghosh and William T. Stephenson and Tin D. Nguyen and Sameer K. Deshpande and Tamara Broderick},
      year={2020},
      eprint={2006.12669},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}