
Efficient Generalization with Distributionally Robust Learning

NeurIPS

Authors

  • Soumyadip Ghosh
  • Mark S. Squillante
  • Ebisa D. Wollega

Published on

12/14/2021

Categories

NeurIPS

Distributionally robust learning (DRL) is increasingly seen as a viable method to train machine learning models for improved model generalization. These minimax formulations, however, are more difficult to solve. We provide a new stochastic gradient descent algorithm to efficiently solve this DRL formulation. Our approach applies gradient descent to the outer minimization formulation and estimates the gradient of the inner maximization based on a sample average approximation. The latter uses a subset of the data sampled without replacement in each iteration, progressively increasing the subset size to ensure convergence. We rigorously establish convergence to a near-optimal solution under standard regularity assumptions and, for strongly convex losses, match the best known O(ε^{-1}) rate of convergence up to a known threshold. Empirical results demonstrate the significant benefits of our approach over previous work in improving learning for model generalization.
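To make the described procedure concrete, here is a minimal Python/NumPy sketch of one way such a scheme could look. It is not the authors' implementation: the KL-regularized inner maximization with its closed-form worst-case weights, the logistic loss, and the hyperparameters (step, lam, m0, growth) are illustrative assumptions standing in for the paper's actual uncertainty set and step-size rules. What it does reflect is the structure in the abstract: a without-replacement subsample whose size grows each iteration, a sample-average estimate of the inner-maximization gradient on that subsample, and a gradient descent step for the outer minimization.

# Minimal sketch (not the paper's implementation) of outer gradient descent
# with an inner-maximization gradient estimated on a growing, without-
# replacement subsample. Loss, inner problem, and hyperparameters are
# illustrative assumptions.
import numpy as np

def logistic_loss_and_grad(theta, X, y):
    """Per-sample logistic losses and their gradients w.r.t. theta (labels in {-1, +1})."""
    z = X @ theta
    losses = np.log1p(np.exp(-y * z))                            # shape (m,)
    grads = (-y * (1.0 / (1.0 + np.exp(y * z))))[:, None] * X    # shape (m, d)
    return losses, grads

def drl_sgd(X, y, n_iters=200, step=0.1, lam=1.0, m0=32, growth=1.05, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    m = m0
    for _ in range(n_iters):
        # Subsample WITHOUT replacement; grow the subset size so the
        # sample-average approximation of the inner gradient improves over time.
        m = min(n, int(np.ceil(m * growth)))
        idx = rng.choice(n, size=m, replace=False)
        losses, grads = logistic_loss_and_grad(theta, X[idx], y[idx])
        # Inner maximization (assumed KL-regularized): the worst-case weights
        # over the subsample have the closed form q_i proportional to exp(loss_i / lam).
        w = np.exp((losses - losses.max()) / lam)
        q = w / w.sum()
        # Outer descent step on the estimated worst-case (weighted) gradient.
        theta -= step * (q @ grads)
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000))
    theta_hat = drl_sgd(X, y)
    print("fitted parameters:", theta_hat)

The growing subset size is the mechanism the abstract points to for convergence: as the subsample grows, the error in the sample-average estimate of the inner-maximization gradient shrinks.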

Please cite our work using the BibTeX below.

@inproceedings{ghosh2021efficient,
  title={Efficient Generalization with Distributionally Robust Learning},
  author={Soumyadip Ghosh and Mark S. Squillante and Ebisa D. Wollega},
  booktitle={Advances in Neural Information Processing Systems},
  editor={A. Beygelzimer and Y. Dauphin and P. Liang and J. Wortman Vaughan},
  year={2021},
  url={https://openreview.net/forum?id=S3e377aHvS9}
}