Learning Proximal Operators to Discover Multiple Optima

ICLR 2023

Authors

Lingxiao Li, Noam Aigerman, Vladimir Kim, Jiajin Li, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon

Published on

05/05/2023

Finding multiple solutions of non-convex optimization problems is a ubiquitous yet challenging task. Most past algorithms either apply single-solution optimization methods from multiple random initial guesses or search in the vicinity of found solutions using ad hoc heuristics. We present an end-to-end method to learn the proximal operator of a family of training problems, so that multiple local minima can be obtained quickly from initial guesses by iterating the learned operator, emulating the fast-converging proximal-point algorithm. The learned proximal operator further generalizes to recover multiple optima for unseen problems at test time, enabling applications such as object detection. The key ingredient in our formulation is a proximal regularization term, which elevates the convexity of our training loss: by applying recent theoretical results, we show that for weakly convex objectives with Lipschitz gradients, training of the proximal operator converges globally with a practical degree of over-parameterization. We further present an exhaustive benchmark for multi-solution optimization to demonstrate the effectiveness of our method.
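To make the idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation). It trains a small network prox_net to emulate the proximal operator prox_{lam*f}(x) = argmin_y f(y) + ||y - x||^2 / (2*lam) by minimizing the proximal objective at the network's own output, then iterates the learned operator from random initial guesses so that distinct fixed points surface distinct local minima. The toy objective f, the architecture, and every hyperparameter here are illustrative assumptions; the actual method learns the operator across a whole family of problems rather than a single fixed objective.

import torch
import torch.nn as nn

torch.manual_seed(0)
lam = 0.5  # proximal step size (illustrative choice)

def f(y):
    # toy 1-D objective with several local minima on [-4, 4] (illustrative only)
    return torch.sin(3.0 * y) + 0.1 * y**2

# hypothetical network standing in for the learned proximal operator
prox_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(prox_net.parameters(), lr=1e-3)

# Training: push y = prox_net(x) toward argmin_y f(y) + ||y - x||^2 / (2*lam).
# The quadratic term is the proximal regularization that convexifies the loss.
for step in range(3000):
    x = torch.empty(256, 1).uniform_(-4.0, 4.0)
    y = prox_net(x)
    loss = (f(y) + (y - x).pow(2) / (2.0 * lam)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: iterate x_{k+1} = prox_net(x_k) from random guesses, emulating
# the proximal-point algorithm; clusters of fixed points approximate the
# distinct local minima of f.
with torch.no_grad():
    x = torch.empty(32, 1).uniform_(-4.0, 4.0)
    for _ in range(30):
        x = prox_net(x)
    print(torch.unique(torch.round(x, decimals=1)))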

Please cite our work using the BibTeX below.

@inproceedings{li2023learning,
  title={Learning Proximal Operators to Discover Multiple Optima},
  author={Lingxiao Li and Noam Aigerman and Vladimir Kim and Jiajin Li and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=PzBGIu-llo7}
}