
ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

NeurIPS

Authors

Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David D. Cox

Published on

12/14/2019

Categories

NeurIPS

The adaptive momentum method (AdaMM), which uses past gradients to update descent directions and learning rates simultaneously, has become one of the most popular first-order optimization methods for solving machine learning problems. However, AdaMM is not suited for solving black-box optimization problems, where explicit gradient forms are difficult or infeasible to obtain. In this paper, we propose a zeroth-order AdaMM (ZO-AdaMM) algorithm that generalizes AdaMM to the gradient-free regime. We show that the convergence rate of ZO-AdaMM for both convex and nonconvex optimization is roughly a factor of O(√d) worse than that of the first-order AdaMM algorithm, where d is the problem size. In particular, we provide a deep understanding of why the Mahalanobis distance matters in the convergence of ZO-AdaMM and other AdaMM-type methods. As a byproduct, our analysis makes the first step toward understanding adaptive learning rate methods for nonconvex constrained optimization. Furthermore, we demonstrate two applications: designing per-image and universal adversarial attacks on black-box neural networks, respectively. We perform extensive experiments on ImageNet and empirically show that ZO-AdaMM converges much faster to a solution of high accuracy compared with six state-of-the-art ZO optimization methods.
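To give a concrete sense of the idea, the sketch below shows a generic zeroth-order optimizer in the spirit of ZO-AdaMM: the gradient is replaced by a two-point random-direction estimate built from function values only, and that estimate drives an Adam-style (AMSGrad-variant) update. This is a minimal illustrative sketch, not the exact algorithm or hyperparameters from the paper; the function names, the smoothing parameter `mu`, the number of sampled directions, and the step sizes are assumptions made for illustration.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-3, num_dirs=10):
    """Two-point zeroth-order gradient estimate along random unit directions.

    Uses only function evaluations: g ~ (d/q) * sum_i (f(x + mu*u_i) - f(x)) / mu * u_i.
    """
    d = x.size
    g = np.zeros(d)
    fx = f(x)
    for _ in range(num_dirs):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                # random unit direction
        g += d * (f(x + mu * u) - fx) / mu * u
    return g / num_dirs

def zo_adamm_sketch(f, x0, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Adam-style adaptive momentum update driven by zeroth-order gradient estimates."""
    x = x0.copy()
    m = np.zeros_like(x)       # first-moment (momentum) estimate
    v = np.zeros_like(x)       # second-moment estimate
    v_hat = np.zeros_like(x)   # running max of v (AMSGrad-style, non-increasing step sizes)
    for _ in range(steps):
        g = zo_gradient_estimate(f, x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)
        x -= lr * m / (np.sqrt(v_hat) + eps)
    return x

# Usage: minimize a quadratic treated as a black box (only function values are queried).
if __name__ == "__main__":
    f = lambda x: np.sum((x - 1.0) ** 2)
    x_opt = zo_adamm_sketch(f, x0=np.zeros(5))
    print(x_opt)  # should approach the all-ones vector
```

The dependence on the problem dimension d in the gradient estimator is what drives the extra O(√d) factor in the convergence rate discussed above.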

This work was published in NeurIPS 2019.

Please cite our work using the BibTeX below.

@inproceedings{DBLP:conf/nips/Chen0XLLHC19,
  author={Xiangyi Chen and Sijia Liu and Kaidi Xu and Xingguo Li and Xue Lin and Mingyi Hong and David D. Cox},
  title={ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization},
  booktitle={NeurIPS},
  pages={7202--7213},
  year={2019},
  url={https://proceedings.neurips.cc/paper/2019/hash/576d026223582a390cd323bef4bad026-Abstract.html}
}