Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization



As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance-reduced and faster-converging approaches is also intensifying. This paper addresses these challenges by presenting: a) a comprehensive theoretical analysis of variance-reduced zeroth-order (ZO) optimization, b) a novel variance-reduced ZO algorithm, called ZO-SVRG, and c) an experimental evaluation of our approach in the context of two compelling applications, black-box chemical material classification and generation of adversarial examples from black-box deep neural network models. Our theoretical analysis uncovers an essential difficulty in the analysis of ZO-SVRG: the unbiasedness assumption on gradient estimates no longer holds. We prove that, compared to its first-order counterpart, ZO-SVRG with a two-point random gradient estimator can suffer an additional error of order O(1/b), where b is the mini-batch size. To mitigate this error, we propose two accelerated variants of ZO-SVRG utilizing variance-reduced gradient estimators, which achieve the best known rate for ZO stochastic optimization (in terms of iterations). Our extensive experimental results show that our approaches outperform other state-of-the-art ZO algorithms and strike a balance between the convergence rate and the function query complexity.
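The two-point random gradient estimator referenced in the abstract approximates the gradient of a black-box function from function evaluations alone. The following is a minimal illustrative sketch (our own, not the paper's implementation; the function name, smoothing parameter `mu`, and defaults are assumptions), averaging `b` directional finite differences along random unit directions:

```python
import numpy as np

def two_point_grad_estimate(f, x, mu=1e-3, b=20, rng=None):
    """Sketch of a two-point random gradient estimator.

    Averages b directional finite differences of f along random unit
    directions u_i:
        g_hat = (d / b) * sum_i [f(x + mu*u_i) - f(x - mu*u_i)] / (2*mu) * u_i
    where d is the dimension. The factor d compensates for
    E[u u^T] = I/d when u is uniform on the unit sphere.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    d = x.size
    g_hat = np.zeros(d)
    for _ in range(b):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # uniform direction on the unit sphere
        diff = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu)
        g_hat += d * diff * u
    return g_hat / b

# Example: for f(x) = 0.5 * ||x||^2 the true gradient is x itself,
# so with a large enough mini-batch the estimate should be close to x.
x = np.array([1.0, 2.0, 3.0])
g = two_point_grad_estimate(lambda z: 0.5 * z @ z, x, b=2000)
```

The O(1/b) variance term analyzed in the paper is visible here directly: each random direction contributes noise to the estimate, and increasing the mini-batch size `b` averages that noise down.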

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/abs-1805-10367,
  author    = {Sijia Liu and
               Bhavya Kailkhura and
               Pin{-}Yu Chen and
               Pai{-}Shun Ting and
               Shiyu Chang and
               Lisa Amini},
  title     = {Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization},
  journal   = {CoRR},
  volume    = {abs/1805.10367},
  year      = {2018},
  url       = {},
  archivePrefix = {arXiv},
  eprint    = {1805.10367},
  timestamp = {Mon, 13 Aug 2018 16:48:27 +0200},
  biburl    = {},
  bibsource = {dblp computer science bibliography}
}
