Advancing Model Pruning via Bi-level Optimization

NeurIPS

Authors

  • Sijia Liu
  • Yihua Zhang
  • Yuguang Yao
  • Parikshit Ram
  • Pu Zhao
  • Tianlong Chen
  • Mingyi Hong
  • Yanzhi Wang

Published on

12/04/2022

Categories

NeurIPS, Optimization

The deployment constraints in practical applications necessitate the pruning of large-scale deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket Hypothesis (LTH), pruning also has the potential to improve their generalization ability. At the core of LTH, iterative magnitude pruning (IMP) is the predominant pruning method for successfully finding ‘winning tickets’. Yet, the computation cost of IMP grows prohibitively as the targeted pruning ratio increases. To reduce this overhead, various efficient ‘one-shot’ pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as those of IMP. This raises the question: how can we close the gap between pruning accuracy and pruning efficiency? To tackle it, we pursue the algorithmic advancement of model pruning. Specifically, we formulate the pruning problem from a fresh viewpoint, bi-level optimization (BLO). We show that the BLO interpretation provides a technically grounded optimization basis for an efficient implementation of the pruning-retraining learning paradigm used in IMP. We also show that the proposed bi-level optimization-oriented pruning method (termed BIP) is a special class of BLO problems with a bi-linear problem structure. By leveraging this bi-linearity, we theoretically show that BIP can be solved as easily as first-order optimization, thus inheriting its computational efficiency. Through extensive experiments on both structured and unstructured pruning with 5 model architectures and 4 data sets, we demonstrate that BIP can find better winning tickets than IMP in most cases, and is computationally as efficient as one-shot pruning schemes, achieving a 2-7× speedup over IMP for the same level of model accuracy and sparsity. Code is available at https://github.com/OPTML-Group/BiP.
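
At a high level, the method alternates between a lower-level weight update (retraining under a fixed pruning mask) and an upper-level update of the mask itself; the bi-linear mask-weight structure is what allows the upper-level gradient to be computed with ordinary first-order information. The sketch below is a minimal, hypothetical PyTorch illustration of that alternating scheme on a toy linear model, not the authors' implementation (which lives in the linked repository): the hard-threshold mask, the straight-through estimator, and all names (hard_threshold, bip_step, the learning rates) are assumptions of this sketch.

import torch
import torch.nn.functional as F

def hard_threshold(scores, sparsity):
    # Keep the top (1 - sparsity) fraction of scores as a {0, 1} mask.
    k = max(1, int(round((1.0 - sparsity) * scores.numel())))
    thresh = torch.topk(scores.flatten(), k).values.min()
    return (scores >= thresh).float()

def forward(x, weights, mask):
    # Toy linear classifier with element-wise masked weights; the product
    # (weights * mask) mirrors the bi-linear structure exploited by BIP.
    return x @ (weights * mask).t()

def bip_step(weights, scores, x, y, sparsity, lr_w=0.1, lr_m=0.01):
    # Lower level: one SGD step on the weights with the mask held fixed.
    mask = hard_threshold(scores, sparsity)
    weights = weights.detach().requires_grad_(True)
    loss = F.cross_entropy(forward(x, weights, mask), y)
    (g_w,) = torch.autograd.grad(loss, weights)
    weights = (weights - lr_w * g_w).detach()

    # Upper level: a plain first-order step on the mask scores. A
    # straight-through estimator passes gradients through the hard
    # threshold (an assumption of this sketch, not necessarily the
    # authors' exact scheme).
    scores = scores.detach().requires_grad_(True)
    ste_mask = scores + (hard_threshold(scores, sparsity) - scores).detach()
    loss = F.cross_entropy(forward(x, weights, ste_mask), y)
    (g_m,) = torch.autograd.grad(loss, scores)
    scores = (scores - lr_m * g_m).detach()
    return weights, scores

# Toy usage: 5 classes, 20 features, 80% target sparsity.
torch.manual_seed(0)
w = torch.randn(5, 20)
s = torch.rand_like(w)
x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
for _ in range(100):
    w, s = bip_step(w, s, x, y, sparsity=0.8)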

Please cite our work using the BibTeX below.

@inproceedings{zhang2022advancing,
  title={Advancing Model Pruning via Bi-level Optimization},
  author={Yihua Zhang and Yuguang Yao and Parikshit Ram and Pu Zhao and Tianlong Chen and Mingyi Hong and Yanzhi Wang and Sijia Liu},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=t6O08FxvtBY}
}