Defensive Quantization: When Efficiency Meets Robustness
Authors
- Ji Lin
- Chuang Gan
- Song Han
Neural network quantization is becoming an industry standard for efficiently deploying deep learning models on hardware platforms such as CPUs, GPUs, TPUs, and FPGAs. However, we observe that conventional quantization approaches are vulnerable to adversarial attacks. This paper aims to raise awareness of the security of quantized models, and we design a novel quantization methodology to jointly optimize the efficiency and robustness of deep learning models. We first conduct an empirical study showing that vanilla quantization suffers more from adversarial attacks. We observe that the inferior robustness comes from the error amplification effect, where the quantization operation further enlarges the distance caused by the adversarial perturbation as it propagates through the network. We then propose a novel Defensive Quantization (DQ) method that controls the Lipschitz constant of the network during quantization, so that the magnitude of the adversarial noise remains non-expansive during inference. Extensive experiments on the CIFAR-10 and SVHN datasets demonstrate that our new quantization method can defend neural networks against adversarial examples, and can even achieve better robustness than their full-precision counterparts, while maintaining the same hardware efficiency as vanilla quantization approaches. As a by-product, DQ also improves the accuracy of quantized models when there is no adversarial attack.
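The DQ idea as summarized above has two ingredients: quantized inference and a Lipschitz-constant constraint that keeps each layer non-expansive. Below is a minimal sketch, assuming a PyTorch-style setup; the orthogonality penalty is one standard way to push a layer's Lipschitz constant toward 1 and follows the spirit of the abstract rather than reproducing the paper's exact formulation. The names `quantize_activations`, `lipschitz_penalty`, and `beta` are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

def quantize_activations(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform quantization of activations in [0, 1] to 2**bits levels,
    with a straight-through estimator so gradients flow during training."""
    levels = 2 ** bits - 1
    x = x.clamp(0.0, 1.0)
    x_q = torch.round(x * levels) / levels
    # Forward pass uses x_q; backward pass treats quantization as identity.
    return x + (x_q - x).detach()

def lipschitz_penalty(model: nn.Module) -> torch.Tensor:
    """Orthogonality penalty sum_l ||W_l W_l^T - I||_F^2 over linear/conv
    layers. Driving each Gram matrix toward the identity pushes the layer's
    spectral norm (its Lipschitz constant) toward 1, so perturbations are
    not amplified from layer to layer."""
    device = next(model.parameters()).device
    penalty = torch.zeros((), device=device)
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            w = m.weight.flatten(1)           # shape: (out, fan_in)
            gram = w @ w.t()                  # shape: (out, out)
            eye = torch.eye(gram.size(0), device=device)
            penalty = penalty + ((gram - eye) ** 2).sum()
    return penalty

# Training objective: task loss plus beta-weighted Lipschitz regularization,
# where beta trades off clean accuracy against noise suppression:
#   loss = criterion(model(inputs), targets) + beta * lipschitz_penalty(model)
```

Note that the penalty adds cost only at training time; the quantized model deployed at inference is unchanged, which is consistent with DQ keeping the same hardware efficiency as vanilla quantization.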
Please cite our work using the BibTeX below.
@article{DBLP:journals/corr/abs-1904-08444,
  author        = {Ji Lin and Chuang Gan and Song Han},
  title         = {Defensive Quantization: When Efficiency Meets Robustness},
  journal       = {CoRR},
  volume        = {abs/1904.08444},
  year          = {2019},
  url           = {http://arxiv.org/abs/1904.08444},
  archivePrefix = {arXiv},
  eprint        = {1904.08444},
  timestamp     = {Fri, 26 Apr 2019 13:18:53 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1904-08444.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}