Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

Authors

Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Published on

12/15/2018

The robustness of neural networks to adversarial examples has received considerable attention due to its security implications. Despite the variety of attacks for crafting visually imperceptible adversarial examples, little work has been done toward a comprehensive measure of robustness. In this paper, we provide a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use Extreme Value Theory for efficient evaluation. Our analysis yields a novel robustness metric called CLEVER, short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and computationally feasible for large neural networks. Experimental results on various networks, including ResNet, Inception-v3, and MobileNet, show that (i) CLEVER is aligned with the robustness indicated by the ℓ2 and ℓ∞ norms of adversarial examples from powerful attacks, and (ii) networks defended with defensive distillation or bounded ReLU indeed achieve higher CLEVER scores. To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifier.
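
To make the procedure concrete, below is a minimal Python sketch of the CLEVER idea for an ℓ2 ball. It uses a hypothetical two-layer tanh network with analytic gradients; the weights W1, W2 and helper names such as grad_margin and clever_l2 are illustrative assumptions, not the paper's released code. The steps follow the abstract: sample points in a ball around the input, record the maximum gradient norm of the margin function per batch, fit a reverse Weibull distribution to the batch maxima (the Extreme Value Theory step), and use its location parameter as the local cross-Lipschitz estimate.

import numpy as np
from scipy.stats import weibull_max

rng = np.random.default_rng(0)

# Hypothetical two-layer tanh network standing in for a real classifier.
W1 = rng.normal(size=(32, 64)) / 8.0
W2 = rng.normal(size=(10, 32)) / 8.0

def logits(x):
    return W2 @ np.tanh(W1 @ x)

def grad_margin(x, c, j):
    # Analytic gradient of g(x) = f_c(x) - f_j(x) for the toy network.
    h = np.tanh(W1 @ x)
    return ((W2[c] - W2[j]) * (1.0 - h ** 2)) @ W1

def clever_l2(x0, c, j, R=0.5, n_batches=50, batch_size=128):
    # Estimate the targeted CLEVER score in an l2 ball of radius R.
    batch_maxima = []
    for _ in range(n_batches):
        best = 0.0
        for _ in range(batch_size):
            # Sample uniformly inside the l2 ball B(x0, R).
            d = rng.normal(size=x0.shape)
            d *= R * rng.uniform() ** (1.0 / x0.size) / np.linalg.norm(d)
            # The dual norm of l2 is l2, so record the l2 gradient norm.
            best = max(best, np.linalg.norm(grad_margin(x0 + d, c, j)))
        batch_maxima.append(best)
    # EVT step: maxima of bounded samples follow a reverse Weibull law;
    # its location parameter estimates the local cross-Lipschitz constant.
    _, loc, _ = weibull_max.fit(batch_maxima)
    margin = logits(x0)[c] - logits(x0)[j]
    return min(margin / loc, R)

x0 = rng.normal(size=64)
c = int(np.argmax(logits(x0)))     # predicted class
j = (c + 1) % 10                   # an arbitrary attack target
print(f"CLEVER score for target {j}: {clever_l2(x0, c, j):.4f}")

In the paper the gradient norms come from backpropagation through the actual network rather than an analytic toy model, but the structure is the same: because the batch maxima of bounded gradient norms converge to the reverse Weibull family, the fitted location parameter gives the Lipschitz estimate without enumerating attacks, which is what makes the score attack-agnostic.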

Please cite our work using the BibTeX below.

@misc{weng2018evaluating,
    title={Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach},
    author={Tsui-Wei Weng and Huan Zhang and Pin-Yu Chen and Jinfeng Yi and Dong Su and Yupeng Gao and Cho-Jui Hsieh and Luca Daniel},
    year={2018},
    eprint={1801.10578},
    archivePrefix={arXiv},
    primaryClass={stat.ML}
}