Structured Adversarial Attack: Towards General Implementation and Better Interpretability
Authors
- Kaidi Xu
- Sijia Liu
- Pu Zhao
- Pin-Yu Chen
- Huan Zhang
- Quanfu Fan
- Deniz Erdogmus
- Yanzhi Wang
- Xue Lin
Published on
08/05/2018
Abstract
When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is commonly used to measure the similarity between the original image and the adversarial example. However, such attacks, which perturb the raw input space, may fail to capture structural information hidden in the input. This work develops a more general attack model, the structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images to extract key spatial structures. We propose an ADMM (alternating direction method of multipliers)-based framework that splits the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other attack methods. Strong group sparsity is achieved in the adversarial perturbations even at the same level of Lp-norm distortion (p ∈ {1, 2, ∞}) as state-of-the-art attacks. We demonstrate the effectiveness of StrAttack through extensive experiments on MNIST, CIFAR-10, and ImageNet. We also show that StrAttack provides better interpretability (i.e., better correspondence with discriminative image regions) through the adversarial saliency map (Papernot et al., 2016b) and the class activation map (Zhou et al., 2016).
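The abstract describes an ADMM splitting that alternates between an attack-loss subproblem and a group-sparsity step. The sketch below illustrates that splitting pattern in NumPy under simplifying assumptions: non-overlapping square pixel groups, a placeholder quadratic attack loss (`attack_loss_grad`), and a linearized gradient step for the loss subproblem. It shows the general ADMM structure only, not the paper's exact StrAttack algorithm.

```python
# Minimal ADMM sketch for a group-sparse perturbation (illustrative only).
# Assumptions: non-overlapping square pixel groups, a toy stand-in for the
# attack loss, and a single gradient step in place of the exact z-subproblem.
import numpy as np


def group_soft_threshold(v, tau, group_size):
    """Proximal step for a group (L2,1) penalty over non-overlapping
    square pixel blocks: shrink each block's L2 norm by tau."""
    h, w = v.shape
    out = np.zeros_like(v)
    for i in range(0, h, group_size):
        for j in range(0, w, group_size):
            block = v[i:i + group_size, j:j + group_size]
            norm = np.linalg.norm(block)
            if norm > tau:
                out[i:i + group_size, j:j + group_size] = (1 - tau / norm) * block
    return out


def attack_loss_grad(z, x):
    """Placeholder gradient of the attack loss w.r.t. the perturbation.
    A real attack would backpropagate through the target DNN here."""
    return z - 0.1 * np.sign(x)  # toy surrogate, assumption only


def admm_group_sparse_attack(x, lam=0.05, rho=1.0, lr=0.1,
                             group_size=4, steps=100):
    """ADMM splitting: z carries the smooth attack loss, delta carries the
    group-sparsity penalty, u is the scaled dual variable (z = delta)."""
    z = np.zeros_like(x)
    delta = np.zeros_like(x)
    u = np.zeros_like(x)
    for _ in range(steps):
        # z-update: gradient step on f(z) + (rho/2)||z - delta + u||^2
        grad = attack_loss_grad(z, x) + rho * (z - delta + u)
        z = z - lr * grad
        # delta-update: exact prox of the group-sparsity term
        delta = group_soft_threshold(z + u, lam / rho, group_size)
        # dual update
        u = u + z - delta
    return delta


if __name__ == "__main__":
    x = np.random.rand(28, 28)          # stand-in "image" (MNIST-sized)
    delta = admm_group_sparse_attack(x)
    nonzero = np.count_nonzero(np.abs(delta) > 1e-8)
    print(f"nonzero perturbation entries: {nonzero} / {delta.size}")
```

The delta-update is where the group structure appears: entire pixel blocks are either zeroed out or shrunk together, which is what yields the sparse, spatially structured perturbations the abstract refers to.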
Please cite our work using the BibTeX below.
@article{DBLP:journals/corr/abs-1808-01664,
author = {Kaidi Xu and
Sijia Liu and
Pu Zhao and
Pin{-}Yu Chen and
Huan Zhang and
Quanfu Fan and
Deniz Erdogmus and
Yanzhi Wang and
Xue Lin},
title = {Structured Adversarial Attack: Towards General Implementation and
Better Interpretability},
journal = {CoRR},
volume = {abs/1808.01664},
year = {2018},
url = {http://arxiv.org/abs/1808.01664},
archivePrefix = {arXiv},
eprint = {1808.01664},
timestamp = {Sun, 02 Sep 2018 15:01:55 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1808-01664.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}