Robust Overfitting may be mitigated by properly learned smoothening
Authors
- Tianlong Chen
- Zhenyu Zhang
- Sijia Liu
- Shiyu Chang
- Zhangyang Wang
Published on
09/28/2020
A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements. This intriguing problem of robust overfitting motivates us to seek more remedies. As a pilot study, this paper investigates two empirical means to inject more learned smoothening during AT: one leveraging knowledge distillation and self-training to smooth the logits, the other performing stochastic weight averaging (Izmailov et al., 2018) to smooth the weights. Despite their embarrassing simplicity, the two approaches are surprisingly effective and hassle-free in mitigating robust overfitting. Experiments demonstrate that by plugging them into AT, we can simultaneously boost both the standard and robust accuracy, across multiple datasets (STL-10, SVHN, CIFAR-10, CIFAR-100, and Tiny ImageNet), perturbation types (ℓ∞ and ℓ2), and robustified methods (PGD, TRADES, and FGSM), establishing a new state-of-the-art bar in AT. We present systematic visualizations and analyses to dive into their possible working mechanisms. We also carefully exclude the possibility of gradient masking by evaluating our models' robustness against transfer attacks.
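The two smoothening ingredients in the abstract can be illustrated with a minimal sketch: temperature-softened teacher targets for logit smoothing (as in knowledge distillation), and a running equal average of checkpoint weights for weight smoothing (as in stochastic weight averaging). The function names and the plain-list weight representation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 yields a smoother distribution.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_soft_targets(teacher_logits, T=2.0):
    # Logit smoothing: distillation replaces one-hot labels with the
    # teacher's temperature-softened output distribution.
    return softmax(teacher_logits, T=T)

def swa_update(swa_weights, new_weights, n_averaged):
    # Weight smoothing: maintain a running equal average over training
    # checkpoints, in the spirit of SWA (Izmailov et al., 2018).
    return [(w_avg * n_averaged + w) / (n_averaged + 1)
            for w_avg, w in zip(swa_weights, new_weights)]
```

For example, `kd_soft_targets([2.0, 0.0], T=2.0)` is strictly flatter than the temperature-1 softmax of the same logits, and `swa_update` folds each new checkpoint into the average with weight `1 / (n_averaged + 1)`.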
Please cite our work using the BibTeX below.
@inproceedings{
chen2021robust,
title={Robust Overfitting may be mitigated by properly learned smoothening},
author={Tianlong Chen and Zhenyu Zhang and Sijia Liu and Shiyu Chang and Zhangyang Wang},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=qZzy5urZw9}
}