
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases

ECCV


Published on 07/31/2020

Categories: ECCV, Machine Learning

When training data are maliciously tampered with, the predictions of the resulting deep neural network (DNN) can be manipulated by an adversary; this threat is known as the Trojan attack (or poisoning backdoor attack). The lack of robustness of DNNs against Trojan attacks could significantly harm real-life machine learning (ML) systems in downstream applications, raising widespread concerns about their trustworthiness. In this paper, we study the problem of Trojan network (TrojanNet) detection in the data-scarce regime, where the detector has access only to the weights of a trained DNN. We first propose a data-limited TrojanNet detector (TND) for the case where only a few data samples are available for detection. We show that an effective data-limited TND can be established by exploring connections between Trojan attacks and prediction-evasion adversarial attacks, including per-sample attacks as well as all-sample universal attacks. In addition, we propose a data-free TND, which can detect a TrojanNet without accessing any data samples. We show that such a TND can be built by leveraging the internal responses of hidden neurons, which exhibit Trojan behavior even on random noise inputs. The effectiveness of our proposals is evaluated by extensive experiments under different model architectures and datasets, including CIFAR-10, GTSRB, and ImageNet.
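The data-free detection idea rests on the observation that backdoor-sensitive hidden neurons respond abnormally even when the network is fed pure random noise. A minimal NumPy sketch of that intuition is below; the toy layer, the amplified "Trojan" neuron, and the z-score threshold are all illustrative assumptions for exposition, not the paper's actual detection algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_response(W, x):
    # ReLU response of a single hidden layer.
    return np.maximum(W @ x, 0.0)

# Toy weights: a normally initialized layer plus one neuron whose
# weights are strongly amplified, standing in (hypothetically) for a
# backdoor-sensitive neuron in a Trojaned model.
d, h = 64, 32
W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(h, d))
W[7] *= 10.0  # neuron 7 plays the Trojan role in this toy example

# Data-free probe: average each neuron's response over random noise
# inputs -- no training or test data are needed.
noise = rng.normal(size=(256, d))
avg = np.array([hidden_response(W, x) for x in noise]).mean(axis=0)

# Flag neurons whose mean response is an extreme outlier.
z = (avg - avg.mean()) / avg.std()
suspects = np.flatnonzero(z > 3.0)
print("flagged neurons:", suspects)
```

Because a ReLU neuron's expected response to Gaussian noise scales with its weight norm, the amplified neuron dominates the activation statistics and is flagged, while ordinary neurons cluster tightly around the mean.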

This paper was published at ECCV 2020.

Please cite our work using the BibTeX below.

@inproceedings{wang2020practical,
  title={Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases},
  author={Wang, Ren and Zhang, Gaoyuan and Liu, Sijia and Chen, Pin-Yu and Xiong, Jinjun and Wang, Meng},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}