Robustness
Adversarial robustness for machine learning
MLSec Ops Podcast
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning