Robustness

Adversarial robustness for machine learning

MLSec Ops Podcast

SenSR: the first practical algorithm for individual fairness

Fast and efficient black-box testing for AI cybersecurity

Adversarial Robustness vs Model Compression, or Both?

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

ZO-AdaMM: Derivative-free optimization for black-box problems

Defensive Quantization: When Efficiency Meets Robustness

Deep Leakage from Gradients

Tight Certificates of Adversarial Robustness

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

Neural Network Robustness Certification with General Activation Functions

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks

signSGD via Zeroth-Order Oracle

Structured Adversarial Attack: Towards General Implementation and Better Interpretability

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning