Optimization

Higher-Order Certification for Randomized Smoothing
 
Training Stronger Baselines for Learning to Optimize
 
Why Gradient Clipping Accelerates Training for Neural Networks
 
Implementation Matters in Deep RL: A Case Study on PPO and TRPO
 
A Closer Look at Deep Policy Gradients
 
Deep Symbolic Superoptimization Without Human Knowledge
 
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
 
On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization
 
Automating machine learning with a joint selection framework
 
ZO-AdaMM: Derivative-free optimization for black-box problems
 
signSGD via Zeroth-Order Oracle
 
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
 
Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization