Efficient AI

Do Neural Networks Really Need to Be So Big?

The Computational Limits of Deep Learning

Tiny Transfer Learning: Towards Memory-Efficient On-Device Learning

The Lottery Ticket Hypothesis for Pre-trained BERT Networks

Shrinking massive neural networks used to model language
MIT News

MCUNet: Tiny Deep Learning on IoT Devices

System brings deep learning to “internet of things” devices
MIT News

What’s Next in AI // Virtual Conference
MIT-IBM Watson AI Lab

Shrinking deep learning’s carbon footprint
MIT News

The path to real-world artificial intelligence
TechRepublic

IBM on how AI and creativity go hand-in-hand
ZDNet

MIT presents AI frameworks that compress models and encourage agents to explore
VentureBeat

Reducing AI’s carbon footprint
MIT News
MIT aims for energy efficiency in AI model training
VentureBeat
Why Gradient Clipping accelerates training for neural networks

AI Could Save the World, If It Doesn’t Ruin the Environment First
PC Magazine

Learning Rate Rewinding for elegant neural network pruning

Once-for-All: Train One Network and Specialize It for Efficient Deployment

Adversarial Robustness vs Model Compression, or Both?

Point-Voxel CNN for Efficient 3D Deep Learning

SpotTune: Transfer Learning through Adaptive Fine-tuning

TSM: Temporal Shift Module for Efficient Video Understanding

Automating machine learning with a joint selection framework

Big-Little-Video-Net: Work smarter, not harder, for video understanding

Causal inference is expensive. Here’s an algorithm for fixing that.

New tricks from old dogs: multi-source transfer learning

Defensive Quantization: When Efficiency Meets Robustness

Experimental Design for Cost-Aware Learning of Causal Graphs

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks