Efficient AI

All Work

This tiny chip can safeguard user data while enabling efficient computing on a smartphone
MIT News
Technique enables AI on edge devices to keep learning over time
MIT News
AI model speeds up high-resolution computer vision
MIT News
Creating space for the evolution of generative and trustworthy AI
MIT-IBM Watson AI Lab
Learning to grow machine-learning models
MIT News
Nine from MIT named 2023 Sloan Research Fellows
MIT News
Efficient technique improves machine-learning models’ reliability
MIT News
Learning on the edge
MIT News
MIT Researchers Investigate Deep Learning’s Computational Burden
InfoQ
Q&A: Vivienne Sze on crossing the hardware-software divide for efficient AI
MIT News
Do Neural Networks Really Need to Be So Big?
The Computational Limits of Deep Learning
Tiny Transfer Learning: Towards Memory-Efficient On-Device Learning
The Lottery Ticket Hypothesis for the Pre-trained BERT Networks
Shrinking massive neural networks used to model language
MIT News
MCUNet: Tiny Deep Learning on IoT Devices
System brings deep learning to “internet of things” devices
MIT News
What’s Next in AI // Virtual Conference
MIT-IBM Watson AI Lab
Shrinking deep learning’s carbon footprint
MIT News
The path to real-world artificial intelligence
TechRepublic
IBM on how AI and creativity go hand-in-hand
ZDNet
MIT presents AI frameworks that compress models and encourage agents to explore
VentureBeat
Reducing AI’s carbon footprint
MIT News
MIT aims for energy efficiency in AI model training
VentureBeat
Why Gradient Clipping accelerates training for neural networks
AI Could Save the World, If It Doesn’t Ruin the Environment First
PC Magazine
Learning Rate Rewinding for elegant neural network pruning
Once for All: Train One Network and Specialize it for Efficient Deployment
Adversarial Robustness vs Model Compression, or Both?
SpotTune: Transfer Learning through Adaptive Fine-tuning
TSM: Temporal Shift Module for Efficient Video Understanding
Automating machine learning with a joint selection framework
Point-Voxel CNN for Efficient 3D Deep Learning
Big-Little-Video-Net: Work smarter, not harder, for video understanding
Causal inference is expensive. Here’s an algorithm for fixing that.
New tricks from old dogs: multi-source transfer learning
Defensive Quantization: When Efficiency Meets Robustness
Experimental Design for Cost-Aware Learning of Causal Graphs
Deriving Machine Attention from Human Rationales
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks