Explainability

Black-box models won’t cut it for business leaders implementing AI in their core systems and processes. Understanding what an AI model is doing and “thinking” is critical for good decision-making as well as for regulatory compliance. For all the talk about explainability in AI, today’s narrow AI systems remain far too opaque.

Our researchers are at the forefront of solving the explainability problem. One team has developed a method for “interrogating” individual neurons of a neural network to unpack their contributions to the model’s overall output (for example, the classification of an image). Another team is developing new layout algorithms for complex network data (for example, financial transactions) to help a human analyst visually explore the patterns and interactions that a graph neural network finds in that data.
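
To make the first idea concrete, here is a minimal, illustrative sketch of one common way to probe an individual unit: silence it during a forward pass and measure how the model’s prediction changes. This is a generic ablation-style probe, not the lab’s specific method; the model, layer, unit index, and input below are placeholders.

```python
import torch
import torchvision.models as models

# Hypothetical setup: a pretrained classifier and a stand-in "image".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.randn(1, 3, 224, 224)      # placeholder for a preprocessed image
layer, unit = model.layer4[1].conv2, 17  # hypothetical unit to interrogate

# Baseline prediction with the network untouched.
with torch.no_grad():
    baseline = model(image).softmax(dim=1)

def ablate_unit(module, inputs, output):
    # Zero out the chosen channel's activation, "silencing" that unit.
    output[:, unit] = 0.0
    return output

# Re-run the model with the unit ablated via a forward hook.
handle = layer.register_forward_hook(ablate_unit)
with torch.no_grad():
    ablated = model(image).softmax(dim=1)
handle.remove()

# The drop in the top class's probability is a rough measure of the
# unit's contribution to that prediction.
top_class = baseline.argmax(dim=1)
drop = (baseline - ablated)[0, top_class].item()
print(f"Probability change for top class when unit {unit} is silenced: {drop:+.4f}")
```

Repeating this probe across units and inputs gives a crude map of which neurons matter for which predictions; the dissection-style work listed below pursues this idea far more systematically.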

From image recognition to finance and beyond, explainability is a key theme across our entire portfolio because it’s essential for making AI ready for the real world.

All Work

AI agents help explain other AI systems
MIT News
Automated system teaches users when to collaborate with an AI assistant
MIT News
New tool helps people choose the right method for evaluating AI models
MIT News
Creating space for the evolution of generative and trustworthy AI
MIT-IBM Watson AI Lab
Explained: How to tell if artificial intelligence is working the way we want it to
MIT News
Does this artificial intelligence think like a human?
MIT News
The man who challenges AI every day
IBM Research
The path to real-world artificial intelligence
TechRepublic
Artificial intelligence is struggling to cope with how the world has changed
ZDNet
CLEVRER: The first video dataset for neuro-symbolic reasoning
New tricks from old dogs: multi-source transfer learning
SimVAE: Simulator-Assisted Training for Interpretable Generative Models
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
Deriving Machine Attention from Human Rationales
Structured Adversarial Attack: Towards General Implementation and Better Interpretability