Explainability

Black box models won’t cut it for business leaders implementing AI in their core systems and processes. Understanding what an AI model is doing and “thinking” is critical for good decision-making as well as for regulatory compliance. For all the talk about explainability in AI, today’s narrow AI systems are still far too opaque.

Our researchers are at the forefront of solving the explainability problem. One team has developed a method for "interrogating" individual neurons of a neural network to unpack how each one contributes to the model's overall output (for example, the classification of an image); a simplified sketch of that idea appears below. Another team is developing new layout algorithms, driven by a graph neural network, that help a human analyst visually explore the patterns and interactions in complex network data (for example, financial transactions).
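To make the neuron-interrogation idea concrete, here is a minimal sketch in PyTorch, not the lab's actual method: it silences one convolutional unit at a time in a pretrained image classifier and measures how much the score for the predicted class drops. The model (torchvision ResNet-18) and the layer probed are illustrative assumptions.

```python
# Minimal sketch: "interrogate" individual units of an image classifier by
# zeroing one convolutional channel at a time and measuring how much the
# score for the originally predicted class drops. A large drop suggests the
# unit supported that prediction. Model and layer choices are assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed input image

layer = model.layer4[1].conv2         # the layer whose units we interrogate
ablate_channel = None                 # index of the channel to zero out, set per trial

def hook(_module, _inputs, output):
    # A forward hook may return a modified output; here we silence one channel.
    if ablate_channel is not None:
        output = output.clone()
        output[:, ablate_channel] = 0.0
    return output

handle = layer.register_forward_hook(hook)

with torch.no_grad():
    baseline = model(image)
    target = baseline.argmax(dim=1).item()        # class the intact model predicts
    baseline_score = baseline[0, target].item()

    contributions = []
    for c in range(layer.out_channels):           # interrogate each unit in turn
        ablate_channel = c
        score = model(image)[0, target].item()
        contributions.append(baseline_score - score)
    ablate_channel = None

handle.remove()

top_units = sorted(range(len(contributions)),
                   key=lambda c: contributions[c], reverse=True)[:5]
print("Units most responsible for the prediction:", top_units)
```

The same ablate-and-compare loop can be run over many images to characterize what a unit responds to in general rather than for a single input.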

From image recognition to finance and beyond, explainability is a key theme across our entire portfolio because it’s essential for making AI ready for the real world.

All Work

Artificial intelligence is struggling to cope with how the world has changed (ZDNet)
CLEVRER: The first video dataset for neuro-symbolic reasoning
New tricks from old dogs: multi-source transfer learning
SimVAE: Simulator-Assisted Training for Interpretable Generative Models
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
Structured Adversarial Attack: Towards General Implementation and Better Interpretability