Explainability
Black-box models won’t cut it for business leaders implementing AI in their core systems and processes. Understanding what an AI model is doing and “thinking” is critical for good decision-making as well as for regulatory compliance. For all the talk about explainability in AI, today’s narrow AI systems are still far too opaque.
Our researchers are at the forefront of solving the explainability problem. One team has developed a method for “interrogating” individual neurons of a neural network to unpack their contributions to the model’s overall output (for example, the classification of an image). Another team is developing new layout algorithms for complex network data (for example, financial transactions) that, paired with a graph neural network, help a human analyst visually explore the patterns and interactions in that data.
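To make the neuron-interrogation idea concrete, here is a minimal, generic sketch of neuron-level attribution by ablation: silence one hidden neuron at a time and measure how much the model’s predicted class probability shifts. This is an illustrative example only, not the team’s actual method; the tiny model, layer indices, and input shapes are hypothetical stand-ins.

```python
# Illustrative sketch only: generic neuron-ablation probing, not the team's
# published method. The model architecture and sizes here are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny stand-in image classifier whose hidden neurons we "interrogate".
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),   # hidden layer to probe
    nn.ReLU(),
    nn.Linear(64, 10),        # 10-way classification head
)
model.eval()

x = torch.randn(1, 1, 28, 28)               # one dummy input image
with torch.no_grad():
    baseline = model(x).softmax(dim=-1)      # baseline class probabilities
target_class = baseline.argmax().item()

def ablate_neuron(index):
    """Zero out one hidden neuron via a forward hook, then re-run the model."""
    def hook(module, inputs, output):
        output = output.clone()
        output[:, index] = 0.0               # silence this neuron
        return output
    handle = model[1].register_forward_hook(hook)
    with torch.no_grad():
        probs = model(x).softmax(dim=-1)
    handle.remove()
    return probs

# Score each neuron by how much ablating it changes the predicted class probability.
contributions = [
    (baseline[0, target_class] - ablate_neuron(i)[0, target_class]).item()
    for i in range(64)
]
top = sorted(range(64), key=lambda i: abs(contributions[i]), reverse=True)[:5]
print(f"Most influential hidden neurons for class {target_class}: {top}")
```

A real system would go further, for example by testing neurons against labeled concepts or aggregating attributions over many inputs, but the ablate-and-compare loop above captures the basic idea of asking what each neuron contributes to the output.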
From image recognition to finance and beyond, explainability is a key theme across our entire portfolio because it’s essential for making AI ready for the real world.