Empowering AI with symbolic reasoning
Joshua B. Tenenbaum (MIT), Chuang Gan (IBM Research)
For humans, viewing a visual scene and then answering questions about it—whether simple (e.g., what color is the cube?) or complex (e.g., what shape is the object farthest away from the cube?)—is fairly easy. We quickly recognize objects and their attributes, understand complicated questions, and apply knowledge and reasoning to answer them. It is not so for AI: most deep learning models perform this task rather poorly.

MIT and IBM scientists are helping AI over this hurdle by teaching it to separate reasoning from visual recognition and language processing, as humans do. They have developed a unique approach that marries two powerful AI techniques: deep neural networks for visual recognition and language understanding, and symbolic program execution for reasoning. Symbolic programs can generalize to new examples and are fully interpretable, so people can easily analyze and explain their reasoning process.

Explainability is an essential component of building trusted AI systems and is also useful in detecting and defending against adversarial attacks. The integration of neural and symbolic techniques will enable us to build more powerful AI that can perform more complex tasks by thinking more like we do.
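To make the idea concrete, here is a minimal sketch (not the authors' actual system) of symbolic program execution over a scene. It assumes a neural front end has already produced a structured scene description and translated each question into a small program of interpretable primitives; the scene data, primitive names, and attribute fields below are all illustrative.

```python
# Minimal sketch of symbolic reasoning over a neurally produced scene.
# The scene and primitives are hypothetical, for illustration only.
from math import dist

# Hypothetical output of a visual-recognition network: each object is a
# symbolic record of attributes rather than raw pixels.
scene = [
    {"shape": "cube",     "color": "red",   "pos": (0.0, 0.0)},
    {"shape": "sphere",   "color": "blue",  "pos": (2.0, 1.0)},
    {"shape": "cylinder", "color": "green", "pos": (5.0, 4.0)},
]

# A small library of interpretable program primitives.
def filter_shape(objs, shape):
    return [o for o in objs if o["shape"] == shape]

def farthest_from(objs, anchor):
    return max(objs, key=lambda o: dist(o["pos"], anchor["pos"]))

def query(obj, attribute):
    return obj[attribute]

# "What color is the cube?" as an explicit, inspectable program.
cube = filter_shape(scene, "cube")[0]
print(query(cube, "color"))  # red

# "What shape is the object farthest away from the cube?"
others = [o for o in scene if o is not cube]
print(query(farthest_from(others, cube), "shape"))  # cylinder
```

Because each step of the program is an explicit operation on symbols, every intermediate result can be inspected, which is what makes this style of reasoning interpretable.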