Explainability


3 Questions: To catch financial crimes and bad actors
MIT-IBM Watson AI Lab
MIT researchers advance automated interpretability in AI models
MIT News
How to assess a general-purpose AI model’s reliability before it’s deployed
MIT News
Reasoning skills of large language models are often overestimated
MIT News
AI agents help explain other AI systems
MIT News
Automated system teaches users when to collaborate with an AI assistant
MIT News
New tool helps people choose the right method for evaluating AI models
MIT News
Creating space for the evolution of generative and trustworthy AI
MIT-IBM Watson AI Lab
Explained: How to tell if artificial intelligence is working the way we want it to
MIT News
Does this artificial intelligence think like a human?
MIT News
The man who challenges AI every day
IBM Research
The path to real-world artificial intelligence
TechRepublic
Artificial intelligence is struggling to cope with how the world has changed
ZDNet
CLEVRER: The first video dataset for neuro-symbolic reasoning
New tricks from old dogs: multi-source transfer learning
SimVAE: Simulator-Assisted Training for Interpretable Generative Models
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
Deriving Machine Attention from Human Rationales
Structured Adversarial Attack: Towards General Implementation and Better Interpretability