Natural Language Processing

What separates humans from the rest of life on our planet? There are many factors, of course, but high on the list is our ability to form and convey complex ideas through language.

So if the goal is to maximize the utility of AI systems for humanity, they need to understand our natural mode of thought – and to communicate the way we do.

AI systems powered by neural networks have made great progress in interpreting and mimicking language. But they’re still a long way from truly understanding it. We’re building AI systems that will cross the bridge from mimicry to comprehension – systems that genuinely understand words, parse the meaning of rich ideas, and convert them into usable knowledge.

This new class of natural language processing systems will be powered by neuro-symbolic methods that grasp both the syntax and the semantics of vast streams of language. They’ll connect complex language structures to the ideas they represent – and transcend today’s purely statistical approaches to language.

Top Work

Class-wise rationalization: teaching AI to weigh pros and cons

Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding

Teaching language models grammar really does make them smarter
05/29/2019 - MIT News

All Work

How to Spot Robots in a World of A.I.-Generated Text – The New York Times
ChatGPT detector could help spot cheaters using AI to write essays – NewScientist
Was this written by a robot? These tools help detect AI-generated text – Fast Company
This is your brain. This is your brain on code – MIT News
Daniel Huttenlocher – MIT ILP
Converting several audio streams into one voice makes it easier for AI to learn – IBM Research
AI that can learn the patterns of human language – MIT News
Jacob Andreas – MIT ILP
Will transformers take over artificial intelligence? – Quanta Magazine
Q&A: More-sustainable concrete with machine learning – MIT News
Fruit Fly Brain Hacked For Language Processing – Discover Magazine
An intro to the fast-paced world of artificial intelligence – MIT News
The Lottery Ticket Hypothesis for the Pre-trained BERT Networks
Shrinking massive neural networks used to model language – MIT News
Toward a machine learning model that can reason about everyday actions – MIT News
Research Highlights: ExBERT – InsideBIGDATA
Here’s what’s stopping AI from reaching human-like understanding – TheNextWeb
Reading between the lines with graph deep learning for NLP
IBM highlights new approach to infuse knowledge into NLP models – Tech Republic
Finding a good read among billions of choices – MIT News
Embedding Compression with Isotropic Iterative Quantization
Topics are more meaningful than words. AI for comparative literature.
Using geometry to understand documents
Neural language models as psycholinguistic subjects: Representations of syntactic state
Beyond RNNs: Positional Self-Attention with Co-Attention for Video Question Answering
Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing
Teaching language models grammar really does make them smarter – MIT News
IBM, MIT, and Harvard’s AI uses grammar rules to catch linguistic nuances of U.S. English – Venture Beat
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
An AI for generating fake news could also help detect it – MIT Technology Review
This Site Detects Whether Text Was Written by a Bot – Futurism
Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
Weakly Supervised Dense Event Captioning in Videos
emrQA: A Large Corpus for Question Answering on Electronic Medical Records
Dialog-based Interactive Image Retrieval
Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning