Associate Professor, Department of Brain and Cognitive Sciences; Director, MIT Computational Psycholinguistics Laboratory
Roger Levy is an associate professor in MIT’s Department of Brain and Cognitive Sciences. His research focuses on theoretical and applied questions in the processing and acquisition of natural language. Linguistic communication involves the resolution of uncertainty over a potentially unbounded set of possible signals and meanings. How can a fixed set of knowledge and resources be deployed to manage this uncertainty? And how is this knowledge acquired? To address these questions, he combines computational modeling, psycholinguistic experimentation, and analysis of large naturalistic language datasets. This work furthers our understanding of the cognitive underpinnings of language processing and acquisition, and helps us design models and algorithms that allow machines to process human language.
Before coming to MIT, Levy was a professor at the University of California, San Diego. He was a Fulbright Fellow at the Inter-University Program for Chinese Language Study in Taipei, a research student at the University of Tokyo, and a postdoc at the University of Edinburgh. His awards include a Sloan Research Fellowship, an NSF CAREER Award, and a fellowship at Stanford’s Center for Advanced Study in the Behavioral Sciences. Levy earned a BS in mathematics from the University of Arizona and a PhD from Stanford University.
- Wilcox, E., Qian, P., Futrell, R., Ballesteros, M., Levy, R. (2019) Structural Supervision Improves Learning of Non-Local Grammatical Dependencies. Proceedings of the 18th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
- Futrell, R., Wilcox, E., Morita, T., Qian, P., Ballesteros, M., Levy, R. (2019) Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State. Proceedings of the 18th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
- Wilcox, E., Levy, R., Futrell, R. (2019) Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations. Proceedings of the Second BlackboxNLP Workshop.
- Futrell, R., Levy, R. (2019) Do RNNs Learn Human-Like Abstract Word Order Preferences? SCiL 2019, pp. 50–59.
- Jan. 24, 2020. New York Times, “She’s the Next President. Wait, Did You Read That Right?” on the Psychological Science study: Implicit Gender Bias in Linguistic Descriptions for Expected Events: The Cases of the 2016 United States and 2017 United Kingdom Elections.
- May 29, 2019. MIT News, “Teaching Language Models Grammar Really Does Make Them Smarter” on the NAACL study: Neural Language Models as Psycholinguistic Subjects.
- May 22, 2018. MIT News, “Gauging Language Proficiency through Eye Movement” on the NAACL study: Assessing Language Proficiency from Eye Movements in Reading.
- April 6, 2017. MIT News, “Articles of Faith” on the Psychological Science study: The Emergence of an Abstract Grammatical Category in Children’s Early Speech.