Neural language models as psycholinguistic subjects: Representations of syntactic state

Natural Language Processing

Authors

  • Richard Futrell
  • Ethan Wilcox
  • Takashi Morita
  • Peng Qian
  • Miguel Ballesteros
  • Roger Levy

Published on

March 8, 2019

We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state. To do so, we examine model behavior on artificial sentences containing a variety of syntactically complex structures. We test four models: two publicly available LSTM sequence models of English (Jozefowicz et al., 2016; Gulordava et al., 2018) trained on large datasets; an RNNG (Dyer et al., 2016) trained on a small, parsed dataset; and an LSTM trained on the same small corpus as the RNNG. We find evidence that the LSTMs trained on large datasets represent syntactic state over large spans of text in a way comparable to the RNNG, whereas the LSTM trained on the small dataset represents syntactic state only weakly, if at all.
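The evaluation paradigm behind these experiments treats a language model like a human subject in a reading-time study: per-word surprisal, -log2 P(word | context), is measured at critical regions of minimally different sentences, and differences in surprisal are taken as evidence about the syntactic state the model is tracking. The sketch below illustrates that measurement with a toy word-level LSTM in PyTorch; the model, vocabulary, and function names (ToyLSTMLM, surprisals) are illustrative placeholders, not the models or code used in the paper.

# Minimal, self-contained sketch of surprisal-based evaluation for an LSTM
# language model (toy model and vocabulary; not the authors' code or models).
import math
import torch
import torch.nn as nn

class ToyLSTMLM(nn.Module):
    """Tiny word-level LSTM language model, used only to illustrate the method."""
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden)  # logits over the next token at each position

def surprisals(model, vocab, sentence):
    """Return per-word surprisal -log2 P(w_t | w_<t) for each word after the first."""
    ids = torch.tensor([[vocab[w] for w in sentence]])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids), dim=-1)
    scores = []
    for t in range(1, ids.size(1)):
        lp = log_probs[0, t - 1, ids[0, t]].item()       # log P(w_t | w_<t), in nats
        scores.append((sentence[t], -lp / math.log(2)))  # convert nats to bits
    return scores

# Example: surprisal profile over a sentence with a center-embedded relative clause.
words = "the dog that the cat chased barked .".split()
vocab = {w: i for i, w in enumerate(words)}
model = ToyLSTMLM(len(vocab))
for word, s in surprisals(model, vocab, words):
    print(f"{word:8s} {s:6.2f} bits")

In the controlled-experiment setting, the same measurement is repeated over sets of minimally different sentences, and the contrast in surprisal at the critical word (here, for example, at the embedded verb) is the quantity of interest.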

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/abs-1903-03260,
  author    = {Richard Futrell and
               Ethan Wilcox and
               Takashi Morita and
               Peng Qian and
               Miguel Ballesteros and
               Roger Levy},
  title     = {Neural Language Models as Psycholinguistic Subjects: Representations
               of Syntactic State},
  journal   = {CoRR},
  volume    = {abs/1903.03260},
  year      = {2019},
  url       = {http://arxiv.org/abs/1903.03260},
  archivePrefix = {arXiv},
  eprint    = {1903.03260},
  timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1903-03260.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
