
Lexicon Learning for Few-Shot Neural Sequence Modeling

ACL 2021

Ekin Akyürek and Jacob Andreas

Published on 08/06/2021

Sequence-to-sequence transduction is the core problem in language processing applications as diverse as semantic parsing, machine translation, and instruction following. The neural network models that provide the dominant solution to these problems are brittle, especially in low-resource settings: they fail to generalize correctly or systematically from small datasets. Past work has shown that many failures of systematic generalization arise from neural models’ inability to disentangle lexical phenomena from syntactic ones. To address this, we augment neural decoders with a lexical translation mechanism that generalizes existing copy mechanisms to incorporate learned, decontextualized, token-level translation rules. We describe how to initialize this mechanism using a variety of lexicon learning algorithms, and show that it improves systematic generalization on a diverse set of sequence modeling tasks drawn from cognitive science, formal semantics, and machine translation.
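
To make the mechanism concrete, the sketch below shows one way to mix an ordinary decoder output distribution with an attention-weighted, token-level lexical translation distribution, in the spirit of a generalized copy mechanism. This is a minimal PyTorch illustration, not the paper's implementation: the names (W_out, W_gate, lexicon), the sigmoid gate, and all sizes are assumptions made for the example. In the paper, the lexicon is initialized by a lexicon learning algorithm rather than at random.

import torch
import torch.nn.functional as F

# Illustrative sizes only.
SRC_VOCAB, TGT_VOCAB, HIDDEN = 50, 60, 32

# Parameters are random here purely for the demo; a real model would
# learn W_out and W_gate, and would initialize `lexicon` from a
# lexicon learning algorithm as described in the abstract.
W_out = torch.randn(TGT_VOCAB, HIDDEN)       # standard output projection
W_gate = torch.randn(1, HIDDEN)              # gate between writing and translating
lexicon = torch.randn(SRC_VOCAB, TGT_VOCAB)  # decontextualized token-level scores

def output_distribution(h_t, attn, src_ids):
    """Mix a contextual writing distribution with a lexical one.

    h_t:     (HIDDEN,)  decoder state at step t
    attn:    (src_len,) attention weights over source positions
    src_ids: (src_len,) source token ids
    """
    # Ordinary decoder distribution over target tokens.
    p_write = F.softmax(W_out @ h_t, dim=-1)          # (TGT_VOCAB,)

    # Translate each attended source token through the lexicon,
    # then average the per-token distributions by attention weight.
    per_token = F.softmax(lexicon[src_ids], dim=-1)   # (src_len, TGT_VOCAB)
    p_lex = attn @ per_token                          # (TGT_VOCAB,)

    # A scalar gate interpolates the two distributions.
    g = torch.sigmoid(W_gate @ h_t)                   # in (0, 1)
    return g * p_write + (1 - g) * p_lex

# Tiny usage example.
h_t = torch.randn(HIDDEN)
src_ids = torch.tensor([3, 17, 42])
attn = F.softmax(torch.randn(3), dim=-1)
p = output_distribution(h_t, attn, src_ids)
assert torch.isclose(p.sum(), torch.tensor(1.0))

Because the lexicon scores each source token independently of its context, the gate can route rare or novel tokens through learned token-level translation rules while the contextual distribution handles everything else.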

Please cite our work using the BibTeX below.

@inproceedings{akyurek-andreas-2021-lexicon,
    title = "Lexicon Learning for Few Shot Sequence Modeling",
    author = "Akyurek, Ekin  and
      Andreas, Jacob",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.382",
    doi = "10.18653/v1/2021.acl-long.382",
    pages = "4934--4946",
}