Sequence-Level Mixed Sample Data Augmentation

EMNLP

Authors

Demi Guo, Yoon Kim, Alexander M. Rush

Published on

11/18/2020

Despite their empirical success, neural networks still have difficulty capturing compositional aspects of natural language. This work proposes a simple data augmentation approach to encourage compositional behavior in neural models for sequence-to-sequence problems. Our approach, SeqMix, creates new synthetic examples by softly combining input/output sequences from the training set. We connect this approach to existing techniques such as SwitchOut (Wang et al., 2018) and word dropout (Sennrich et al., 2016), and show that these techniques are all approximating variants of a single objective. SeqMix consistently yields an improvement of approximately 1.0 BLEU over strong Transformer baselines on five different translation datasets. On tasks that require strong compositional generalization, such as SCAN and semantic parsing, SeqMix also offers further improvements.
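
The abstract only describes the idea at a high level. As a rough illustration of what "softly combining" two training pairs could look like, the Python sketch below mixes the one-hot token representations of two source/target sequences with a Beta-sampled weight, in the spirit of mixup. This is a sketch under stated assumptions, not the paper's exact SeqMix procedure: the function name, the Beta parameter, the equal-length/padding assumption, and the downstream soft-input training setup are all illustrative.

import torch
import torch.nn.functional as F

def soft_mix_pair(src_a, src_b, tgt_a, tgt_b, vocab_size, alpha=0.1):
    """Softly combine two (source, target) token-id sequences of equal length.

    Hypothetical sketch: returns mixed distributions over the vocabulary,
    which would be fed to a model that accepts soft token inputs and trained
    with a soft cross-entropy loss.
    """
    # Sample the mixing weight from a Beta distribution, as in standard mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    def one_hot(ids):
        return F.one_hot(ids, num_classes=vocab_size).float()

    mixed_src = lam * one_hot(src_a) + (1.0 - lam) * one_hot(src_b)
    mixed_tgt = lam * one_hot(tgt_a) + (1.0 - lam) * one_hot(tgt_b)
    return mixed_src, mixed_tgt, lam

# Hypothetical usage: mix two padded training pairs from a toy vocabulary of size 10.
src_a = torch.tensor([4, 7, 2, 0]); tgt_a = torch.tensor([5, 3, 2, 0])
src_b = torch.tensor([6, 1, 8, 2]); tgt_b = torch.tensor([9, 4, 1, 2])
mixed_src, mixed_tgt, lam = soft_mix_pair(src_a, src_b, tgt_a, tgt_b, vocab_size=10)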

This paper was published at EMNLP 2020.

Please cite our work using the BibTeX below.

@misc{guo2020sequencelevel,
      title={Sequence-Level Mixed Sample Data Augmentation}, 
      author={Demi Guo and Yoon Kim and Alexander M. Rush},
      year={2020},
      eprint={2011.09039},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}