Deriving Machine Attention from Human Rationales

EMNLP

Attention-based models are successful when trained on large amounts of data. In this paper, we demonstrate that even in the low-resource scenario, attention can be learned effectively. To this end, we start with discrete human-annotated rationales and map them into continuous attention. Our central hypothesis is that this mapping is general across domains, and thus can be transferred from resource-rich domains to low-resource ones. Our model jointly learns a domain-invariant representation and induces the desired mapping between rationales and attention. Our empirical results validate this hypothesis and show that our approach delivers significant gains over state-of-the-art baselines, yielding over 15% average error reduction on benchmark datasets.
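Below is a minimal, hypothetical sketch of the core idea described in the abstract: a small module that maps a binary human rationale mask, together with contextual token encodings, into a continuous attention distribution. This is not the authors' released implementation; the class name, layer choices, and variable names are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class RationaleToAttention(nn.Module):
    """Maps token encodings plus a 0/1 rationale mask to soft attention weights."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Score each token from its (ideally domain-invariant) encoding and its rationale bit.
        self.scorer = nn.Linear(hidden_dim + 1, 1)

    def forward(self, token_enc: torch.Tensor, rationale: torch.Tensor) -> torch.Tensor:
        # token_enc: (batch, seq_len, hidden_dim); rationale: (batch, seq_len) with values in {0, 1}
        feats = torch.cat([token_enc, rationale.unsqueeze(-1).float()], dim=-1)
        scores = self.scorer(feats).squeeze(-1)   # (batch, seq_len)
        return torch.softmax(scores, dim=-1)      # continuous attention over tokens

# Toy usage: one sentence of 5 tokens, with tokens 2 and 3 marked as the rationale.
enc = torch.randn(1, 5, 32)
mask = torch.tensor([[0, 0, 1, 1, 0]])
attn = RationaleToAttention(32)(enc, mask)
print(attn.shape)  # torch.Size([1, 5]); weights sum to 1
```

In the paper's setting, such a mapping would be learned on resource-rich domains (where both rationales and task labels are plentiful) and then transferred to generate attention for a low-resource target task.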

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/abs-1808-09367,
  author    = {Yujia Bao and
               Shiyu Chang and
               Mo Yu and
               Regina Barzilay},
  title     = {Deriving Machine Attention from Human Rationales},
  journal   = {CoRR},
  volume    = {abs/1808.09367},
  year      = {2018},
  url       = {http://arxiv.org/abs/1808.09367},
  archivePrefix = {arXiv},
  eprint    = {1808.09367},
  timestamp = {Mon, 03 Sep 2018 13:36:40 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1808-09367.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}