
Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers

ACL

Authors

Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, and Saloni Potdar

Published on

11/07/2019

State-of-the-art solutions for extracting multiple entity-relations from an input paragraph typically require multiple-pass encoding of the input. This paper proposes a new solution that completes the multiple entity-relations extraction task with only one-pass encoding of the input corpus and achieves new state-of-the-art accuracy on the ACE 2005 benchmark. Our solution is built on top of pre-trained self-attentive (Transformer) models. Since our method computes all relations at once in a single pass, it scales easily to larger datasets, which makes it more usable in real-world applications.
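The one-pass idea can be illustrated with a minimal sketch: encode the paragraph once, then score a relation for every entity pair from the shared token representations. This is not the paper's implementation; the toy Transformer encoder standing in for the pre-trained model, the mean-pooling of entity spans, and all sizes and names below are illustrative assumptions.

# Minimal sketch of one-pass multi-relation extraction (illustrative only).
import torch
import torch.nn as nn

class OnePassRelationExtractor(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, num_relations=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        # Stand-in for a pre-trained self-attentive (Transformer) encoder.
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Scores one (head, tail) entity pair over the relation labels.
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, token_ids, entity_spans):
        # Single encoding pass over the whole input.
        hidden = self.encoder(self.embed(token_ids))            # (1, seq_len, hidden)
        # Mean-pool each entity span from the shared encoding.
        ents = [hidden[0, s:e].mean(dim=0) for s, e in entity_spans]
        # Score every ordered entity pair from the same one-pass encoding.
        scores = {}
        for i, head in enumerate(ents):
            for j, tail in enumerate(ents):
                if i != j:
                    scores[(i, j)] = self.classifier(torch.cat([head, tail]))
        return scores

# Toy usage: 3 entities in one sentence -> 6 directed pairs scored in one pass.
model = OnePassRelationExtractor()
token_ids = torch.randint(0, 30522, (1, 20))
pair_scores = model(token_ids, entity_spans=[(0, 2), (5, 7), (12, 14)])
print(len(pair_scores))  # 6

The point of the sketch is the contrast with multi-pass baselines, which would re-encode the input once per entity pair; here the encoder runs once and only the lightweight pair classifier is repeated.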

Please cite our work using the BibTeX below.

@inproceedings{wang-etal-2019-extracting,
    title = "Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers",
    author = "Wang, Haoyu  and
      Tan, Ming  and
      Yu, Mo  and
      Chang, Shiyu  and
      Wang, Dakuo  and
      Xu, Kun  and
      Guo, Xiaoxiao  and
      Potdar, Saloni",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P19-1132",
    doi = "10.18653/v1/P19-1132",
    pages = "1371--1377",
}