
Sentence Embedding Alignment for Lifelong Relation Extraction

Natural Language Processing

Authors

Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang

Published on

03/06/2019

Conventional approaches to relation extraction usually require a fixed set of pre-defined relations. Such a requirement is hard to meet in many real applications, especially when new data and relations are emerging incessantly and it is computationally expensive to store all data and re-train the whole model every time new data and relations come in. We formulate this challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods that avoid catastrophically forgetting knowledge learned from previous tasks. We first investigate a modified version of stochastic gradient methods with a replay memory, which surprisingly outperforms recent state-of-the-art lifelong learning methods. We further propose to alleviate the forgetting problem by anchoring the sentence embedding space. Specifically, we utilize an explicit alignment model to mitigate the sentence embedding distortion of the learned model when training on new data and new relations. Experimental results on multiple benchmarks show that our proposed method significantly outperforms state-of-the-art lifelong learning approaches.
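
The two ingredients described above, replaying a small memory of examples from earlier relations and an explicit alignment model that anchors the sentence embedding space, can be sketched as follows. This is a minimal illustration under assumed names and shapes (the encoder, classifier, linear alignment layer, memory format, and loss weight lam are all placeholders), not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def train_on_new_task(encoder, classifier, align, new_batches,
                      mem_x, mem_y, anchors, lam=1.0, lr=1e-3):
    # mem_x / mem_y: stored sentences (as feature tensors) and relation labels
    # from earlier tasks; anchors: their embeddings computed with the encoder
    # *before* training on the new task starts.
    params = (list(encoder.parameters()) + list(classifier.parameters())
              + list(align.parameters()))
    opt = torch.optim.SGD(params, lr=lr)
    for x, y in new_batches:
        opt.zero_grad()
        # classification loss on the new relations
        loss = F.cross_entropy(classifier(encoder(x)), y)
        # replay loss keeps earlier relations predictable
        emb = encoder(mem_x)
        loss = loss + F.cross_entropy(classifier(emb), mem_y)
        # alignment term: penalize drift of the (aligned) memory embeddings
        # away from their pre-task anchors, limiting embedding distortion
        loss = loss + lam * F.mse_loss(align(emb), anchors)
        loss.backward()
        opt.step()

# Toy usage with random data, purely to show the expected shapes.
dim, n_rel = 32, 10
encoder = nn.Sequential(nn.Linear(100, dim), nn.Tanh())  # stand-in sentence encoder
classifier = nn.Linear(dim, n_rel)
align = nn.Linear(dim, dim)
mem_x, mem_y = torch.randn(20, 100), torch.randint(0, n_rel, (20,))
anchors = encoder(mem_x).detach()
new_batches = [(torch.randn(16, 100), torch.randint(0, n_rel, (16,))) for _ in range(5)]
train_on_new_task(encoder, classifier, align, new_batches, mem_x, mem_y, anchors)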

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/abs-1903-02588,
  author    = {Hong Wang and
               Wenhan Xiong and
               Mo Yu and
               Xiaoxiao Guo and
               Shiyu Chang and
               William Yang Wang},
  title     = {Sentence Embedding Alignment for Lifelong Relation Extraction},
  journal   = {CoRR},
  volume    = {abs/1903.02588},
  year      = {2019},
  url       = {http://arxiv.org/abs/1903.02588},
  archivePrefix = {arXiv},
  eprint    = {1903.02588},
  timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1903-02588.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}