Research

Self-Supervised Learning for Contextualized Extractive Summarization

ACL

Authors

Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang

Published on

11/07/2019

Existing models for extractive summarization are usually trained from scratch with a cross-entropy loss, which does not explicitly capture the global context at the document level. In this paper, we aim to improve this task by introducing three auxiliary pre-training tasks that learn to capture the document-level context in a self-supervised fashion. Experiments on the widely-used CNN/DM dataset validate the effectiveness of the proposed auxiliary tasks. Furthermore, we show that after pre-training, a clean model with simple building blocks is able to outperform previous state-of-the-art models that are carefully designed.
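To make the idea of a self-supervised, document-level auxiliary task concrete, here is a minimal illustrative sketch (not the paper's exact formulation) of one such objective: two sentences in a document are randomly switched, and a sentence-level encoder is trained to predict, for each position, whether the sentence there was moved. The `SwitchPredictor` model, its dimensions, and the training setup below are hypothetical placeholders, assuming sentences are already encoded as fixed-size vectors.

```python
# Illustrative sketch of a "sentence switch" self-supervised auxiliary task.
# Assumptions: sentence vectors are precomputed; model sizes are placeholders.
import random
import torch
import torch.nn as nn

class SwitchPredictor(nn.Module):
    """Scores each sentence vector for 'was this sentence switched?'."""
    def __init__(self, sent_dim=768, hidden_dim=256):
        super().__init__()
        # A document-level encoder (here a single bi-LSTM layer) supplies the
        # global context that a sentence-by-sentence cross-entropy loss lacks.
        self.context = nn.LSTM(sent_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, sent_vecs):                 # (batch, num_sents, sent_dim)
        ctx, _ = self.context(sent_vecs)          # (batch, num_sents, 2*hidden_dim)
        return self.classifier(ctx).squeeze(-1)   # (batch, num_sents) logits

def make_switch_example(sent_vecs):
    """Swap two random sentences; label 1 marks the switched positions."""
    n = sent_vecs.size(0)
    labels = torch.zeros(n)
    if n >= 2:
        i, j = random.sample(range(n), 2)
        sent_vecs = sent_vecs.clone()
        sent_vecs[[i, j]] = sent_vecs[[j, i]]     # swap the two sentence vectors
        labels[[i, j]] = 1.0
    return sent_vecs, labels

# Toy usage: one "document" of 10 sentences with 768-d sentence vectors.
model = SwitchPredictor()
loss_fn = nn.BCEWithLogitsLoss()
doc = torch.randn(10, 768)
inputs, labels = make_switch_example(doc)
logits = model(inputs.unsqueeze(0))               # add a batch dimension
loss = loss_fn(logits, labels.unsqueeze(0))
loss.backward()                                   # one pre-training step on the auxiliary task
```

After pre-training on such a task, the document-level encoder can be reused and fine-tuned for extractive summarization with the standard cross-entropy objective.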

Please cite our work using the BibTeX below.

@inproceedings{wang-etal-2019-self-supervised,
    title = "Self-Supervised Learning for Contextualized Extractive Summarization",
    author = "Wang, Hong  and
      Wang, Xin  and
      Xiong, Wenhan  and
      Yu, Mo  and
      Guo, Xiaoxiao  and
      Chang, Shiyu  and
      Wang, William Yang",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P19-1214",
    doi = "10.18653/v1/P19-1214",
    pages = "2221--2227",
}