Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader
Authors
- Shiyu Chang
- Mo Yu
- Xiaoxiao Guo
- William Yang Wang
- Wenhan Xiong
Published on
11/07/2019
We propose a new end-to-end question answering model that learns to aggregate answer evidence from an incomplete knowledge base (KB) and a set of retrieved text snippets. Under the assumption that the structured KB is easier to query and that the acquired knowledge can aid the understanding of unstructured text, our model first accumulates knowledge of entities from a question-related KB subgraph; it then reformulates the question in the latent space and reads the texts with the accumulated entity knowledge at hand. The evidence from the KB and the texts is finally aggregated to predict answers. On the widely used KBQA benchmark WebQSP, our model achieves consistent improvements across settings with different extents of KB incompleteness.
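The four-step pipeline in the abstract can be sketched schematically. The code below is a minimal illustrative sketch, not the paper's actual architecture: the embeddings are random toy vectors, the sigmoid gate is a stand-in for the paper's learned query reformulation, and the candidate scoring is a simple dot product. All names and dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden size (illustrative)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Step 1: accumulate entity knowledge from a question-related KB subgraph.
# Toy subgraph: 5 candidate entities, each with a random embedding.
question = rng.normal(size=d)
entity_embs = rng.normal(size=(5, d))
attn = softmax(entity_embs @ question)        # question-to-entity attention
kb_knowledge = attn @ entity_embs             # aggregated entity knowledge

# Step 2: reformulate the question in latent space.
# A sigmoid gate mixes the original question with the KB knowledge
# (a stand-in for the paper's learned reformulation).
gate = 1.0 / (1.0 + np.exp(-(question * kb_knowledge)))
reformulated_q = gate * question + (1.0 - gate) * kb_knowledge

# Step 3: read retrieved text snippets with the knowledge-enriched question.
snippet_embs = rng.normal(size=(3, d))        # 3 retrieved snippets
text_scores = snippet_embs @ reformulated_q   # match snippets to the question

# Step 4: aggregate KB and text evidence into final answer scores.
kb_scores = entity_embs @ reformulated_q      # score KB candidates
answer_scores = softmax(np.concatenate([kb_scores, text_scores]))
print(answer_scores.shape)  # 5 KB candidates + 3 text-supported candidates
```

The key design point this sketch mirrors is that text reading is conditioned on knowledge already accumulated from the KB, so the two evidence sources complement each other when the KB is incomplete.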
Please cite our work using the BibTeX below.
@inproceedings{xiong-etal-2019-improving,
title = "Improving Question Answering over Incomplete {KB}s with Knowledge-Aware Reader",
author = "Xiong, Wenhan and
Yu, Mo and
Chang, Shiyu and
Guo, Xiaoxiao and
Wang, William Yang",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1417",
doi = "10.18653/v1/P19-1417",
pages = "4258--4264",
}