Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing

Graph Deep Learning




Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance. Such techniques, however, are not directly applicable to more open and practical scenarios where the type set is not restricted by KB schema and includes a vast number of free-form types. To model the underlying label correlations without access to manually annotated label structures, we introduce a novel label-relational inductive bias, represented by a graph propagation layer that effectively encodes both global label co-occurrence statistics and word-level similarities. On a large dataset with over 10,000 free-form types, the graph-enhanced model equipped with an attention-based matching module achieves a much higher recall score while maintaining high precision. Specifically, it achieves a 15.3% relative F1 improvement and also produces fewer inconsistent outputs. We further show that a simple modification of our proposed graph layer can also improve the performance on a conventional and widely-tested dataset that only includes KB-schema types.
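The core idea can be illustrated with a minimal sketch of a label-graph propagation step. This is not the paper's exact layer: the function name, the co-occurrence-based adjacency, and the simple row normalization are illustrative assumptions, standing in for the propagation scheme the paper describes.

```python
import numpy as np

def label_graph_propagation(label_embs, cooccur, eps=1e-8):
    """One graph-propagation step over a label co-occurrence graph.

    A minimal sketch (not the authors' exact layer): labels are graph
    nodes, edge weights come from global label co-occurrence counts,
    and each label embedding is replaced by a degree-normalized
    weighted average of its neighbors' embeddings (plus itself).
    """
    n = cooccur.shape[0]
    A = cooccur + np.eye(n)               # add self-loops so each label keeps its own signal
    deg = A.sum(axis=1, keepdims=True)    # per-label degree for normalization
    A_norm = A / (deg + eps)              # row-normalize the adjacency
    return A_norm @ label_embs            # propagate embeddings along co-occurrence edges

# Toy example: 3 labels with 2-dim embeddings; labels 0 and 1 co-occur often,
# so their propagated embeddings are pulled toward each other.
embs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cooc = np.array([[0.0, 5.0, 0.0],
                 [5.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0]])
out = label_graph_propagation(embs, cooc)
```

In this sketch, frequently co-occurring labels end up with similar representations, which is the inductive bias the propagation layer imposes; the paper additionally incorporates word-level similarities between label names into the graph.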

Please cite our work using the BibTeX below.

@article{xiong2019imposing,
  author    = {Wenhan Xiong and
               Jiawei Wu and
               Deren Lei and
               Mo Yu and
               Shiyu Chang and
               Xiaoxiao Guo and
               William Yang Wang},
  title     = {Imposing Label-Relational Inductive Bias for Extremely Fine-Grained
               Entity Typing},
  journal   = {CoRR},
  volume    = {abs/1903.02591},
  year      = {2019},
  url       = {https://arxiv.org/abs/1903.02591},
  archivePrefix = {arXiv},
  eprint    = {1903.02591},
  timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}