Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control

EMNLP · Published on 11/07/2019

Selective rationalization has become a common mechanism for ensuring that predictive models reveal how they use the available features. The selection may be soft or hard, and identifies the subset of input features relevant for prediction. The setup can be viewed as a cooperative game between the selector (a.k.a. the rationale generator) and the predictor, which makes use of only the selected features. The cooperative setting may, however, be compromised for two reasons. First, the generator typically has no direct access to the outcome it aims to justify, resulting in poor performance. Second, there is typically no control exerted over the information left outside the selection. We revise the overall cooperative framework to address these challenges. We introduce an introspective model that explicitly predicts the outcome and incorporates it into the selection process. Moreover, we explicitly control the rationale complement via an adversary so that no useful information is left out of the selection. We show that these two complementary mechanisms both maintain high predictive accuracy and yield comprehensive rationales.
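The structure of the revised game can be sketched as follows. This is a minimal toy illustration of the three players described above, not the paper's actual neural implementation: the function names, the threshold-based "models", and the loss composition are all hypothetical stand-ins chosen only to make the roles concrete.

```python
# Toy sketch of the three-player rationalization game: an introspective
# generator, a predictor over the rationale, and an adversary over the
# complement. All components here are illustrative placeholders.

def generator(x, y_hat, threshold=0.5):
    """Introspective generator: uses a predicted outcome y_hat to build
    the selection mask (1 = selected, 0 = left in the complement)."""
    # Toy rule: keep features whose activation agrees with the predicted label.
    return [1 if (xi > threshold) == (y_hat == 1) else 0 for xi in x]

def predictor(x, mask):
    """Predicts the outcome from the selected rationale only."""
    selected = [xi for xi, mi in zip(x, mask) if mi == 1]
    return 1 if sum(selected) > len(selected) / 2 else 0

def adversary(x, mask):
    """Complement predictor: tries to recover the outcome from the
    *unselected* features alone."""
    complement = [xi for xi, mi in zip(x, mask) if mi == 0]
    if not complement:
        return None  # nothing left outside the selection to exploit
    return 1 if sum(complement) > len(complement) / 2 else 0

def cooperative_loss(y, y_pred, y_adv):
    """The generator is rewarded when the predictor succeeds and the
    adversary fails -- i.e., the complement carries no useful signal."""
    pred_loss = 0 if y_pred == y else 1
    adv_gain = 1 if (y_adv is not None and y_adv == y) else 0
    return pred_loss + adv_gain
```

In this sketch, a low `cooperative_loss` means the rationale alone suffices for prediction while its complement is uninformative, which is exactly the "comprehensive rationale" property the complement control targets.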

Please cite our work using the BibTeX below.

@inproceedings{yu-etal-2019-rethinking,
    title = "Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control",
    author = "Yu, Mo  and
      Chang, Shiyu  and
      Zhang, Yang  and
      Jaakkola, Tommi",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1420",
    doi = "10.18653/v1/D19-1420",
    pages = "4094--4103",
}