Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
Authors
- Yiwen Wang
- Jennifer Hu
- Roger Levy
- Peng Qian
Published on
November 11, 2021
Prior work has shown that structural supervision helps English language models learn generalizations about syntactic phenomena such as subject-verb agreement. However, it remains unclear whether such an inductive bias would also improve language models’ ability to learn grammatical dependencies in typologically different languages. Here we investigate this question in Mandarin Chinese, which has a logographic, largely syllable-based writing system; different word order; and sparser morphology than English. We train LSTMs, Recurrent Neural Network Grammars, Transformer language models, and Transformer-parameterized generative parsing models on two Mandarin Chinese datasets of different sizes. We evaluate the models’ ability to learn different aspects of Mandarin grammar that assess syntactic and semantic relationships. We find suggestive evidence that structural supervision helps models represent syntactic state across intervening content and improves performance in low-data settings, indicating that the benefits of hierarchical inductive biases in acquiring dependency relationships may extend beyond English.
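The controlled evaluations follow the minimal-pair paradigm common in this line of work: a model is credited with knowledge of a grammatical dependency if it assigns higher probability (lower surprisal) to the grammatical member of a matched sentence pair. The sketch below illustrates this comparison with an off-the-shelf autoregressive Mandarin LM; the model name, the classifier-noun test item, and the `total_surprisal` helper are illustrative assumptions, not the paper's actual models or test suites.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative off-the-shelf Mandarin LM; the paper trains its own
# LSTM / RNNG / Transformer models, which are not reproduced here.
MODEL_NAME = "uer/gpt2-chinese-cluecorpussmall"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def total_surprisal(sentence: str) -> float:
    """Total surprisal of a sentence in nats: -log p(sentence)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position t-1 predict the token at position t.
    log_probs = F.log_softmax(logits[0, :-1], dim=-1)
    token_scores = log_probs.gather(1, ids[0, 1:].unsqueeze(1))
    return -token_scores.sum().item()

# Hypothetical minimal pair probing the classifier-noun dependency:
# 本 is the correct classifier for 书 "book"; 条 is not.
grammatical   = "他读了一本书。"   # "He read one [CL] book."
ungrammatical = "他读了一条书。"   # same sentence, wrong classifier

print(total_surprisal(grammatical), total_surprisal(ungrammatical))
# A model that has learned the dependency should assign lower total
# surprisal to the grammatical sentence.
```

Aggregating such pairwise comparisons over many controlled item sets is what allows model classes (e.g., structurally supervised vs. purely sequential) to be compared on specific grammatical phenomena.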
Please cite our work using the BibTeX below.
@inproceedings{wang-etal-2021-controlled,
    title = "Controlled Evaluation of Grammatical Knowledge in {M}andarin {C}hinese Language Models",
    author = "Wang, Yiwen and
      Hu, Jennifer and
      Levy, Roger and
      Qian, Peng",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.454",
    doi = "10.18653/v1/2021.emnlp-main.454",
    pages = "5604--5620",
}