Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference

Hitomi Yanaka, Koji Mineshima


Abstract
Despite the success of multilingual pre-trained language models, it remains unclear to what extent these models have human-like generalization capacity across languages. The aim of this study is to investigate the out-of-distribution generalization of pre-trained language models through Natural Language Inference (NLI) in Japanese, the typological properties of which are different from those of English. We introduce a synthetically generated Japanese NLI dataset, called the Japanese Adversarial NLI (JaNLI) dataset, which is inspired by the English HANS dataset and is designed to require understanding of Japanese linguistic phenomena and illuminate the vulnerabilities of models. Through a series of experiments to evaluate the generalization performance of both Japanese and multilingual BERT models, we demonstrate that there is much room to improve current models trained on Japanese NLI tasks. Furthermore, a comparison of human performance and model performance on the different types of garden-path sentences in the JaNLI dataset shows that structural phenomena that ease interpretation of garden-path sentences for human readers do not help models in the same way, highlighting a difference between human readers and the models.
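As an illustration of the evaluation setting described in the abstract, the following is a minimal sketch (in Python, using Hugging Face transformers) of classifying a JaNLI-style premise–hypothesis pair with a BERT model. The checkpoint name, label order, and example sentence pair are illustrative assumptions, not taken from the paper or the released code; the word-order-swapped pair merely mimics the kind of adversarial example the dataset targets.
```python
# Minimal sketch (not the authors' code): probe an NLI classifier on a
# JaNLI-style Japanese premise-hypothesis pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint: a Japanese (or multilingual) BERT fine-tuned on an
# NLI dataset with entailment / non-entailment labels.
checkpoint = "path/to/japanese-bert-finetuned-on-nli"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.eval()

premise = "学生が先生を励ました。"    # "The student encouraged the teacher."
hypothesis = "先生が学生を励ました。"  # subject/object swap -> non-entailment

# Encode the sentence pair and predict entailment vs. non-entailment.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
labels = ["entailment", "non-entailment"]  # label order is an assumption
print(labels[int(probs.argmax())], probs.squeeze().tolist())
```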
Anthology ID:
2021.blackboxnlp-1.26
Volume:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
337–349
URL:
https://aclanthology.org/2021.blackboxnlp-1.26
DOI:
10.18653/v1/2021.blackboxnlp-1.26
Cite (ACL):
Hitomi Yanaka and Koji Mineshima. 2021. Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 337–349, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference (Yanaka & Mineshima, BlackboxNLP 2021)
PDF:
https://aclanthology.org/2021.blackboxnlp-1.26.pdf
Code
verypluming/janli
Data
JaNLI, SICK, SNLI