
Predicting Antonyms in Context using BERT

Ayana Niwa, Keisuke Nishiguchi, Naoaki Okazaki


Abstract
We address the task of antonym prediction in context, framed as a fill-in-the-blank problem. This task setting is unique and practical because filling the blank requires both contrastiveness with the given word and naturalness of the resulting text. We propose methods for fine-tuning pre-trained masked language models (BERT) for context-aware antonym prediction. The experimental results demonstrate that these methods have a positive impact on antonym prediction in context. Moreover, human evaluation reveals that more than 85% of the predictions made by the proposed method are acceptable as antonyms.
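The task setting can be illustrated with a minimal masked-prediction sketch. The snippet below queries an off-the-shelf bert-base-uncased checkpoint via the HuggingFace transformers library for candidate fillers of a blank in a context sentence; it is an illustrative assumption of the fill-in-the-blank setup only, and does not reproduce the paper's fine-tuning methods or scoring.

```python
# Illustrative sketch of the fill-in-the-blank setting: given a context with a
# blank and a contrasting word ("cheap"), ask a masked language model for
# candidate fillers. Off-the-shelf BERT is used here for illustration; the
# paper's fine-tuned models are not reproduced.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# The blank should be filled with a word that contrasts with "cheap"
# while keeping the sentence natural.
text = f"The service was cheap, but the food was {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the top-5 candidate tokens.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_index].topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))  # candidate fillers, not guaranteed antonyms
```

As the final comment notes, a plain masked LM only proposes plausible fillers; the paper's contribution is fine-tuning so that the filler is also contrastive with the given word.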
Anthology ID:
2021.inlg-1.6
Volume:
Proceedings of the 14th International Conference on Natural Language Generation
Month:
August
Year:
2021
Address:
Aberdeen, Scotland, UK
Editors:
Anya Belz, Angela Fan, Ehud Reiter, Yaji Sripada
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
48–54
URL:
https://aclanthology.org/2021.inlg-1.6
DOI:
10.18653/v1/2021.inlg-1.6
Cite (ACL):
Ayana Niwa, Keisuke Nishiguchi, and Naoaki Okazaki. 2021. Predicting Antonyms in Context using BERT. In Proceedings of the 14th International Conference on Natural Language Generation, pages 48–54, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Cite (Informal):
Predicting Antonyms in Context using BERT (Niwa et al., INLG 2021)
PDF:
https://aclanthology.org/2021.inlg-1.6.pdf
Data
SemEval-2018 Task-9