Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model

Steven Derby, Paul Miller, Brian Murphy, Barry Devereux


Abstract
Performance in language modelling has been significantly improved by training recurrent neural networks on large corpora. This progress has come at the cost of interpretability and an understanding of how these architectures function, making principled development of better language models more difficult. We look inside a state-of-the-art neural language model to analyse how this model represents high-level lexico-semantic information. In particular, we investigate how the model represents words by extracting activation patterns where they occur in the text, and compare these representations directly to human semantic knowledge.
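The comparison the abstract describes — extracting per-word activation patterns from the model and relating them to human semantic knowledge — can be sketched as a representational-similarity analysis. The following is a minimal illustrative sketch, not the authors' code: the word vectors and human ratings are toy values, standing in for averaged hidden-state activations and behavioural similarity judgements.

```python
# Illustrative sketch: comparing model word representations to human
# similarity judgements. All vectors and ratings below are toy values,
# not data from the paper.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def spearman(xs, ys):
    """Spearman rank correlation (no tie correction; for illustration)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy "activation patterns" for four words (e.g. hidden states averaged
# over each word's occurrences in a corpus).
vectors = {
    "cat":   [0.9, 0.1, 0.3],
    "dog":   [0.8, 0.2, 0.4],
    "car":   [0.1, 0.9, 0.2],
    "truck": [0.2, 0.8, 0.1],
}

# Toy human similarity ratings for word pairs (0-10 scale).
human = {("cat", "dog"): 9.0, ("car", "truck"): 8.5,
         ("cat", "car"): 1.5, ("dog", "truck"): 2.0}

pairs = sorted(human)
model_sims = [cosine(vectors[a], vectors[b]) for a, b in pairs]
human_sims = [human[p] for p in pairs]
rho = spearman(model_sims, human_sims)
print(f"Spearman rho between model and human similarities: {rho:.2f}")
```

A high rank correlation on such pairs would indicate that the layer's geometry mirrors human judgements of word relatedness; running the same analysis at different layers lets one localise where lexico-semantic information is represented.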
Anthology ID:
W18-5449
Volume:
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
362–364
URL:
https://aclanthology.org/W18-5449
DOI:
10.18653/v1/W18-5449
Cite (ACL):
Steven Derby, Paul Miller, Brian Murphy, and Barry Devereux. 2018. Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 362–364, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model (Derby et al., EMNLP 2018)
PDF:
https://aclanthology.org/W18-5449.pdf