Transparency Helps Reveal When Language Models Learn Meaning

Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith

Research output: Contribution to journal › Article › peer-review

Abstract

Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations (i.e., languages with strong transparency), both autoregressive and masked language models successfully learn to emulate semantic relations between expressions. However, when denotations are changed to be context-dependent with the language otherwise unmodified, this ability degrades. Turning to natural language, our experiments with a specific phenomenon—referential opacity—add to the growing body of evidence that current language models do not represent natural language semantics well. We show this failure relates to the context-dependent nature of natural language form-meaning mappings.
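To make the notion of strong transparency concrete, the following minimal Python sketch contrasts a toy language whose expressions have context-independent denotations with a variant whose denotations depend on a context. This is an illustration of the definition only, not the paper's actual synthetic languages or experimental setup; the expression encoding and helper names are hypothetical.

```python
# Sketch (assumed example, not the paper's setup): contrasting context-independent
# and context-dependent denotations in a toy expression language.
from typing import Union

# Strongly transparent toy language: every expression is an arithmetic term
# whose denotation is fully determined by the expression itself.
Expr = Union[int, tuple]  # e.g. 3, ("add", 1, 2), ("mul", ("add", 1, 2), 4)

def denote_transparent(e: Expr) -> int:
    """Denotation depends only on the expression (strong transparency)."""
    if isinstance(e, int):
        return e
    op, left, right = e
    l, r = denote_transparent(left), denote_transparent(right)
    return l + r if op == "add" else l * r

# Context-dependent variant: expressions may contain free variables, so the
# same expression can denote different values under different contexts.
def denote_contextual(e, context: dict) -> int:
    """Denotation depends on both the expression and a variable assignment."""
    if isinstance(e, int):
        return e
    if isinstance(e, str):  # a free variable, resolved by the context
        return context[e]
    op, left, right = e
    l, r = denote_contextual(left, context), denote_contextual(right, context)
    return l + r if op == "add" else l * r

if __name__ == "__main__":
    expr = ("add", 1, ("mul", 2, 3))
    print(denote_transparent(expr))               # always 7, in any context
    var_expr = ("add", "x", 1)
    print(denote_contextual(var_expr, {"x": 2}))  # 3 under this context
    print(denote_contextual(var_expr, {"x": 5}))  # 6 under a different one
```

In the transparent case the meaning of an expression can be recovered from its form alone; in the contextual case the same string maps to different denotations depending on external state, which is the kind of form-meaning mapping the abstract associates with degraded emulation of semantic relations.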
Original language: English (US)
Pages (from-to): 617-634
Journal: Transactions of the Association for Computational Linguistics
Volume: 11
Early online date: Jun 20, 2023
DOIs
State: Published - Jun 20, 2023
Externally published: Yes
