Comprehension without segmentation: a proof of concept with naive discriminative learning

R. Harald Baayen, Cyrus Shaoul, Jon Willits, Michael Ramscar

Research output: Contribution to journal › Article › peer-review

Abstract

Current theories of auditory comprehension assume that the segmentation of speech into word forms is an essential prerequisite to understanding. We present a computational model that does not seek to learn word forms, but instead decodes the experiences discriminated by the speech input. At the heart of this model is a discrimination learning network trained on full utterances. This network constitutes an atemporal long-term memory system. A fixed-width short-term memory buffer projects a constantly updated moving window over the incoming speech onto the network's input layer. In response, the memory generates temporal activation functions for each of the output units. We show that this new discriminative perspective on auditory comprehension is consistent with young infants' sensitivity to the statistical structure of the input. Simulation studies, both with artificial language and with English child-directed speech, provide a first computational proof of concept and demonstrate the importance of utterance-wide co-learning.
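The keywords below name the Rescorla–Wagner equations as the learning rule at the heart of the model. As an illustration only (not the authors' implementation), the following is a minimal sketch of a Rescorla–Wagner discrimination network: cues here are hypothetical letter bigrams drawn from toy "utterances", and outcomes are the experiences they discriminate. The cue inventory, outcomes, and learning-rate value are all assumptions for the example.

```python
from collections import defaultdict

def rw_update(weights, cues, outcomes, all_outcomes, alpha_beta=0.01, lam=1.0):
    """One Rescorla-Wagner learning event.

    For every outcome, the prediction error is the difference between the
    target (lam if the outcome is present, 0 otherwise) and the summed
    activation it receives from the cues present in this event. Each
    present cue's weight to that outcome is nudged by a fraction
    (alpha_beta) of that error.
    """
    for o in all_outcomes:
        activation = sum(weights[(c, o)] for c in cues)
        target = lam if o in outcomes else 0.0
        delta = alpha_beta * (target - activation)
        for c in cues:
            weights[(c, o)] += delta

# Toy data: two overlapping cue sets ("hand" vs. "band" as bigrams),
# each paired with a distinct outcome. The shared bigrams (an, nd, d#)
# are uninformative; the initial bigrams discriminate the outcomes.
weights = defaultdict(float)
events = [
    ({"#h", "ha", "an", "nd", "d#"}, {"HAND"}),
    ({"#b", "ba", "an", "nd", "d#"}, {"BAND"}),
]
all_outcomes = {"HAND", "BAND"}

for _ in range(500):
    for cues, outs in events:
        rw_update(weights, cues, outs, all_outcomes)
```

After training, the discriminative initial bigrams end up with strong positive weights to the outcome they co-occur with and negative weights to the competing outcome, while the shared bigrams carry little discriminative weight, illustrating how cue-to-outcome learning can do its work without any prior segmentation into word forms.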

Original language: English (US)
Pages (from-to): 106-128
Number of pages: 23
Journal: Language, Cognition and Neuroscience
Volume: 31
Issue number: 1
DOIs
State: Published - Jan 2 2016
Externally published: Yes

Keywords

  • Auditory comprehension
  • Discriminative learning
  • Phonotactics
  • Rescorla–Wagner equations
  • Word segmentation

ASJC Scopus subject areas

  • Language and Linguistics
  • Experimental and Cognitive Psychology
  • Linguistics and Language
  • Cognitive Neuroscience

