The time-course of cortical responses to speech revealed by fast optical imaging

Joseph C. Toscano, Nathaniel D. Anderson, Monica Fabiani, Gabriele Gratton, Susan M. Garnsey

Research output: Contribution to journal › Article

Abstract

Recent work has sought to describe the time-course of spoken word recognition, from initial acoustic cue encoding through lexical activation, and identify cortical areas involved in each stage of analysis. However, existing methods are limited in either temporal or spatial resolution, and as a result, have only provided partial answers to the question of how listeners encode acoustic information in speech. We present data from an experiment using a novel neuroimaging method, fast optical imaging, to directly assess the time-course of speech perception, providing non-invasive measurement of speech sound representations, localized to specific cortical areas. We find that listeners encode speech in terms of continuous acoustic cues at early stages of processing (ca. 96 ms post-stimulus onset), and begin activating phonological category representations rapidly (ca. 144 ms post-stimulus). Moreover, cue-based representations are widespread in the brain and overlap in time with graded category-based representations, suggesting that spoken word recognition involves simultaneous activation of both continuous acoustic cues and phonological categories.

Original language: English (US)
Pages (from-to): 32-42
Number of pages: 11
Journal: Brain and Language
Volume: 184
DOI: 10.1016/j.bandl.2018.06.006
State: Published - Sep 2018

Keywords

  • Event-related potentials
  • Optical imaging
  • Phonological categorization
  • Speech perception
  • Spoken language processing

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Language and Linguistics
  • Linguistics and Language
  • Cognitive Neuroscience
  • Speech and Hearing

Cite this

Toscano, J. C., Anderson, N. D., Fabiani, M., Gratton, G., & Garnsey, S. M. (2018). The time-course of cortical responses to speech revealed by fast optical imaging. Brain and Language, 184, 32-42. https://doi.org/10.1016/j.bandl.2018.06.006

Journal: Brain and Language (ISSN 0093-934X, Academic Press Inc.)
DOI: 10.1016/j.bandl.2018.06.006
PubMed ID: 29960165
Scopus ID: 85049069255
Scopus record: http://www.scopus.com/inward/record.url?scp=85049069255&partnerID=8YFLogxK
Scopus cited-by: http://www.scopus.com/inward/citedby.url?scp=85049069255&partnerID=8YFLogxK