Combining partial information from speech and text

Daniel Fogerty, Irraj Iftikhar, Rachel Madorskiy

Research output: Contribution to journal › Article › peer-review

Abstract

The current study investigated how partial speech and text information, distributed at various interruption rates, is combined to support sentence recognition in quiet. Speech and text stimuli were interrupted by silence and presented unimodally or combined in multimodal conditions. Across all conditions, performance was best at the highest interruption rates. Listeners were able to gain benefit from most multimodal presentations, even when the rate of interruption was mismatched between modalities. Supplementing partial speech with incomplete visual cues can improve sentence intelligibility and compensate for degraded speech in adverse listening conditions. However, the benefit varies across individuals and depends on their unimodal performance.

Original language: English (US)
Pages (from-to): EL189-EL195
Journal: Journal of the Acoustical Society of America
Volume: 147
Issue number: 2
DOIs
State: Published - Feb 1, 2020
Externally published: Yes

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Acoustics and Ultrasonics
