Speech-To-Screen: Spatial separation of dialogue from noise towards improved speech intelligibility for the small screen

  • Philippa J. Demonte
  • Yan Tang
  • Richard J. Hughes
  • Trevor J. Cox
  • Bruno M. Fazenda
  • Ben G. Shirley
Research output: Contribution to journal › Conference article › peer-review

Abstract

Can externalizing dialogue in the presence of stereo background noise improve speech intelligibility? This was investigated for audio over headphones using head-tracking, in order to explore potential future developments for small-screen devices. A quantitative listening experiment tasked participants with identifying target words in spoken sentences played in background noise via headphones. Sixteen combinations of four independent variables were tested: speech location (internalized/externalized), noise location (internalized/externalized), video (on/off), and masking noise (stationary/fluctuating). The results revealed that the largest improvements in speech intelligibility came from the video-on condition and from externalizing the speech at the screen while retaining the masking noise in the stereo mix.
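The experiment's full factorial design can be enumerated with a short sketch; the factor names below are illustrative assumptions based on the abstract's wording, not identifiers from the paper:

```python
# A minimal sketch (assumed factor names) of enumerating the experiment's
# sixteen listening conditions as a full 2x2x2x2 factorial design.
from itertools import product

factors = {
    "speech_location": ["internalized", "externalized"],
    "noise_location": ["internalized", "externalized"],
    "video": ["on", "off"],
    "masking_noise": ["stationary", "fluctuating"],
}

# The Cartesian product of the factor levels yields every tested combination.
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(len(conditions))  # 16 conditions in total
```

Crossing the two spatial factors (speech and noise location) rather than yoking them is what brings the count to sixteen rather than eight, and it is what allows the reported best case: speech externalized at the screen while the noise stays internalized in the stereo mix.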

Original language: English (US)
Article number: 10011
Number of pages: 9
Journal: Proceedings of Audio Engineering Society Convention 144
State: Published - 2018
Externally published: Yes
Event: 144th Audio Engineering Society Convention 2018 - Milan, Italy
Duration: May 23 2018 - May 26 2018

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Modeling and Simulation
