Speech-To-Screen: Spatial separation of dialogue from noise towards improved speech intelligibility for the small screen

Philippa J. Demonte, Yan Tang, Richard J. Hughes, Trevor J. Cox, Bruno M. Fazenda, Ben G. Shirley

Research output: Contribution to conference › Paper › peer-review

Abstract

Can externalizing dialogue in the presence of stereo background noise improve speech intelligibility? This was investigated for audio presented over headphones with head-tracking, in order to explore potential future developments for small-screen devices. A quantitative listening experiment tasked participants with identifying target words in spoken sentences played over headphones in the presence of background noise. Sixteen combinations of four binary independent variables were tested: speech location (internalized/externalized), noise location (internalized/externalized), video (on/off), and masking noise (stationary/fluctuating). The results revealed that the greatest improvements in speech intelligibility came from the video-on condition and from externalizing the speech at the screen whilst retaining the masking noise in the stereo mix.
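
For clarity, the factorial structure implied above can be enumerated directly; the following Python sketch (factor and level names are illustrative labels, not identifiers from the study materials) shows how the 16 conditions follow from fully crossing the four binary variables:

    from itertools import product

    # Illustrative 2 x 2 x 2 x 2 factorial design: four binary factors
    # crossed fully, yielding the 16 listening conditions described above.
    factors = {
        "speech_location": ["internalized", "externalized"],
        "noise_location": ["internalized", "externalized"],
        "video": ["on", "off"],
        "masking_noise": ["stationary", "fluctuating"],
    }

    conditions = [dict(zip(factors, levels))
                  for levels in product(*factors.values())]
    assert len(conditions) == 16  # 2**4 combinations
    for i, cond in enumerate(conditions, start=1):
        print(i, cond)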

Original language: English (US)
State: Published - 2018
Externally published: Yes
Event: 144th Audio Engineering Society Convention 2018 - Milan, Italy
Duration: May 23 2018 – May 26 2018

Conference

Conference: 144th Audio Engineering Society Convention 2018
Country/Territory: Italy
City: Milan
Period: 5/23/18 – 5/26/18

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Modeling and Simulation
