Abstract
Can externalizing dialogue improve speech intelligibility in the presence of stereo background noise? This was investigated for audio presented over headphones with head-tracking, to explore potential future developments for small-screen devices. A quantitative listening experiment tasked participants with identifying target words in spoken sentences played over headphones in background noise. Sixteen combinations of four independent variables were tested: speech location (internalized/externalized), noise location (internalized/externalized), video (on/off), and masking noise (stationary/fluctuating). The results showed that the largest improvements in speech intelligibility came from the video-on condition and from externalizing speech at the screen while retaining the masking noise in the stereo mix.
| Original language | English (US) |
|---|---|
| Article number | 10011 |
| Number of pages | 9 |
| Journal | Proceedings of Audio Engineering Society Convention 144 |
| State | Published - 2018 |
| Externally published | Yes |
| Event | 144th Audio Engineering Society Convention 2018 - Milan, Italy |
| Duration | May 23 2018 → May 26 2018 |
ASJC Scopus subject areas
- Acoustics and Ultrasonics
- Modeling and Simulation
Title: Speech-To-Screen: Spatial separation of dialogue from noise towards improved speech intelligibility for the small screen