Abstract

A systematic psychoacoustic "3-D method" was developed to explore the perceptual cues of stop consonants in naturally produced speech. The method combines time truncation, high-pass/low-pass filtering, and noise masking to measure the contribution of each subcomponent of a speech sound. In addition, the AI-gram, a visualization tool that simulates auditory peripheral processing, was used to predict the audible components of each sound. The results show that plosive consonants are defined by short-duration bursts, characterized by their center frequency and their delay relative to the onset of voicing. Pilot studies of hearing-impaired (HI) speech perception further illustrate that cochlear dead regions have a considerable impact on consonant identification: an HI listener may fail to understand speech when certain perceptual events are inaudible, either because of the hearing loss itself or because of masking introduced by the noise.
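The three manipulations named above (time truncation, high-pass/low-pass filtering, and noise masking) can be sketched as simple signal operations. The following Python snippet is a minimal illustration only; the sampling rate, truncation point, filter cutoff, and SNR are hypothetical placeholders, not the study's actual parameters, and a random signal stands in for a recorded consonant-vowel token.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000  # sampling rate in Hz (hypothetical)
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)  # stand-in for a 1 s recorded CV token

# 1. Time truncation: remove the first 20 ms (e.g., part of the burst).
truncated = speech[int(0.020 * fs):]

# 2. Low-pass filtering: keep only components below a 1 kHz cutoff.
b, a = butter(4, 1000 / (fs / 2), btype="low")
lowpassed = lfilter(b, a, speech)

# 3. Noise masking: add white noise scaled to a chosen SNR (here 0 dB).
snr_db = 0.0
noise = rng.standard_normal(len(speech))
noise *= np.sqrt(np.mean(speech**2) / np.mean(noise**2)) / 10 ** (snr_db / 20)
masked = speech + noise
```

By varying the truncation point, the filter cutoff, and the SNR independently, one can probe which time-frequency regions of the sound carry the perceptual cue, which is the idea behind the 3-D measurement.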

Original language: English (US)
Pages (from-to): 73-77
Number of pages: 5
Journal: IEEE Signal Processing Magazine
Volume: 26
Issue number: 4
DOIs
State: Published - Jan 1 2009

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics