TY - GEN
T1 - Automatic estimation of intelligibility measure for consonants in speech
AU - Abavisani, Ali
AU - Hasegawa-Johnson, Mark
N1 - Publisher Copyright:
Copyright © 2020 ISCA
PY - 2020
Y1 - 2020
N2 - In this article, we provide a model to estimate a real-valued measure of the intelligibility of individual speech segments. We trained regression models based on Convolutional Neural Networks (CNN) for stop consonants /p,t,k,b,d,g/ associated with vowel /A/, to estimate the corresponding Signal-to-Noise Ratio (SNR) at which the Consonant-Vowel (CV) sound becomes intelligible for Normal Hearing (NH) ears. The intelligibility measure for each sound is called SNR90, and is defined to be the SNR level at which human participants are able to recognize the consonant at least 90% correctly, on average, as determined in prior experiments with NH subjects. Performance of the CNN is compared to a baseline prediction based on automatic speech recognition (ASR), specifically, a constant offset subtracted from the SNR at which the ASR becomes capable of correctly labeling the consonant. Compared to the baseline, our models were able to estimate the SNR90 intelligibility measure accurately, with less than 2 [dB²] Mean Squared Error (MSE) on average, whereas the baseline ASR-defined measure estimates SNR90 with a variance of 5.2 to 26.6 [dB²], depending on the consonant.
AB - In this article, we provide a model to estimate a real-valued measure of the intelligibility of individual speech segments. We trained regression models based on Convolutional Neural Networks (CNN) for stop consonants /p,t,k,b,d,g/ associated with vowel /A/, to estimate the corresponding Signal-to-Noise Ratio (SNR) at which the Consonant-Vowel (CV) sound becomes intelligible for Normal Hearing (NH) ears. The intelligibility measure for each sound is called SNR90, and is defined to be the SNR level at which human participants are able to recognize the consonant at least 90% correctly, on average, as determined in prior experiments with NH subjects. Performance of the CNN is compared to a baseline prediction based on automatic speech recognition (ASR), specifically, a constant offset subtracted from the SNR at which the ASR becomes capable of correctly labeling the consonant. Compared to the baseline, our models were able to estimate the SNR90 intelligibility measure accurately, with less than 2 [dB²] Mean Squared Error (MSE) on average, whereas the baseline ASR-defined measure estimates SNR90 with a variance of 5.2 to 26.6 [dB²], depending on the consonant.
KW - Human speech recognition
KW - Objective intelligibility measures
KW - Speech perception in noise
UR - http://www.scopus.com/inward/record.url?scp=85098187190&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098187190&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2020-2121
DO - 10.21437/Interspeech.2020-2121
M3 - Conference contribution
AN - SCOPUS:85098187190
SN - 9781713820697
T3 - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
SP - 1161
EP - 1165
BT - Interspeech 2020
PB - International Speech Communication Association
T2 - 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020
Y2 - 25 October 2020 through 29 October 2020
ER -