TY - GEN
T1 - DEVELOPING A MACHINE-LEARNING MODEL FOR DETECTING INTELLIGIBILITY DIFFERENCES IN INDIVIDUALS WITH VOICE DISORDERS
T2 - 10th Convention of the European Acoustics Association, EAA 2023
AU - Pietrowicz, Mary
AU - Orbelo, Diana
AU - Kamboj, Amrit
AU - Yarlagadda, Manoj Krishna
AU - Buller, Kevin
AU - Charney, Sara
AU - Leggett, Cadman
AU - Ishikawa, Keiko
N1 - Publisher Copyright:
© 2023 First author et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
PY - 2023
Y1 - 2023
N2 - Voice disorders can reduce an individual's ability to produce intelligible speech; however, intelligibility in dysphonia has received limited study. Current methods of intelligibility assessment are subjective and time-consuming, making reliable, efficient monitoring of patient progress difficult for clinicians. Machine-learning techniques may provide novel, automated assessment solutions. This study aims to discover machine-learning models that differentiate habitual speech (HS) from hyperarticulated or “clear speech” (CS). Two corpora with same-subject recordings of HS and CS were used. The combined corpus consisted of 115 speakers, 65 healthy and 50 with mild-to-moderate voice disorders, producing six sentences from the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V). Acoustic analyses revealed significant differences between HS and CS in speech rate and cepstral peak prominence (CPP) for female speakers. Various machine-learning modeling techniques are explored for their ability to differentiate HS and CS, and the results are reported.
AB - Voice disorders can reduce an individual's ability to produce intelligible speech; however, intelligibility in dysphonia has received limited study. Current methods of intelligibility assessment are subjective and time-consuming, making reliable, efficient monitoring of patient progress difficult for clinicians. Machine-learning techniques may provide novel, automated assessment solutions. This study aims to discover machine-learning models that differentiate habitual speech (HS) from hyperarticulated or “clear speech” (CS). Two corpora with same-subject recordings of HS and CS were used. The combined corpus consisted of 115 speakers, 65 healthy and 50 with mild-to-moderate voice disorders, producing six sentences from the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V). Acoustic analyses revealed significant differences between HS and CS in speech rate and cepstral peak prominence (CPP) for female speakers. Various machine-learning modeling techniques are explored for their ability to differentiate HS and CS, and the results are reported.
KW - AI
KW - clear speech
KW - intelligibility
KW - machine learning
KW - voice disorders
UR - http://www.scopus.com/inward/record.url?scp=85191254242&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85191254242&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85191254242
T3 - Proceedings of Forum Acusticum
BT - Forum Acusticum 2023 - 10th Convention of the European Acoustics Association, EAA 2023
PB - European Acoustics Association, EAA
Y2 - 11 September 2023 through 15 September 2023
ER -