TY - JOUR
T1 - Synthesis of New Words for Improved Dysarthric Speech Recognition on an Expanded Vocabulary
AU - Harvill, John
AU - Issa, Dias
AU - Hasegawa-Johnson, Mark
AU - Yoo, Changdong
N1 - Funding Information:
This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01396, Development of framework for analyzing, detecting, mitigating of bias in AI model and training data).
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Dysarthria is a condition where people experience a reduction in speech intelligibility due to a neuromotor disorder. Previous works in dysarthric speech recognition have focused on accurate recognition of words encountered in training data. Due to the rarity of dysarthria in the general population, a relatively small amount of publicly-available training data exists for dysarthric speech. The number of unique words in these datasets is small, so ASR systems trained with existing dysarthric speech data are limited to recognition of those words. In this paper, we propose a data augmentation method using voice conversion that allows dysarthric ASR systems to accurately recognize words outside of the training set vocabulary. We demonstrate that a small amount of dysarthric speech data can be used to capture the relevant vocal characteristics of a speaker with dysarthria through a parallel voice conversion system. We show that it's possible to synthesize utterances of new words that were never recorded by speakers with dysarthria, and that these synthesized utterances can be used to train a dysarthric ASR system.
AB - Dysarthria is a condition where people experience a reduction in speech intelligibility due to a neuromotor disorder. Previous works in dysarthric speech recognition have focused on accurate recognition of words encountered in training data. Due to the rarity of dysarthria in the general population, a relatively small amount of publicly-available training data exists for dysarthric speech. The number of unique words in these datasets is small, so ASR systems trained with existing dysarthric speech data are limited to recognition of those words. In this paper, we propose a data augmentation method using voice conversion that allows dysarthric ASR systems to accurately recognize words outside of the training set vocabulary. We demonstrate that a small amount of dysarthric speech data can be used to capture the relevant vocal characteristics of a speaker with dysarthria through a parallel voice conversion system. We show that it's possible to synthesize utterances of new words that were never recorded by speakers with dysarthria, and that these synthesized utterances can be used to train a dysarthric ASR system.
KW - ASR
KW - CTC
KW - Data augmentation
KW - Dysarthric speech
KW - Voice conversion
UR - http://www.scopus.com/inward/record.url?scp=85109986543&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85109986543&partnerID=8YFLogxK
U2 - 10.1109/ICASSP39728.2021.9414869
DO - 10.1109/ICASSP39728.2021.9414869
M3 - Conference article
AN - SCOPUS:85109986543
SN - 1520-6149
VL - 2021-June
SP - 6428
EP - 6432
JO - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
JF - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
T2 - 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021
Y2 - 6 June 2021 through 11 June 2021
ER -