TY - GEN
T1 - G2PU
T2 - 49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024
AU - Gao, Heting
AU - Hasegawa-Johnson, Mark
AU - Yoo, Chang D.
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Most phoneme transcripts are generated using forced alignment: typically a grapheme-to-phoneme transducer (G2P) is applied to text sequences to generate candidate phoneme transcripts, which are then time-aligned to the waveform using an acoustic model. This paper demonstrates, for the first time, simultaneous optimization of the G2P, the acoustic model, and the acoustic alignment to a corpus. To this end, we propose G2PU, a joint CTC-attention model consisting of an encoder-decoder G2P network and an encoder-CTC unit-to-phoneme (U2P) network, where the units are extracted from speech. We demonstrate that the G2P and U2P, operating in parallel, produce lower phone error rates than those of state-of-the-art open-source G2P and forced alignment systems. Furthermore, although the G2P and U2P are trained using parallel speech and text, their synergy can be generalized to text-only test corpora if we also train a grapheme-to-unit (G2U) network that generates speech units from text in the absence of parallel speech. Our G2PU model is trained using phoneme transcripts generated by a teacher G2P tool. Our experiments on Chinese and Japanese show that G2PU reduces phoneme error rate by 7% to 29% relative compared to its teacher. Finally, we include case studies to provide insights into the system's workings.
KW - g2p
KW - grapheme-to-phoneme transducer
KW - speech recognition
UR - http://www.scopus.com/inward/record.url?scp=85195428964&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85195428964&partnerID=8YFLogxK
U2 - 10.1109/ICASSP48485.2024.10448105
DO - 10.1109/ICASSP48485.2024.10448105
M3 - Conference contribution
AN - SCOPUS:85195428964
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 10061
EP - 10065
BT - 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 14 April 2024 through 19 April 2024
ER -