Abstract
State-of-the-art speech recognition systems are trained using transcribed utterances, the preparation of which is labor-intensive and time-consuming. In this paper, we describe a new method for reducing the transcription effort for training in automatic speech recognition (ASR). Active learning aims to reduce the number of training examples to be labeled by automatically processing the unlabeled examples and then selecting, with respect to a given cost function, the most informative ones for a human to label. We automatically estimate a confidence score for each word of an utterance by exploiting the lattice output of a speech recognizer trained on a small set of transcribed data. We compute utterance confidence scores from these word confidence scores, then selectively sample the utterances to be transcribed using the utterance confidence scores. In our experiments, we show that this reduces the amount of labeled data needed for a given word accuracy by 27%.
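To make the selective-sampling step concrete, the following is a minimal illustrative sketch (not the paper's exact procedure) of confidence-based utterance selection. All function and variable names (`utterance_confidence`, `select_for_transcription`, `pool`, `budget`) are hypothetical, the mean is assumed as the aggregation of word confidences, and the lattice-based word confidence estimation itself is not reproduced here.

```python
# Illustrative sketch of confidence-based selective sampling for ASR active
# learning. Word confidence scores are assumed to come from the lattice output
# of a seed recognizer trained on a small transcribed set (not shown here).

from typing import Dict, List, Tuple


def utterance_confidence(word_confidences: List[float]) -> float:
    """Aggregate per-word confidence scores into one utterance score.

    A simple mean is one plausible aggregation; the paper computes the
    utterance score from word confidence scores (exact formula assumed).
    """
    if not word_confidences:
        return 0.0
    return sum(word_confidences) / len(word_confidences)


def select_for_transcription(
    unlabeled: Dict[str, List[float]], budget: int
) -> List[str]:
    """Pick the `budget` utterances the recognizer is least confident about.

    `unlabeled` maps an utterance id to its per-word confidence scores.
    """
    scored: List[Tuple[float, str]] = [
        (utterance_confidence(confs), utt_id)
        for utt_id, confs in unlabeled.items()
    ]
    scored.sort()  # lowest confidence first = most informative to label
    return [utt_id for _, utt_id in scored[:budget]]


if __name__ == "__main__":
    # Toy example: three unlabeled utterances with made-up word confidences.
    pool = {
        "utt-001": [0.92, 0.88, 0.95],
        "utt-002": [0.41, 0.63, 0.55, 0.70],
        "utt-003": [0.80, 0.75],
    }
    print(select_for_transcription(pool, budget=1))  # -> ['utt-002']
```

The selected utterances would then be transcribed by a human and added to the training set, after which the recognizer is retrained and the loop repeats.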
| Original language | English (US) |
|---|---|
| Pages (from-to) | IV/3904-IV/3907 |
| Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
| Volume | 4 |
| DOIs | |
| State | Published - 2002 |
| Externally published | Yes |
| Event | 2002 IEEE International Conference on Acoustic, Speech, and Signal Processing - Orlando, FL, United States; May 13 2002 → May 17 2002 |
ASJC Scopus subject areas
- Software
- Signal Processing
- Electrical and Electronic Engineering