TY - JOUR
T1 - Prosody-based automatic segmentation of speech into sentences and topics
AU - Shriberg, Elizabeth
AU - Stolcke, Andreas
AU - Hakkani-Tür, Dilek
AU - Tür, Gökhan
N1 - We thank Kemal Sönmez for providing the model for F0 stylization used in this work; Rebecca Bates, Mari Ostendorf, Ze'ev Rivlin, Ananth Sankar and Kemal Sönmez for invaluable assistance in data preparation and discussions; Madelaine Plauché for hand-checking of F0 stylization output and regions of non-modal voicing; and Klaus Ries, Paul Taylor and an anonymous reviewer for helpful comments on earlier drafts. This research was supported by DARPA under contract no. N66001-97-C-8544 and by NSF under STIMULATE grant IRI-9619921. The views herein are those of the authors and should not be interpreted as representing the policies of the funding agencies.
PY - 2000/9
Y1 - 2000/9
N2 - A crucial step in processing speech audio data for information extraction, topic detection, or browsing/playback is to segment the input into sentence and topic units. Speech segmentation is challenging, since the cues typically present for segmenting text (headers, paragraphs, punctuation) are absent in spoken language. We investigate the use of prosody (information gleaned from the timing and melody of speech) for these tasks. Using decision tree and hidden Markov modeling techniques, we combine prosodic cues with word-based approaches, and evaluate performance on two speech corpora, Broadcast News and Switchboard. Results show that the prosodic model alone performs on par with, or better than, word-based statistical language models for both true and automatically recognized words in news speech. The prosodic model achieves comparable performance with significantly less training data, and requires no hand-labeling of prosodic events. Across tasks and corpora, we obtain a significant improvement over word-only models using a probabilistic combination of prosodic and lexical information. Inspection reveals that the prosodic models capture language-independent boundary indicators described in the literature. Finally, cue usage is task- and corpus-dependent. For example, pause and pitch features are highly informative for segmenting news speech, whereas pause, duration, and word-based cues dominate for natural conversation.
UR - http://www.scopus.com/inward/record.url?scp=0034275920&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0034275920&partnerID=8YFLogxK
U2 - 10.1016/S0167-6393(00)00028-5
DO - 10.1016/S0167-6393(00)00028-5
M3 - Article
AN - SCOPUS:0034275920
SN - 0167-6393
VL - 32
SP - 127
EP - 154
JO - Speech Communication
JF - Speech Communication
IS - 1
ER -