TY - JOUR
T1 - Cognitive state classification in a spoken tutorial dialogue system
AU - Zhang, Tong
AU - Hasegawa-Johnson, Mark
AU - Levinson, Stephen E.
N1 - Funding Information:
We would like to thank Brian Pianfetti for providing us with the ITS Wizard-of-Oz audio data, and Thomas S. Huang for his generous assistance throughout this work. This work is supported by NSF grant number 0085980. Statements in this paper reflect the opinions and conclusions of the authors, and are not endorsed by the NSF.
PY - 2006/6
Y1 - 2006/6
N2 - This paper addresses the manual and automatic labelling, from spontaneous speech, of a particular type of user affect that we call the cognitive state, in a tutorial dialogue system for students of primary and early middle school age. Our definition of cognitive state is based on analysis of children's spontaneous speech acquired during Wizard-of-Oz simulations of an intelligent math and physics tutor. The children's cognitive states are categorized into three classes: confidence, puzzlement, and hesitation. Manual labelling of cognitive states achieved an inter-transcriber agreement of 0.93 (kappa score). Automatic cognitive state labels are generated by classifying prosodic features, text features, and spectral features. Text features are generated from an automatic speech recognition (ASR) system and include indicator functions of keyword classes and part-of-speech sequences. Spectral features are based on acoustic likelihood scores of a cognitive state-dependent ASR system, in which phoneme models are adapted to utterances labelled for a particular cognitive state. The effectiveness of the proposed method has been tested on both manually and automatically transcribed speech, yielding very high correctness: 96.6% for manually transcribed speech and 95.7% for automatically recognized speech. Our study shows that the proposed spectral features greatly outperformed the other feature types in the cognitive state classification experiments, and that the spectral and prosodic features derived directly from the speech signal were far more robust to speech recognition errors than the lexical and part-of-speech-based features.
AB - This paper addresses the manual and automatic labelling, from spontaneous speech, of a particular type of user affect that we call the cognitive state, in a tutorial dialogue system for students of primary and early middle school age. Our definition of cognitive state is based on analysis of children's spontaneous speech acquired during Wizard-of-Oz simulations of an intelligent math and physics tutor. The children's cognitive states are categorized into three classes: confidence, puzzlement, and hesitation. Manual labelling of cognitive states achieved an inter-transcriber agreement of 0.93 (kappa score). Automatic cognitive state labels are generated by classifying prosodic features, text features, and spectral features. Text features are generated from an automatic speech recognition (ASR) system and include indicator functions of keyword classes and part-of-speech sequences. Spectral features are based on acoustic likelihood scores of a cognitive state-dependent ASR system, in which phoneme models are adapted to utterances labelled for a particular cognitive state. The effectiveness of the proposed method has been tested on both manually and automatically transcribed speech, yielding very high correctness: 96.6% for manually transcribed speech and 95.7% for automatically recognized speech. Our study shows that the proposed spectral features greatly outperformed the other feature types in the cognitive state classification experiments, and that the spectral and prosodic features derived directly from the speech signal were far more robust to speech recognition errors than the lexical and part-of-speech-based features.
KW - Intelligent tutoring system
KW - Spoken language processing
KW - User affect recognition
UR - http://www.scopus.com/inward/record.url?scp=33646257071&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33646257071&partnerID=8YFLogxK
U2 - 10.1016/j.specom.2005.09.006
DO - 10.1016/j.specom.2005.09.006
M3 - Article
AN - SCOPUS:33646257071
SN - 0167-6393
VL - 48
SP - 616
EP - 632
JO - Speech Communication
JF - Speech Communication
IS - 6
ER -