Abstract
Most research on the emotional state of spoken dialog system users does not fully exploit the context that dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns, using a corpus of 5,690 dialogs collected with the "How May I Help You℠" spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.
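The core idea of the abstract can be illustrated with a deliberately tiny sketch. Everything below is hypothetical: the toy prosody scores, the threshold rules, and the labels are invented for illustration and are not the paper's actual features, classifiers, or data. The point is only to show how a contextual feature (the previous turn's emotion label) can be appended to per-turn features and improve accuracy over a context-free classifier.

```python
# Hypothetical toy illustration (not the paper's actual features or data):
# classifying each user turn's emotional state from a prosody score alone,
# versus augmenting it with a contextual feature: the previous turn's label.

# (prosody_score, gold_label) per turn, in dialog order; 1 = negative emotion
turns = [(0.9, 1), (0.4, 1), (0.2, 0), (0.3, 0),
         (0.8, 1), (0.45, 1), (0.1, 0), (0.7, 1)]

def baseline(prosody):
    # context-free rule: high prosodic activation -> negative emotion
    return 1 if prosody > 0.5 else 0

def augmented(prosody, prev_label):
    # contextual rule: a lower prosodic threshold applies when the
    # previous turn was already negative (emotion tends to persist)
    return 1 if prosody > 0.5 or (prev_label == 1 and prosody > 0.35) else 0

def accuracy(classify, use_context):
    prev, correct = 0, 0
    for prosody, gold in turns:
        pred = classify(prosody, prev) if use_context else classify(prosody)
        correct += (pred == gold)
        prev = gold  # track user state across the dialog
    return correct / len(turns)

print(accuracy(baseline, False))   # 0.75
print(accuracy(augmented, True))   # 1.0
```

On this constructed data, the ambiguous mid-range prosody turns are misclassified without context but recovered once the previous turn's label is available, mirroring (in caricature) the accuracy gain the paper reports from dialog-structure features.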
| Original language | English (US) |
| --- | --- |
| Pages | 1845-1848 |
| Number of pages | 4 |
| State | Published - 2005 |
| Externally published | Yes |
| Event | 9th European Conference on Speech Communication and Technology - Lisbon, Portugal. Duration: Sep 4 2005 → Sep 8 2005 |
Other

| Other | 9th European Conference on Speech Communication and Technology |
| --- | --- |
| Country/Territory | Portugal |
| City | Lisbon |
| Period | 9/4/05 → 9/8/05 |
ASJC Scopus subject areas
- General Engineering