Using context to improve emotion detection in spoken dialog systems

Jackson Liscombe, Giuseppe Riccardi, Dilek Hakkani-Tür

Research output: Contribution to conference › Paper › peer-review

Abstract

Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual information that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the "How May I Help You℠" spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.
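To make the idea of context-augmented features concrete, the sketch below shows one way per-turn lexical/prosodic features could be combined with dialog-context features (turn position and the predicted emotion of preceding turns) before training a turn-level classifier. This is an illustrative assumption, not the paper's implementation: the feature names (mean_pitch, energy, word_count, predicted_emotion) and the scikit-learn logistic regression classifier are hypothetical stand-ins for the authors' actual feature set and learner.

```python
# Illustrative sketch (not the paper's implementation): augmenting per-turn
# lexical/prosodic features with dialog-context features before classification.
# Feature names and the LogisticRegression classifier are assumptions chosen
# for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def turn_features(turn, prev_turns):
    """Build one feature vector for a user turn.

    `turn` and entries of `prev_turns` are dicts with hypothetical keys:
    'mean_pitch', 'energy', 'word_count' (lexical/prosodic) and
    'predicted_emotion' (0 = non-negative, 1 = negative) for prior turns.
    """
    base = [turn['mean_pitch'], turn['energy'], turn['word_count']]
    # Contextual features: position in the dialog and the (predicted)
    # emotional state of the two preceding user turns, if any.
    context = [
        len(prev_turns),                                   # turn index
        prev_turns[-1]['predicted_emotion'] if prev_turns else 0,
        prev_turns[-2]['predicted_emotion'] if len(prev_turns) > 1 else 0,
    ]
    return np.array(base + context, dtype=float)

# Toy training data: two short dialogs with per-turn emotion labels.
dialogs = [
    [{'mean_pitch': 180.0, 'energy': 0.40, 'word_count': 6, 'predicted_emotion': 0},
     {'mean_pitch': 230.0, 'energy': 0.80, 'word_count': 3, 'predicted_emotion': 1}],
    [{'mean_pitch': 170.0, 'energy': 0.30, 'word_count': 8, 'predicted_emotion': 0},
     {'mean_pitch': 175.0, 'energy': 0.35, 'word_count': 7, 'predicted_emotion': 0}],
]
X, y = [], []
for dialog in dialogs:
    for i, turn in enumerate(dialog):
        X.append(turn_features(turn, dialog[:i]))
        y.append(turn['predicted_emotion'])

clf = LogisticRegression().fit(np.array(X), np.array(y))
print(clf.predict(np.array(X)))  # per-turn emotion predictions
```

In practice the contextual features would come from the running dialog state (e.g., earlier classifier decisions or dialog-act history) rather than from gold labels, which is the kind of user-state tracking the abstract credits with the 2.6% accuracy gain.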

Original language: English (US)
Pages: 1845-1848
Number of pages: 4
State: Published - 2005
Externally published: Yes
Event: 9th European Conference on Speech Communication and Technology - Lisbon, Portugal
Duration: Sep 4, 2005 - Sep 8, 2005

Other

Other: 9th European Conference on Speech Communication and Technology
Country/Territory: Portugal
City: Lisbon
Period: 9/4/05 - 9/8/05

ASJC Scopus subject areas

  • General Engineering

