Dialog Acts for Task-Driven Embodied Agents

Spandana Gella, Aishwarya Padmakumar, Patrick Lange, Dilek Hakkani-Tur

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

Abstract

Embodied agents need to be able to interact in natural language: understanding task descriptions and asking appropriate follow-up questions to obtain the information necessary to successfully accomplish tasks for a wide range of users. In this work, we propose a set of dialog acts for modeling such dialogs and annotate the TEACh dataset, which includes over 3,000 situated, task-oriented conversations (comprising 39.5k utterances in total), with these dialog acts. TEACh-DA is one of the first large-scale datasets of dialog act annotations for embodied task completion. Furthermore, we demonstrate the use of this annotated dataset in training models for tagging the dialog act of a given utterance, predicting the dialog act of the next response given a dialog history, and using dialog acts to guide an agent's non-dialog behavior. In particular, our experiments on the TEACh Execution from Dialog History task, where the model predicts the sequence of low-level actions to be executed in the environment for embodied task completion, demonstrate that dialog acts can improve end-task success rate by up to 2 points over a system without dialog acts.
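To make the tagging task concrete, the sketch below shows what a dialog-act tagger's input and output look like. It is purely illustrative: the act labels and keyword rules are hypothetical stand-ins, not the TEACh-DA annotation schema or the paper's trained models, which use learned classifiers rather than rules.

```python
# Illustrative only: a toy keyword-based dialog-act tagger.
# The labels ("Question", "Instruction", "Acknowledge", "Other") and the
# cue lists are hypothetical, not the actual TEACh-DA schema.

RULES = [
    # (act label, cue substrings); checked in order, first match wins
    ("Question", ("?", "where", "what", "which")),
    ("Instruction", ("make", "put", "place", "slice", "clean")),
    ("Acknowledge", ("okay", "done", "thanks")),
]


def tag_dialog_act(utterance: str) -> str:
    """Assign a coarse dialog act to one utterance via naive substring matching."""
    text = utterance.lower()
    for act, cues in RULES:
        if any(cue in text for cue in cues):
            return act
    return "Other"


def tag_dialog(utterances: list[str]) -> list[tuple[str, str]]:
    """Tag every utterance in a dialog history, preserving order."""
    return [(u, tag_dialog_act(u)) for u in utterances]
```

A learned tagger would replace the rule table with a classifier over utterance (and dialog-history) encodings, but the interface, mapping each utterance to one act label, is the same.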

Original language: English (US)
Title of host publication: SIGDIAL 2022 - 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 111-123
Number of pages: 13
ISBN (Electronic): 9781955917667
DOIs
State: Published - 2022
Externally published: Yes
Event: 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL 2022 - Edinburgh, United Kingdom
Duration: Sep 7, 2022 - Sep 9, 2022

Publication series

Name: SIGDIAL 2022 - 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference

Conference

Conference: 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL 2022
Country/Territory: United Kingdom
City: Edinburgh
Period: 9/7/22 - 9/9/22

ASJC Scopus subject areas

  • Modeling and Simulation
  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
