On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets

Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek Hakkani-Tur

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Natural language guided embodied task completion is a challenging problem since it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce desired changes. We experiment with augmenting a transformer model for this task with modules that effectively utilize a wider field of view and learn to choose whether the next step requires a navigation or manipulation action. We observed that the proposed modules resulted in improved, and in fact state-of-the-art, performance on the unseen validation set of a popular benchmark dataset, ALFRED. However, our best model selected using the unseen validation set underperforms on the unseen test split of ALFRED, indicating that performance on the unseen validation set may not in itself be a sufficient indicator of whether model improvements generalize to unseen test sets. We highlight this result because we believe it may be a wider phenomenon in machine learning tasks, one that is primarily noticeable in benchmarks that limit evaluations on test splits, and because it highlights the need to modify benchmark design to better account for variance in model performance.

Original language: English (US)
Title of host publication: Insights 2022 - 3rd Workshop on Insights from Negative Results in NLP, Proceedings of the Workshop
Editors: Shabnam Tafreshi, Joao Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, Arjun Reddy Akula
Publisher: Association for Computational Linguistics (ACL)
Pages: 113-118
Number of pages: 6
ISBN (Electronic): 9781955917407
State: Published - 2022
Externally published: Yes
Event: 3rd Workshop on Insights from Negative Results in NLP, Insights 2022 - Dublin, Ireland
Duration: May 26 2022 → …

Publication series

Name: Insights 2022 - 3rd Workshop on Insights from Negative Results in NLP, Proceedings of the Workshop

Conference

Conference: 3rd Workshop on Insights from Negative Results in NLP, Insights 2022
Country/Territory: Ireland
City: Dublin
Period: 5/26/22 → …

ASJC Scopus subject areas

  • Language and Linguistics
  • Computational Theory and Mathematics
  • Computer Science Applications
  • Software
