A Data-Efficient Visual-Audio Representation with Intuitive Fine-tuning for Voice-Controlled Robots

Peixin Chang, Shuijing Liu, Tianchen Ji, Neeloy Chakraborty, Kaiwen Hong, Katherine Driggs-Campbell

Research output: Contribution to journal › Conference article › peer-review

Abstract

A command-following robot that serves people in everyday life must continually improve itself in its deployment domains with minimal help from its end users rather than from engineers. Previous methods are either difficult to improve continuously after deployment or require a large number of new labels during fine-tuning. Motivated by (self-)supervised contrastive learning, we propose a novel representation that generates an intrinsic reward function for command-following robot tasks by associating images with sound commands. After the robot is deployed in a new domain, the representation can be updated intuitively and data-efficiently by non-experts, without any hand-crafted reward functions. We demonstrate our approach on various sound types and robotic tasks, including navigation and manipulation with raw sensor inputs. In simulated and real-world experiments, we show that our system can continually self-improve in previously unseen scenarios with fewer new labels, while still outperforming previous methods.
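The abstract describes associating image observations with sound commands in a shared embedding space, trained contrastively, and then using the image-sound similarity as an intrinsic reward. The paper's actual architecture and loss are not given here, so the following is only a minimal NumPy sketch of that general idea: an InfoNCE-style contrastive objective over paired (image, sound) embeddings, and a cosine-similarity reward. The function names (`intrinsic_reward`, `contrastive_loss`) and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def intrinsic_reward(image_emb, sound_emb):
    """Illustrative reward in [0, 1]: high when the current visual
    observation matches the sound command's embedding (assumed design)."""
    cos = np.dot(image_emb, sound_emb) / (
        np.linalg.norm(image_emb) * np.linalg.norm(sound_emb))
    return 0.5 * (float(cos) + 1.0)

def contrastive_loss(image_embs, sound_embs, temperature=0.1):
    """InfoNCE-style loss over a batch of paired embeddings:
    matched (image, sound) pairs sit on the diagonal of the
    similarity matrix and are pulled together; mismatched pairs
    are pushed apart."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    snd = sound_embs / np.linalg.norm(sound_embs, axis=1, keepdims=True)
    logits = img @ snd.T / temperature                # pairwise similarities
    # log-softmax over each row, then take the diagonal (matched pair)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(img))
    return float(-log_probs[idx, idx].mean())
```

Under this sketch, fine-tuning in a new domain would amount to collecting a small set of new (image, sound) pairs from end users and continuing to minimize `contrastive_loss`, with no reward engineering required.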

Original language: English (US)
Journal: Proceedings of Machine Learning Research
Volume: 229
State: Published - 2023
Event: 7th Conference on Robot Learning, CoRL 2023 - Atlanta, United States
Duration: Nov 6, 2023 to Nov 9, 2023

Keywords

  • Command Following
  • Human-in-the-Loop
  • Multimodal Representation
  • Reinforcement Learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
