TY - JOUR
T1 - VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models
T2 - 7th Conference on Robot Learning, CoRL 2023
AU - Huang, Wenlong
AU - Wang, Chen
AU - Zhang, Ruohan
AU - Li, Yunzhu
AU - Wu, Jiajun
AU - Fei-Fei, Li
N1 - We would like to thank Andy Zeng, Igor Mordatch, and the members of the Stanford Vision and Learning Lab for the fruitful discussions. This work was in part supported by AFOSR YIP FA9550-23-1-0127, ONR MURI N00014-22-1-2740, ONR MURI N00014-21-1-2801, ONR N00014-23-1-2355, the Stanford Institute for Human-Centered AI (HAI), JPMC, and Analog Devices. Wenlong Huang is partially supported by Stanford School of Engineering Fellowship. Ruohan Zhang is partially supported by Wu Tsai Human Performance Alliance Fellowship.
PY - 2023
Y1 - 2023
AB - Large language models (LLMs) are shown to possess a wealth of actionable knowledge that can be extracted for robot manipulation in the form of reasoning and planning. Despite the progress, most still rely on pre-defined motion primitives to carry out the physical interactions with the environment, which remains a major bottleneck. In this work, we aim to synthesize robot trajectories, i.e., a dense sequence of 6-DoF end-effector waypoints, for a large variety of manipulation tasks given an open-set of instructions and an open-set of objects. We achieve this by first observing that LLMs excel at inferring affordances and constraints given a free-form language instruction. More importantly, by leveraging their code-writing capabilities, they can interact with a vision-language model (VLM) to compose 3D value maps to ground the knowledge into the observation space of the agent. The composed value maps are then used in a model-based planning framework to zero-shot synthesize closed-loop robot trajectories with robustness to dynamic perturbations. We further demonstrate how the proposed framework can benefit from online experiences by efficiently learning a dynamics model for scenes that involve contact-rich interactions. We present a large-scale study of the proposed method in both simulated and real-robot environments, showcasing the ability to perform a large variety of everyday manipulation tasks specified in free-form natural language. Project website: voxposer.github.io.
KW - Large Language Models
KW - Manipulation
KW - Model-based Planning
UR - http://www.scopus.com/inward/record.url?scp=85184346233&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85184346233&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85184346233
SN - 2640-3498
VL - 229
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 6 November 2023 through 9 November 2023
ER -