TY - JOUR
T1 - DRAGON
T2 - A Dialogue-Based Robot for Assistive Navigation with Visual Language Grounding
AU - Liu, Shuijing
AU - Hasan, Aamir
AU - Hong, Kaiwen
AU - Wang, Runxuan
AU - Chang, Peixin
AU - Mizrachi, Zachary
AU - Lin, Justin
AU - McPherson, D. Livingston
AU - Rogers, Wendy A.
AU - Driggs-Campbell, Katherine
N1 - This work was supported in part by a Research Support Award from the University of Illinois Urbana-Champaign Campus Research Board, the National Science Foundation under Grant 2143435, and in part by the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) under Grant 90REGE0021.
PY - 2024/4/1
Y1 - 2024/4/1
N2 - Persons with visual impairments (PwVI) have difficulties understanding and navigating spaces around them. Current wayfinding technologies either focus solely on navigation or provide limited communication about the environment. Motivated by recent advances in visual-language grounding and semantic navigation, we propose DRAGON, a guiding robot powered by a dialogue system and the ability to associate the environment with natural language. By understanding the commands from the user, DRAGON is able to guide the user to the desired landmarks on the map, describe the environment, and answer questions from visual observations. Through effective utilization of dialogue, the robot can ground the user's free-form language to the environment, and give the user semantic information through spoken language. We conduct a user study with blindfolded participants in an everyday indoor environment. Our results demonstrate that DRAGON is able to communicate with the user smoothly, provide a good guiding experience, and connect users with their surrounding environment in an intuitive manner.
AB - Persons with visual impairments (PwVI) have difficulties understanding and navigating spaces around them. Current wayfinding technologies either focus solely on navigation or provide limited communication about the environment. Motivated by recent advances in visual-language grounding and semantic navigation, we propose DRAGON, a guiding robot powered by a dialogue system and the ability to associate the environment with natural language. By understanding the commands from the user, DRAGON is able to guide the user to the desired landmarks on the map, describe the environment, and answer questions from visual observations. Through effective utilization of dialogue, the robot can ground the user's free-form language to the environment, and give the user semantic information through spoken language. We conduct a user study with blindfolded participants in an everyday indoor environment. Our results demonstrate that DRAGON is able to communicate with the user smoothly, provide a good guiding experience, and connect users with their surrounding environment in an intuitive manner.
KW - AI-enabled robotics
KW - Human-centered robotics
KW - natural dialog for HRI
UR - http://www.scopus.com/inward/record.url?scp=85184830414&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85184830414&partnerID=8YFLogxK
U2 - 10.1109/LRA.2024.3362591
DO - 10.1109/LRA.2024.3362591
M3 - Article
AN - SCOPUS:85184830414
SN - 2377-3766
VL - 9
SP - 3712
EP - 3719
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
ER -