TY - GEN
T1 - DetGPT: Detect What You Need via Reasoning
T2 - 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
AU - Pi, Renjie
AU - Gao, Jiahui
AU - Diao, Shizhe
AU - Pan, Rui
AU - Dong, Hanze
AU - Zhang, Jipeng
AU - Yao, Lewei
AU - Han, Jianhua
AU - Xu, Hang
AU - Kong, Lingpeng
AU - Zhang, Tong
N1 - Publisher Copyright:
©2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
N2 - Recently, vision-language models (VLMs) such as GPT-4, LLaVA, and MiniGPT-4 have achieved remarkable breakthroughs and excel at generating image descriptions and answering visual questions. However, they are difficult to apply to an embodied agent completing real-world tasks, such as grasping, since they cannot localize the object of interest. In this paper, we introduce a new task, termed reasoning-based object detection, which aims to localize the objects of interest in a visual scene based on any human instruction. Our proposed method, called DetGPT, leverages an instruction-tuned VLM to reason about and identify the objects of interest, followed by an open-vocabulary object detector that localizes them. DetGPT can automatically locate the object of interest based on the user's expressed desires, even if the object is not explicitly mentioned. This ability makes our system potentially applicable across a wide range of fields, from robotics to autonomous driving. To facilitate research on reasoning-based object detection, we curate and open-source a benchmark named RD-Bench for instruction tuning and evaluation. Overall, our proposed task and DetGPT demonstrate the potential for more sophisticated and intuitive interactions between humans and machines.
UR - https://www.scopus.com/pages/publications/85184817138
UR - https://www.scopus.com/pages/publications/85184817138#tab=citedBy
U2 - 10.18653/v1/2023.emnlp-main.876
DO - 10.18653/v1/2023.emnlp-main.876
M3 - Conference contribution
AN - SCOPUS:85184817138
T3 - EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings
SP - 14172
EP - 14189
BT - EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings
A2 - Bouamor, Houda
A2 - Pino, Juan
A2 - Bali, Kalika
PB - Association for Computational Linguistics (ACL)
Y2 - 6 December 2023 through 10 December 2023
ER -