TY - GEN
T1 - Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models
AU - Chen, Yangyi
AU - Sikka, Karan
AU - Cogswell, Michael
AU - Ji, Heng
AU - Divakaran, Ajay
N1 - We thank the reviewers for their suggestions and comments. This research is based upon work supported by U.S. DARPA ECOLE Program No. HR00112390060 and U.S. DARPA KAIROS Program No. FA8750-19-2-1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
PY - 2024
Y1 - 2024
AB - Vision-language models (VLMs) can effectively act as visual assistants, interpreting questions about images and producing human-like responses. This work explores their abilities to demonstrate human-like reasoning. To address concerns about the consistency of VLMs’ reasoning, we introduce a chain-of-thought (CoT) consistency measure. We tackle the challenge of extensive human annotations by proposing an LLM-Human-in-the-Loop pipeline. Based on this pipeline, we build the CURE benchmark to measure both the zero-shot reasoning performance and consistency of VLMs. We evaluate state-of-the-art VLMs and find that even the best-performing model is unable to demonstrate strong visual reasoning capabilities and consistency, indicating that substantial efforts are required to enable VLMs to perform visual reasoning as systematically and consistently as humans. As an early step, we propose a two-stage training framework aimed at improving both the reasoning performance and consistency of VLMs without human annotations. The framework consists of two primary stages: supervised fine-tuning and learning from feedback, to guide VLMs in generating reasoning chains that exhibit both consistency and groundedness. Our framework exhibits a 4% relative improvement in reasoning performance and consistency. We release the dataset at https://github.com/Yangyi-Chen/CoTConsistency.
UR - http://www.scopus.com/inward/record.url?scp=85199118976&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85199118976&partnerID=8YFLogxK
U2 - 10.18653/v1/2024.naacl-long.11
DO - 10.18653/v1/2024.naacl-long.11
M3 - Conference contribution
AN - SCOPUS:85199118976
T3 - Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024
SP - 192
EP - 210
BT - Long Papers
A2 - Duh, Kevin
A2 - Gomez, Helena
A2 - Bethard, Steven
PB - Association for Computational Linguistics (ACL)
T2 - 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024
Y2 - 16 June 2024 through 21 June 2024
ER -