TY - GEN
T1 - Generate-on-Graph
T2 - 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
AU - Xu, Yao
AU - He, Shizhu
AU - Chen, Jiabei
AU - Wang, Zihao
AU - Song, Yangqiu
AU - Tong, Hanghang
AU - Liu, Guang
AU - Zhao, Jun
AU - Liu, Kang
N1 - This work was supported by the Beijing Natural Science Foundation (L243006), the National Natural Science Foundation of China (No. 62376270), and the Youth Innovation Promotion Association CAS.
PY - 2024
Y1 - 2024
AB - To address the issues of insufficient knowledge and hallucination in Large Language Models (LLMs), numerous studies have explored integrating LLMs with Knowledge Graphs (KGs). However, these methods are typically evaluated on conventional Knowledge Graph Question Answering (KGQA) with complete KGs, where all factual triples required for each question are entirely covered by the given KG. In such cases, LLMs primarily act as an agent to find answer entities within the KG, rather than effectively integrating the internal knowledge of LLMs with external knowledge sources such as KGs. In fact, KGs are often too incomplete to cover all the knowledge required to answer questions. To simulate these real-world scenarios and evaluate the ability of LLMs to integrate internal and external knowledge, we propose leveraging LLMs for QA under Incomplete Knowledge Graph (IKGQA), where the provided KG lacks some of the factual triples for each question, and construct corresponding datasets. To handle IKGQA, we propose a training-free method called Generate-on-Graph (GoG), which can generate new factual triples while exploring KGs. Specifically, GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both Agent and KG in IKGQA. Experimental results on two datasets demonstrate that our GoG outperforms all previous methods.
UR - http://www.scopus.com/inward/record.url?scp=85213825199&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85213825199&partnerID=8YFLogxK
U2 - 10.18653/v1/2024.emnlp-main.1023
DO - 10.18653/v1/2024.emnlp-main.1023
M3 - Conference contribution
AN - SCOPUS:85213825199
T3 - EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
SP - 18410
EP - 18430
BT - EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
A2 - Al-Onaizan, Yaser
A2 - Bansal, Mohit
A2 - Chen, Yun-Nung
PB - Association for Computational Linguistics (ACL)
Y2 - 12 November 2024 through 16 November 2024
ER -