TY - JOUR
T1 - Knowledge-Centered Dual-Process Reasoning for Math Word Problems With Large Language Models
AU - Liu, Jiayu
AU - Huang, Zhenya
AU - Liu, Qi
AU - Ma, Zhiyuan
AU - Zhai, Chengxiang
AU - Chen, Enhong
N1 - This research was supported in part by the National Key Research and Development Program of China under Grant 2021YFF0901005, in part by the Key Technologies R&D Program of Anhui Province under Grant 202423k09020039, in part by the National Natural Science Foundation of China under Grant 62477044 and Grant 62337001, and in part by the Fundamental Research Funds for the Central Universities under Grant WK2150110038. The work of Zhenya Huang was supported by the Young Elite Scientists Sponsorship Program by CAST under Grant 2024QNRC001. Recommended for acceptance by W. Ding.
PY - 2025
Y1 - 2025
AB - Math word problem (MWP) solving serves as a critical milestone for assessing the text mining ability and knowledge mastery of models. Recent advancements have witnessed large language models (LLMs) showcasing remarkable performance on MWPs. However, current LLMs still frequently exhibit logical errors, which highlights their inability to fully grasp the knowledge required for genuine step-by-step mathematical reasoning. To this end, in this paper, we propose a novel Knowledge-guided Solver (KNOS) framework that empowers LLMs to simulate human mathematical reasoning; its core idea is to Invoke-Verify-Inject the necessary knowledge to solve MWPs. We draw inspiration from dual-process theory to construct two cooperative systems: a Knowledge System and an Inference System. Specifically, the Knowledge System employs LLMs as the knowledge base and develops a novel knowledge invoker that elicits their relevant knowledge to support strict step-level mathematical reasoning. In the Inference System, we propose a knowledge verifier and a knowledge injector that, respectively, evaluate the rationality of the invoked knowledge and guide the step-wise symbolic deduction in an interpretable manner grounded in human cognitive mechanisms. Moreover, to tackle the potential scarcity of mathematics-specific knowledge in LLMs, we consider an open-book exam scenario and propose an improved version of KNOS called EKNOS. In EKNOS, we meticulously design knowledge selectors that extract the most relevant commonsense and math formulas from external knowledge sources for each reasoning step. This knowledge assists the knowledge invoker in better stimulating LLMs' reasoning abilities. Both KNOS and EKNOS can flexibly empower different LLMs. Our experiments with GPT-3, ChatGPT, and GPT-4 not only demonstrate improvements in reasoning accuracy but also show how the frameworks bring strict step-wise interpretability to mathematical reasoning.
KW - Knowledge reasoning
KW - large language model
KW - math word problem
UR - http://www.scopus.com/inward/record.url?scp=105002223734&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105002223734&partnerID=8YFLogxK
U2 - 10.1109/TKDE.2025.3556367
DO - 10.1109/TKDE.2025.3556367
M3 - Article
AN - SCOPUS:105002223734
SN - 1041-4347
VL - 37
SP - 3457
EP - 3471
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 6
ER -