TY - GEN
T1 - The Plastic Surgery Hypothesis in the Era of Large Language Models
AU - Xia, Chunqiu Steven
AU - Ding, Yifeng
AU - Zhang, Lingming
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
AB - Automated Program Repair (APR) aspires to automatically generate patches for an input buggy program. Traditional APR tools typically target specific bug types and fixes through templates, heuristics, and formal specifications, which limits the bug types and patch variety they can produce. As such, researchers have designed various learning-based APR tools, with recent work focusing on directly using Large Language Models (LLMs) for APR. While LLM-based APR tools achieve state-of-the-art performance on many repair datasets, the LLMs used for direct repair are not fully aware of project-specific information such as unique variable or method names. The plastic surgery hypothesis is a well-known insight for APR, stating that the code ingredients needed to fix a bug usually already exist within the same project. Traditional APR tools have largely leveraged this hypothesis by designing manual or heuristic-based approaches to exploit such existing code ingredients. However, as recent APR research has shifted toward LLM-based approaches, the plastic surgery hypothesis has been largely ignored. In this paper, we ask the following question: how useful is the plastic surgery hypothesis in the era of LLMs? Interestingly, LLM-based APR presents a unique opportunity to fully automate the plastic surgery hypothesis via fine-tuning (training on the buggy project) and prompting (directly providing valuable code ingredients as hints to the LLM). To this end, we propose FitRepair, which combines the direct use of LLMs with two domain-specific fine-tuning strategies and one prompting strategy (via information retrieval and static analysis) for more powerful APR. While traditional APR techniques require intensive manual effort both to generate patches based on the plastic surgery hypothesis and to guarantee patch validity, our approach is fully automated and general. Moreover, while it is very challenging to manually design heuristics or patterns that effectively leverage the hypothesis, the power of LLMs in code vectorization and understanding means that even partial or imprecise project-specific information can still guide LLMs toward correct patches. Our experiments on the widely studied Defects4J 1.2 and 2.0 datasets show that FitRepair fixes 89 and 44 bugs (substantially outperforming baseline techniques by 15 and 8 bugs), respectively, demonstrating a promising future for the plastic surgery hypothesis in the era of LLMs.
UR - http://www.scopus.com/inward/record.url?scp=85179011197&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85179011197&partnerID=8YFLogxK
DO - 10.1109/ASE56229.2023.00047
M3 - Conference contribution
AN - SCOPUS:85179011197
T3 - Proceedings - 2023 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023
SP - 522
EP - 534
BT - Proceedings - 2023 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023
Y2 - 11 September 2023 through 15 September 2023
ER -