TY - GEN
T1 - FixEval
T2 - 3rd IEEE/ACM International Workshop on Automated Program Repair, APR 2023
AU - Anjum Haque, Md Mahim
AU - Ahmad, Wasi Uddin
AU - Lourentzou, Ismini
AU - Brown, Chris
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The complexity of modern software has led to a drastic increase in the time and cost associated with detecting and rectifying software bugs. In response, researchers have explored various methods to automatically generate fixes for buggy code. However, due to the large combinatorial space of possible fixes for any given bug, few tools and datasets are available to evaluate model-generated fixes effectively. To address this issue, we introduce FixEval, a benchmark comprising buggy code submissions to competitive programming problems and their corresponding fixes. FixEval offers an extensive collection of unit tests to evaluate the correctness of model-generated program fixes and to assess further information regarding time constraints, memory constraints, and acceptance based on a verdict. We consider two Transformer language models pretrained on programming languages as our baselines and compare them using match-based and execution-based evaluation metrics. Our experiments show that match-based metrics do not reflect model-generated program fixes accurately, whereas execution-based methods evaluate programs through all the cases and scenarios designed explicitly for that solution. Therefore, we believe FixEval provides a step towards real-world automatic bug fixing and model-generated code evaluation. The dataset and models are open-sourced at https://github.com/FixEval/FixEval_official
AB - The complexity of modern software has led to a drastic increase in the time and cost associated with detecting and rectifying software bugs. In response, researchers have explored various methods to automatically generate fixes for buggy code. However, due to the large combinatorial space of possible fixes for any given bug, few tools and datasets are available to evaluate model-generated fixes effectively. To address this issue, we introduce FixEval, a benchmark comprising buggy code submissions to competitive programming problems and their corresponding fixes. FixEval offers an extensive collection of unit tests to evaluate the correctness of model-generated program fixes and to assess further information regarding time constraints, memory constraints, and acceptance based on a verdict. We consider two Transformer language models pretrained on programming languages as our baselines and compare them using match-based and execution-based evaluation metrics. Our experiments show that match-based metrics do not reflect model-generated program fixes accurately, whereas execution-based methods evaluate programs through all the cases and scenarios designed explicitly for that solution. Therefore, we believe FixEval provides a step towards real-world automatic bug fixing and model-generated code evaluation. The dataset and models are open-sourced at https://github.com/FixEval/FixEval_official
UR - http://www.scopus.com/inward/record.url?scp=85168407420&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85168407420&partnerID=8YFLogxK
U2 - 10.1109/APR59189.2023.00009
DO - 10.1109/APR59189.2023.00009
M3 - Conference contribution
AN - SCOPUS:85168407420
T3 - Proceedings - 2023 IEEE/ACM International Workshop on Automated Program Repair, APR 2023
SP - 11
EP - 18
BT - Proceedings - 2023 IEEE/ACM International Workshop on Automated Program Repair, APR 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 16 May 2023
ER -