TY - GEN
T1 - Accelerating Accurate Assignment Authoring Using Solution-Generated Autograders
AU - Challen, Geoffrey
AU - Nordick, Ben
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/2/18
Y1 - 2025/2/18
N2 - Students learning to program benefit from access to large numbers of practice problems. Autograders are commonly used to support programming questions by providing quick feedback on submissions. But authoring accurate autograders remains challenging. Autograders are frequently created by enumerating test cases, a tedious process that can produce inaccurate autograders that fail to correctly classify submissions. When authoring accurate autograders is slow, it is difficult to create large banks of practice problems to support beginning programmers. We present solution-generated autograding: a faster, more accurate, and more enjoyable way to create autograders. Our approach leverages a key difference between software testing and autograding: The question author can provide a solution. By starting with a solution, we can eliminate the need to manually enumerate test cases, validate the autograder's accuracy, and evaluate other aspects of submission code quality beyond behavioral correctness. We describe Questioner, an implementation of solution-generated autograding for Java and Kotlin, and share experiences from four years using Questioner to support a large CS1 course: authoring nearly 800 programming questions used by thousands of students to evaluate millions of submissions.
AB - Students learning to program benefit from access to large numbers of practice problems. Autograders are commonly used to support programming questions by providing quick feedback on submissions. But authoring accurate autograders remains challenging. Autograders are frequently created by enumerating test cases, a tedious process that can produce inaccurate autograders that fail to correctly classify submissions. When authoring accurate autograders is slow, it is difficult to create large banks of practice problems to support beginning programmers. We present solution-generated autograding: a faster, more accurate, and more enjoyable way to create autograders. Our approach leverages a key difference between software testing and autograding: The question author can provide a solution. By starting with a solution, we can eliminate the need to manually enumerate test cases, validate the autograder's accuracy, and evaluate other aspects of submission code quality beyond behavioral correctness. We describe Questioner, an implementation of solution-generated autograding for Java and Kotlin, and share experiences from four years using Questioner to support a large CS1 course: authoring nearly 800 programming questions used by thousands of students to evaluate millions of submissions.
KW - Autograding
KW - Code Quality Evaluation
KW - Problem Authoring
UR - http://www.scopus.com/inward/record.url?scp=86000203542&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=86000203542&partnerID=8YFLogxK
U2 - 10.1145/3641554.3701862
DO - 10.1145/3641554.3701862
M3 - Conference contribution
AN - SCOPUS:86000203542
T3 - SIGCSE TS 2025 - Proceedings of the 56th ACM Technical Symposium on Computer Science Education
SP - 227
EP - 233
BT - SIGCSE TS 2025 - Proceedings of the 56th ACM Technical Symposium on Computer Science Education
PB - Association for Computing Machinery
T2 - 56th Annual SIGCSE Technical Symposium on Computer Science Education, SIGCSE TS 2025
Y2 - 26 February 2025 through 1 March 2025
ER -