TY - GEN
T1 - Using a computer-based testing facility to improve student learning in a programming languages and compilers course
AU - Nip, Terence
AU - Gunter, Elsa L.
AU - Herman, Geoffrey L.
AU - Morphew, Jason W.
AU - West, Matthew
N1 - Publisher Copyright:
© 2018 Copyright held by the owner/author(s).
PY - 2018/2/21
Y1 - 2018/2/21
AB - While most efforts to improve students' learning in computer science education have focused on designing new pedagogies or tools, comparatively little research has focused on redesigning examinations to improve students' learning. Cognitive science research, however, has robustly demonstrated that getting students to practice using their knowledge in testing environments can significantly improve learning through a phenomenon known as the testing effect. The testing effect has been shown to improve learning more than rehearsal strategies such as re-reading a textbook or re-watching lectures. In this paper, we present a quasi-experimental study to examine the effect of using frequent, automated examinations in an advanced computer science course, "Programming Languages and Compilers" (CS 421). In Fall 2014, students were given traditional paper-based exams, but in Fall 2015 a computer-based testing facility enabled the course to offer more frequent examinations while other aspects of the course were held constant. A comparison of 292 student scores across the two semesters revealed a significant change in the distribution of students' grades, with fewer students failing the final examination and proportionately more students earning grades of B and C instead. These data suggest that redesigning the nature of examinations may indeed be a relatively untapped opportunity to improve students' learning.
KW - Compilers
KW - Computer-based testing
KW - Programming languages
KW - Testing effect
UR - http://www.scopus.com/inward/record.url?scp=85046028459&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046028459&partnerID=8YFLogxK
U2 - 10.1145/3159450.3159500
DO - 10.1145/3159450.3159500
M3 - Conference contribution
AN - SCOPUS:85046028459
T3 - SIGCSE 2018 - Proceedings of the 49th ACM Technical Symposium on Computer Science Education
SP - 568
EP - 573
BT - SIGCSE 2018 - Proceedings of the 49th ACM Technical Symposium on Computer Science Education
PB - Association for Computing Machinery
T2 - 49th ACM Technical Symposium on Computer Science Education, SIGCSE 2018
Y2 - 21 February 2018 through 24 February 2018
ER -