TY - GEN
T1 - A Quantitative Analysis of When Students Choose to Grade Questions on Computerized Exams with Multiple Attempts
AU - Verma, Ashank
AU - Bretl, Timothy Wolfe
AU - West, Matthew
AU - Zilles, Craig
N1 - Funding Information:
This work was partially supported by NSF DUE-1915257 and the College of Engineering at the University of Illinois at Urbana-Champaign under the Strategic Instructional Initiatives Program (SIIP). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
PY - 2020/8/12
Y1 - 2020/8/12
N2 - In this paper, we study a computerized exam system that allows students to attempt the same question multiple times. The system lets students either receive feedback on a submitted answer immediately or defer the feedback and grade questions in bulk. An analysis of student behavior in three courses across two semesters found similar behaviors across courses and student groups. Only a small minority of students used the deferred feedback option. A clustering analysis that considered both when students chose to receive feedback and whether they chose to immediately retry incorrect problems or to move on to other unfinished problems identified four main student strategies. These strategies correlated with statistically significant differences in exam scores, but it was not clear whether some strategies improved outcomes or whether stronger students simply preferred certain strategies.
AB - In this paper, we study a computerized exam system that allows students to attempt the same question multiple times. The system lets students either receive feedback on a submitted answer immediately or defer the feedback and grade questions in bulk. An analysis of student behavior in three courses across two semesters found similar behaviors across courses and student groups. Only a small minority of students used the deferred feedback option. A clustering analysis that considered both when students chose to receive feedback and whether they chose to immediately retry incorrect problems or to move on to other unfinished problems identified four main student strategies. These strategies correlated with statistically significant differences in exam scores, but it was not clear whether some strategies improved outcomes or whether stronger students simply preferred certain strategies.
KW - agency
KW - assessment
KW - computer-based testing
KW - computerized exams
KW - multiple attempts
UR - http://www.scopus.com/inward/record.url?scp=85094895597&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094895597&partnerID=8YFLogxK
U2 - 10.1145/3386527.3406740
DO - 10.1145/3386527.3406740
M3 - Conference contribution
AN - SCOPUS:85094895597
T3 - L@S 2020 - Proceedings of the 7th ACM Conference on Learning @ Scale
SP - 329
EP - 332
BT - L@S 2020 - Proceedings of the 7th ACM Conference on Learning @ Scale
PB - Association for Computing Machinery
T2 - 7th Annual ACM Conference on Learning at Scale, L@S 2020
Y2 - 12 August 2020 through 14 August 2020
ER -