Abstract

In this paper, we study a computerized exam system that allows students to attempt the same question multiple times. The system lets students either receive feedback on a submitted answer immediately or defer feedback and grade questions in bulk. An analysis of student behavior in three courses across two semesters found similar behaviors across courses and student groups. We found that only a small minority of students used the deferred-feedback option. A clustering analysis that considered both when students chose to receive feedback and whether they immediately retried incorrect problems or moved on to other unfinished problems identified four main student strategies. These strategies correlated with statistically significant differences in exam scores, but it is not clear whether some strategies improved outcomes or whether stronger students simply preferred certain strategies.
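The paper does not reproduce its analysis code here. Purely as an illustration, the following is a minimal sketch of how a clustering of exam-taking strategies along these two behavioral dimensions might look. The feature names and data are hypothetical (not taken from the paper), and the use of scikit-learn's KMeans with k=4 is an assumption chosen to mirror the four reported strategies, not the authors' actual method.

```python
# Illustrative sketch only; feature names and data are hypothetical,
# not taken from the paper. Assumes scikit-learn is installed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-student features:
#   column 0: share of submissions graded immediately (vs. deferred)
#   column 1: share of incorrect answers retried right away
#             (vs. moving on to other unfinished problems)
X = rng.random((200, 2))

# Standardize so neither feature dominates the Euclidean distance.
X_scaled = StandardScaler().fit_transform(X)

# k=4 mirrors the four main strategies reported in the paper.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_scaled)

for cluster in range(4):
    members = X[labels == cluster]
    print(f"Strategy cluster {cluster}: {len(members)} students, "
          f"mean features {members.mean(axis=0).round(2)}")
```

With real interaction logs, one would then test whether cluster membership is associated with exam scores, for example with an ANOVA across the four clusters, while keeping in mind the paper's caveat that such a correlation does not establish which direction the effect runs.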

Original language: English (US)
Title of host publication: L@S 2020 - Proceedings of the 7th ACM Conference on Learning @ Scale
Publisher: Association for Computing Machinery
Pages: 329-332
Number of pages: 4
ISBN (Electronic): 9781450379519
State: Published - Aug 12 2020
Event: 7th Annual ACM Conference on Learning at Scale, L@S 2020 - Virtual, Online, United States
Duration: Aug 12 2020 - Aug 14 2020

Publication series

Name: L@S 2020 - Proceedings of the 7th ACM Conference on Learning @ Scale

Conference

Conference: 7th Annual ACM Conference on Learning at Scale, L@S 2020
Country/Territory: United States
City: Virtual, Online
Period: 8/12/20 - 8/14/20

Keywords

  • agency
  • assessment
  • computer-based testing
  • computerized exams
  • multiple attempts

ASJC Scopus subject areas

  • Software
  • Computer Networks and Communications

Fingerprint: research topics of 'A Quantitative Analysis of When Students Choose to Grade Questions on Computerized Exams with Multiple Attempts'.