TY - GEN
T1 - Measuring the score advantage on asynchronous exams in an undergraduate CS course
AU - Silva, Mariana
AU - West, Matthew
AU - Zilles, Craig
N1 - Publisher Copyright:
© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2020/2/26
Y1 - 2020/2/26
N2 - This paper presents the results of a controlled crossover experiment designed to measure the score advantage that students have when taking exams asynchronously (i.e., the students can select a time to take the exam in a multi-day window) compared to synchronous exams (i.e., all students take the exam at the same time). The study was performed in an upper-division undergraduate computer science course with 321 students. Stratified sampling was used to randomly assign the students to two groups that alternated between the two treatments (synchronous versus asynchronous exams) across a series of four exams during the semester. These non-programming exams consisted of a mix of multiple choice, checkbox, and numeric input questions. For some questions, the parameters were randomized so that students received different versions of the question, while other questions were identical for all students. In our results, students taking the exams asynchronously had scores that were on average only 3% higher (0.2 of a standard deviation). Furthermore, we found that the score advantage was reduced by the use of randomized questions and did not differ significantly by question type. Thus, our results suggest that asynchronous exams can be a compelling alternative to synchronous exams.
AB - This paper presents the results of a controlled crossover experiment designed to measure the score advantage that students have when taking exams asynchronously (i.e., the students can select a time to take the exam in a multi-day window) compared to synchronous exams (i.e., all students take the exam at the same time). The study was performed in an upper-division undergraduate computer science course with 321 students. Stratified sampling was used to randomly assign the students to two groups that alternated between the two treatments (synchronous versus asynchronous exams) across a series of four exams during the semester. These non-programming exams consisted of a mix of multiple choice, checkbox, and numeric input questions. For some questions, the parameters were randomized so that students received different versions of the question, while other questions were identical for all students. In our results, students taking the exams asynchronously had scores that were on average only 3% higher (0.2 of a standard deviation). Furthermore, we found that the score advantage was reduced by the use of randomized questions and did not differ significantly by question type. Thus, our results suggest that asynchronous exams can be a compelling alternative to synchronous exams.
KW - Asynchronous exams
KW - Cheating
KW - Computer-based testing
KW - Randomized questions
UR - http://www.scopus.com/inward/record.url?scp=85081535488&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081535488&partnerID=8YFLogxK
U2 - 10.1145/3328778.3366859
DO - 10.1145/3328778.3366859
M3 - Conference contribution
AN - SCOPUS:85081535488
T3 - SIGCSE 2020 - Proceedings of the 51st ACM Technical Symposium on Computer Science Education
SP - 873
EP - 879
BT - SIGCSE 2020 - Proceedings of the 51st ACM Technical Symposium on Computer Science Education
T2 - 51st ACM SIGCSE Technical Symposium on Computer Science Education, SIGCSE 2020
Y2 - 11 March 2020 through 14 March 2020
ER -