Predicting the difficulty of automatic item generators on exams from their difficulty on homeworks

Binglin Chen, Matthew West, Craig Zilles

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

To design good assessments, it is useful to have an estimate of the difficulty of a novel exam question before running an exam. In this paper, we study a collection of a few hundred automatic item generators (short computer programs that generate a variety of unique item instances) and show that their exam difficulty can be roughly predicted from student performance on the same generator during pre-exam practice. Specifically, we show that the rate at which students correctly respond to a generator on an exam is on average within 5% of the correct rate for those students on their last practice attempt. This study is conducted with data from introductory undergraduate Computer Science and Mechanical Engineering courses.
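The comparison described in the abstract (per-generator exam correct rate versus the correct rate on each student's last pre-exam practice attempt) can be illustrated with a minimal sketch. This is not the authors' code; it assumes a pandas DataFrame of response records with hypothetical columns student, generator, phase ("practice" or "exam"), attempt, and correct (0/1).

```python
# Illustrative sketch only: estimate a generator's exam difficulty from each
# student's last pre-exam practice attempt on that generator.
# Column names below are assumptions, not the paper's actual data schema.
import pandas as pd

def practice_vs_exam_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-generator correct rates on the last practice attempt and on the exam."""
    practice = df[df["phase"] == "practice"]
    # Keep only each student's final practice attempt for each generator.
    last_practice = (
        practice.sort_values("attempt")
        .groupby(["student", "generator"], as_index=False)
        .tail(1)
    )
    exam = df[df["phase"] == "exam"]

    rates = pd.DataFrame({
        "practice_rate": last_practice.groupby("generator")["correct"].mean(),
        "exam_rate": exam.groupby("generator")["correct"].mean(),
    })
    # Absolute gap between exam difficulty and last-practice difficulty per generator.
    rates["abs_diff"] = (rates["exam_rate"] - rates["practice_rate"]).abs()
    return rates

# Usage (hypothetical): rates = practice_vs_exam_rates(responses)
#                       rates["abs_diff"].mean()  # average prediction error
```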

Original language: English (US)
Title of host publication: Proceedings of the 6th 2019 ACM Conference on Learning at Scale, L@S 2019
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450368049
DOIs
State: Published - Jun 24 2019
Event: 6th ACM Conference on Learning at Scale, L@S 2019 - Chicago, United States
Duration: Jun 24 2019 - Jun 25 2019

Publication series

Name: Proceedings of the 6th 2019 ACM Conference on Learning at Scale, L@S 2019

Conference

Conference: 6th ACM Conference on Learning at Scale, L@S 2019
Country/Territory: United States
City: Chicago
Period: 6/24/19 - 6/25/19

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Education
