TY - JOUR
T1 - If at First You Don’t Succeed, Try, Try Again
T2 - Applications of Sequential IRT Models to Cognitive Assessments
AU - Culpepper, Steven Andrew
N1 - Publisher Copyright:
© The Author(s) 2014.
PY - 2014/11/13
Y1 - 2014/11/13
N2 - Previous research has considered sequential item response theory (SIRT) models for circumstances where examinees are allowed at least one opportunity to correctly answer questions. Research suggests that employing answer-until-correct assessment frameworks with partial feedback can promote student learning and improve score precision. This article describes SIRT models for cases when test takers are allowed a finite number of repeated attempts on items. An overview of SIRT models is provided and the Rasch SIRT is discussed as a special case. Three applications are presented using assessment data from a calculus-based probability theory course. The first application estimates a Rasch SIRT model using marginal maximum likelihood and Markov chain Monte Carlo procedures; students with higher latent variable scores tend to have more knowledge and are better able to retrieve that knowledge in fewer attempts. The second application uses R to estimate growth-curve SIRT models that account for individual differences in content knowledge and recovery/retrieval rates. The third application is a multidimensional SIRT model that estimates an attempt-specific latent proficiency variable. The implications of SIRT models and answer-until-correct assessment frameworks are discussed for researchers, psychometricians, and test developers.
AB - Previous research has considered sequential item response theory (SIRT) models for circumstances where examinees are allowed at least one opportunity to correctly answer questions. Research suggests that employing answer-until-correct assessment frameworks with partial feedback can promote student learning and improve score precision. This article describes SIRT models for cases when test takers are allowed a finite number of repeated attempts on items. An overview of SIRT models is provided and the Rasch SIRT is discussed as a special case. Three applications are presented using assessment data from a calculus-based probability theory course. The first application estimates a Rasch SIRT model using marginal maximum likelihood and Markov chain Monte Carlo procedures; students with higher latent variable scores tend to have more knowledge and are better able to retrieve that knowledge in fewer attempts. The second application uses R to estimate growth-curve SIRT models that account for individual differences in content knowledge and recovery/retrieval rates. The third application is a multidimensional SIRT model that estimates an attempt-specific latent proficiency variable. The implications of SIRT models and answer-until-correct assessment frameworks are discussed for researchers, psychometricians, and test developers.
KW - Answer-until-correct
KW - Computerized assessment
KW - Item information
KW - Repeated attempts
KW - Sequential item response theory models
UR - http://www.scopus.com/inward/record.url?scp=84912051249&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84912051249&partnerID=8YFLogxK
U2 - 10.1177/0146621614536464
DO - 10.1177/0146621614536464
M3 - Article
AN - SCOPUS:84912051249
SN - 0146-6216
VL - 38
SP - 632
EP - 644
JO - Applied Psychological Measurement
JF - Applied Psychological Measurement
IS - 8
ER -