A Hierarchical Model for Accuracy and Choice on Standardized Tests

Steven Andrew Culpepper, James Joseph Balamuta

Research output: Contribution to journal › Article › peer-review

Abstract

This paper assesses the psychometric value of allowing test-takers choice in standardized testing. New theoretical results examine the conditions under which allowing choice improves score precision. A hierarchical framework is presented for jointly modeling the accuracy of cognitive responses and item choices. The statistical methodology is disseminated in the ‘cIRT’ R package. An ‘answer two, choose one’ (A2C1) test administration design is introduced to avoid challenges associated with nonignorable missing data. Experimental results suggest that the A2C1 design and payout structure encouraged subjects to choose items consistent with their cognitive trait levels. Substantively, the experimental data suggest that item choices yielded information and discrimination comparable to the cognitive items. Given that there are no clear guidelines for writing more or less discriminating items, one practical implication is that choice can serve as a mechanism to improve score precision.
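The methodology is distributed through the cIRT package on CRAN. The following is a minimal sketch of how the joint accuracy-and-choice model might be fit with that package; the data set names and argument names shown are assumptions for illustration rather than a verbatim reproduction of the package interface, so the package help pages should be consulted before use.

## Hedged sketch: fitting a hierarchical accuracy-and-choice model with the
## 'cIRT' R package. Data set and argument names below are assumptions;
## consult the package documentation (?cIRT) for the exact interface.
install.packages("cIRT")   # once, from CRAN
library(cIRT)

## Hypothetical inputs: a binary accuracy matrix (subjects x items) and a
## record of observed item choices from an A2C1 administration.
# data(trial_matrix)    # example accuracy data assumed to ship with the package
# data(choice_matrix)   # example choice data assumed to ship with the package

## A call to the main MCMC estimation routine might look like the following
## (commented out because the argument list is illustrative only):
# fit <- cIRT(subject_ids  = choice_matrix$subject_id,
#             trial_matrix = as.matrix(trial_matrix),
#             choices_nk   = choice_matrix$choose_maximum_trial,
#             burnit       = 1000,
#             chain_length = 10000)
# str(fit)   # posterior draws for item and person parameters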

Original language: English (US)
Pages (from-to): 820-845
Number of pages: 26
Journal: Psychometrika
Volume: 82
Issue number: 3
DOIs:
State: Published - Sep 1, 2017

Keywords

  • Bayesian statistics
  • Thurstonian models
  • choice
  • high-stakes testing
  • item response theory

ASJC Scopus subject areas

  • General Psychology
  • Applied Mathematics
