Severity of organized item theft in computerized adaptive testing: A simulation study

Research output: Contribution to journal › Article › peer-review

Abstract

Criteria have been proposed for assessing the severity of possible test security violations in computerized tests with high-stakes outcomes. However, these criteria were derived theoretically under the assumption of uniformly randomized item selection. This study investigated the potential damage caused by organized item theft in computerized adaptive testing (CAT) for two realistic item selection methods, maximum item information and a-stratified with content blocking, using the randomized method as a baseline for comparison. Damage caused by organized item theft was evaluated by the number of compromised items each examinee could encounter and by the impact of the compromised items on examinees' ability estimates. Severity of test security violation was assessed under self-organized and organized item theft simulation scenarios. Results indicated that although item theft could cause severe damage to CAT under either item selection method, the maximum item information method was more vulnerable to the organized item theft scenario than the a-stratified method.
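The maximum item information rule named in the abstract administers, at each step, the unused item that is most informative at the examinee's current ability estimate. As a minimal illustrative sketch (not the authors' simulation code), the snippet below implements that rule assuming a 3PL item response model and hypothetical item parameters; the pool contents and function names are illustrative only.

import numpy as np

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability level theta."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))
    return (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

def select_max_info_item(theta_hat, pool, administered):
    """Return the index of the unadministered item with maximum information at theta_hat."""
    best_idx, best_info = None, -np.inf
    for idx, (a, b, c) in enumerate(pool):
        if idx in administered:
            continue
        info = item_information(theta_hat, a, b, c)
        if info > best_info:
            best_idx, best_info = idx, info
    return best_idx

# Hypothetical pool of (a, b, c) parameters; item 0 has already been administered.
pool = [(1.2, -0.5, 0.20), (0.8, 0.0, 0.15), (1.5, 0.4, 0.25)]
print(select_max_info_item(0.0, pool, administered={0}))

Because this rule repeatedly favors the most informative items, a small subset of the pool tends to be overexposed, which is consistent with the abstract's finding that it is more vulnerable to organized item theft than the a-stratified method, which spreads exposure across strata of discrimination parameters.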

Original language: English (US)
Pages (from-to): 543-558
Number of pages: 16
Journal: Applied Psychological Measurement
Volume: 32
Issue number: 7
DOIs
State: Published - Oct 2008

Keywords

  • Computerized adaptive testing
  • Item selection methods
  • Organized item theft
  • Test security

ASJC Scopus subject areas

  • General Psychology
  • Psychology (miscellaneous)
  • Social Sciences (miscellaneous)
