To weight or not to weight? Balancing influence of initial items in adaptive testing

Hua Hua Chang, Zhiliang Ying

Research output: Contribution to journal › Article › peer-review

Abstract

It has been widely reported that in computerized adaptive testing some examinees may receive much lower scores than they normally would if an equivalent paper-and-pencil version were given. The main purpose of this investigation is to quantitatively reveal the cause of this underestimation phenomenon. The logistic models, including the 1PL, 2PL, and 3PL models, are used to demonstrate our assertions. Our analytical derivation shows that, under the maximum information item selection strategy, if an examinee fails a few items at the beginning of the test, easy but highly discriminating items are likely to be administered next. Such items are ineffective at moving the estimate close to the true θ unless the test is sufficiently long or a variable-length test is used. Our results also indicate that a certain weighting mechanism is necessary to make the algorithm rely less on the items administered at the beginning of the test.
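The mechanism described in the abstract can be illustrated with a short simulation. The sketch below is not the authors' code: it assumes a 2PL model, a hypothetical 500-item pool with discriminations in [0.5, 2.5] and difficulties in [-3, 3], a grid-search MLE in place of a Newton-type solver, and three forced incorrect responses at the start for an examinee whose true θ is 0. Under maximum Fisher information selection, the early misses drive the estimate toward the bottom of the grid, after which easy, high-discrimination items are selected and the estimate recovers only gradually.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def mle_theta(a_adm, b_adm, u_adm, grid=np.linspace(-4, 4, 801)):
    """Grid-search MLE of theta over the administered items
    (a simple stand-in for a Newton-Raphson solver)."""
    ll = np.zeros_like(grid)
    for a, b, u in zip(a_adm, b_adm, u_adm):
        p = p_2pl(grid, a, b)
        ll += u * np.log(p) + (1 - u) * np.log(1.0 - p)
    return grid[np.argmax(ll)]

rng = np.random.default_rng(0)
pool_a = rng.uniform(0.5, 2.5, 500)   # hypothetical discrimination parameters
pool_b = rng.uniform(-3.0, 3.0, 500)  # hypothetical difficulty parameters
true_theta = 0.0

a_adm, b_adm, u_adm = [], [], []
used = np.zeros(500, dtype=bool)
theta_hat = 0.0  # conventional starting estimate

for t in range(20):
    # maximum-information selection at the current ability estimate
    info = fisher_info(theta_hat, pool_a, pool_b)
    info[used] = -np.inf
    j = int(np.argmax(info))
    used[j] = True
    # force incorrect answers on the first three items to mimic a bad start
    if t < 3:
        u = 0
    else:
        u = int(rng.random() < p_2pl(true_theta, pool_a[j], pool_b[j]))
    a_adm.append(pool_a[j]); b_adm.append(pool_b[j]); u_adm.append(u)
    theta_hat = mle_theta(a_adm, b_adm, u_adm)
    print(f"item {t + 1:2d}: a={pool_a[j]:.2f}  b={pool_b[j]:+.2f}  "
          f"u={u}  theta_hat={theta_hat:+.2f}")
```

Running the sketch shows the pattern the abstract analyzes: after the forced misses the selected items have low b and high a, and the fixed-length estimate climbs back toward θ = 0 only slowly, which is the motivation for down-weighting the earliest items.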

Original language: English (US)
Pages (from-to): 441-450
Number of pages: 10
Journal: Psychometrika
Volume: 73
Issue number: 3
DOIs
State: Published - Sep 2008

Keywords

  • A-stratified method
  • Computerized adaptive testing
  • Fisher information
  • Item selection algorithm
  • MLE

ASJC Scopus subject areas

  • General Psychology
  • Applied Mathematics
