Adaptive Testing With Multidimensional Pairwise Preference Items: Improving the Efficiency of Personality and Other Noncognitive Assessments

Stephen Stark, Oleksandr S. Chernyshenko, Fritz Drasgow, Leonard A. White

Research output: Contribution to journal › Article

Abstract

Assessment of noncognitive constructs in organizational research and practice is challenging because of response biases that can distort test scores. Researchers must also contend with time constraints and the resulting trade-offs between test length and the number of constructs measured. This article describes a novel way of improving the efficiency of noncognitive assessments using computer adaptive testing (CAT) with multidimensional pairwise preference (MDPP) items. Tests composed of MDPP items belong to a broader family of forced-choice measures that ask respondents to choose between two or more equally desirable statements in an effort to combat response distortion. The authors conducted four computer simulations to examine the effects of test design and dimensionality, as well as the advantages of adaptive item selection, on trait score and error estimation with tests involving as many as 25 dimensions. Overall, adaptive MDPP testing produced gains in accuracy over nonadaptive MDPP tests comparable to those observed with traditional unidimensional CATs. In addition, an empirical illustration involving a 15-dimension MDPP CAT administered in a field setting yielded patterns of correlations consistent with expectations, providing evidence of construct validity.
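
To make the MDPP CAT idea concrete, below is a minimal sketch of an adaptive pairwise-preference loop in Python with NumPy. It assumes a simple ideal-point endorsement curve for individual statements, a MUPP-style rule for the probability of preferring one statement over the other, a selection heuristic that favors unused pairs whose predicted choice probability is near 0.5 at the current trait estimates, and a particle (importance-sampling) approximation to the trait posterior. All of these choices, including the parameter values, pool size, and test length, are illustrative assumptions rather than the authors' operational algorithm.

# Hedged sketch of an MDPP-style CAT loop: ideal-point endorsement, MUPP-style
# pair probabilities, heuristic adaptive selection, and particle-based scoring.
# Everything below is an illustrative assumption, not the article's procedure.
import numpy as np

rng = np.random.default_rng(7)
D = 3            # number of trait dimensions (the article goes up to 25)
POOL = 300       # number of candidate statement pairs in the pool
TEST_LEN = 30    # number of pairs administered adaptively

def endorse(theta, delta, tau=1.0):
    # Ideal-point endorsement: most likely when theta is near the statement
    # location delta; a squared-distance logistic stands in for a GGUM-type model.
    return 1.0 / (1.0 + np.exp((theta - delta) ** 2 / tau - 1.0))

def pair_prob(th_s, th_t, d_s, d_t):
    # MUPP-style probability of preferring statement s over statement t:
    # P(s > t) = P_s(1)P_t(0) / [P_s(1)P_t(0) + P_s(0)P_t(1)].
    ps, pt = endorse(th_s, d_s), endorse(th_t, d_t)
    num = ps * (1.0 - pt)
    return num / (num + (1.0 - ps) * pt)

# Item pool: each pair links one statement on dimension a to one on dimension b.
dims_a = rng.integers(0, D, POOL)
dims_b = (dims_a + rng.integers(1, D, POOL)) % D   # force different dimensions
loc_a = rng.normal(0, 1.5, POOL)
loc_b = rng.normal(0, 1.5, POOL)

true_theta = rng.normal(0, 1, D)

# Posterior approximated by weighted prior draws (simple importance sampling).
particles = rng.normal(0, 1, (5000, D))
weights = np.ones(len(particles)) / len(particles)

used = np.zeros(POOL, dtype=bool)
for _ in range(TEST_LEN):
    est = weights @ particles                      # current trait estimates
    # Selection heuristic: among unused pairs, prefer the one whose choice
    # probability at the current estimate is closest to 0.5 (most informative).
    p_at_est = pair_prob(est[dims_a], est[dims_b], loc_a, loc_b)
    score = np.where(used, np.inf, np.abs(p_at_est - 0.5))
    j = int(np.argmin(score))
    used[j] = True

    # Simulate the examinee's response from the true trait levels.
    p_true = pair_prob(true_theta[dims_a[j]], true_theta[dims_b[j]], loc_a[j], loc_b[j])
    resp = rng.random() < p_true

    # Bayesian update of the particle weights given the observed choice.
    p_part = pair_prob(particles[:, dims_a[j]], particles[:, dims_b[j]], loc_a[j], loc_b[j])
    weights *= p_part if resp else (1.0 - p_part)
    weights /= weights.sum()

print("true traits:     ", np.round(true_theta, 2))
print("estimated traits:", np.round(weights @ particles, 2))

Running the script administers 30 adaptively chosen pairs to one simulated examinee and prints the true and estimated trait vectors. Replacing the endorsement curve with a GGUM-type model and the 0.5-probability heuristic with a formal information criterion would move the sketch closer to the designs studied in the article.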

Original language: English (US)
Pages (from-to): 463-487
Number of pages: 25
Journal: Organizational Research Methods
Volume: 15
Issue number: 3
DOIs
State: Published - Jul 1 2012


Keywords

  • CAT
  • IRT
  • computerized adaptive testing
  • ideal point
  • item response theory
  • multidimensional forced choice
  • pairwise preference

ASJC Scopus subject areas

  • Decision Sciences (all)
  • Strategy and Management
  • Management of Technology and Innovation
