More than g: Evidence for the Incremental Validity of Performance-Based Assessments for Predicting Training Performance

Christopher D. Nye, Oleksandr S. Chernyshenko, Stephen Stark, Fritz Drasgow, Henry L. Phillips, Jeffrey B. Phillips, Justin S. Campbell

Research output: Contribution to journal › Article › peer-review

Abstract

Past research has consistently shown that tests measuring specific cognitive abilities provide little, if any, incremental validity over tests of general mental ability when predicting performance on the job. In this study, we suggest that the seeming lack of incremental validity may have been due to the type of content that has traditionally been assessed. Therefore, we hypothesized that incremental validity can be obtained using specific cognitive abilities that are less highly correlated with g and are matched to the tasks performed on the job. To test this, we examined a recently developed performance-based measure that assesses a number of cognitive abilities related to training performance. In a sample of 310 US Navy student pilots, results indicated that performance-based scores added sizeable incremental validity to a measure of g. The significant increases in R² ranged from .08 to .10 across criteria. Similar results were obtained after correcting correlations for range restriction, though the magnitude of incremental validity was slightly smaller (ΔR² ranged from .05 to .07).
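The incremental validity reported in the abstract follows the standard hierarchical regression logic: fit the criterion on g alone, then on g plus the performance-based scores, and take the gain in R² as ΔR². The sketch below illustrates that computation only; the data are simulated and the variable names (g, perf, criterion) are hypothetical stand-ins, not the study's measures or results.

```python
import numpy as np

def r_squared(X, y):
    """R² from an OLS fit of y on X (intercept added internally)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

# Simulated data for illustration only (not the study's data):
# a g score, a performance-based score modestly correlated with g,
# and a training criterion influenced by both.
rng = np.random.default_rng(0)
n = 310
g = rng.normal(size=n)
perf = 0.3 * g + rng.normal(size=n)
criterion = 0.4 * g + 0.3 * perf + rng.normal(size=n)

r2_step1 = r_squared(g.reshape(-1, 1), criterion)               # Step 1: g alone
r2_step2 = r_squared(np.column_stack([g, perf]), criterion)     # Step 2: g + performance-based scores
delta_r2 = r2_step2 - r2_step1                                  # incremental validity (ΔR²)
print(f"R²(g) = {r2_step1:.3f}, R²(g + perf) = {r2_step2:.3f}, ΔR² = {delta_r2:.3f}")
```

Note that this sketch uses uncorrected correlations; the abstract's second set of estimates additionally corrects for range restriction, which typically shrinks the observed increments, as reported.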

Original language: English (US)
Pages (from-to): 302-324
Number of pages: 23
Journal: Applied Psychology
Volume: 69
Issue number: 2
DOIs
State: Published - Apr 1 2020

ASJC Scopus subject areas

  • Developmental and Educational Psychology
  • Arts and Humanities (miscellaneous)
  • Applied Psychology
