A Comparison of the Response-Pattern-Based Faking Detection Methods

Weiwen Nie, Ivan Hernandez, Louis Tay, Bo Zhang, Mengyang Cao

Research output: Contribution to journal › Article › peer-review

Abstract

The covariance index method, the idiosyncratic item response method, and the machine learning method are the three primary response-pattern-based (RPB) approaches to detecting faking on personality tests. However, little is known about how their performance is affected by practical factors (e.g., scale length, training sample size, proportion of faking participants) or about the conditions under which each method performs optimally. In the present study, we systematically compared the three RPB faking detection methods across conditions in three empirical-data-based resampling studies. Overall, the machine learning method outperformed the other two RPB faking detection methods in most simulation conditions. We also found that the faking probabilities produced by all three RPB methods had moderate to strong positive correlations with true personality scores, suggesting that these methods are likely to misclassify honest respondents with truly high trait scores as fakers. Fortunately, the benefit of removing suspicious fakers still outweighed the consequences of such misclassification. Finally, we provided practical guidance for researchers and practitioners on optimally implementing the machine learning method and offered step-by-step code.
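The machine learning approach described in the abstract treats faking detection as a supervised classification problem over item-level response patterns, producing a faking probability for each respondent. The article's own step-by-step code is not reproduced here; the sketch below is a minimal, hypothetical illustration only, with the simulated Likert data, scale length, faking proportion, logistic regression model, and .50 cutoff all chosen as assumptions for demonstration.

```python
# Hypothetical sketch of a response-pattern-based (RPB) machine learning
# faking detector. This is NOT the authors' step-by-step code; the data,
# scale length, and model choice (logistic regression) are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

n_items = 20                    # assumed scale length
n_honest, n_faking = 600, 200   # assumed proportion of faking participants

# Simulate 5-point Likert responses: honest respondents answer around their
# latent trait level; fakers shift responses toward the desirable endpoint.
trait = rng.normal(0, 1, size=n_honest + n_faking)
responses = np.clip(
    np.round(3 + trait[:, None] + rng.normal(0, 1, (n_honest + n_faking, n_items))),
    1, 5)
labels = np.r_[np.zeros(n_honest), np.ones(n_faking)]  # 1 = faking
responses[labels == 1] = np.clip(
    responses[labels == 1] + rng.integers(1, 3, (n_faking, n_items)), 1, 5)

# Train a classifier on item-level response patterns and score held-out
# respondents with a faking probability, mirroring the RPB logic.
X_train, X_test, y_train, y_test = train_test_split(
    responses, labels, test_size=0.3, stratify=labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
faking_prob = clf.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, faking_prob), 3))

# Flag respondents whose faking probability exceeds a chosen cutoff (e.g., .50).
flagged = faking_prob >= 0.50
print("Flagged as suspicious fakers:", int(flagged.sum()), "of", len(flagged))
```

Because the abstract notes that faking probabilities correlate with true trait scores, any cutoff used to remove "suspicious fakers" in practice would need to be weighed against the risk of excluding honest, high-trait respondents.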

Original language: English (US)
Journal: Journal of Applied Psychology
Early online date: Jan 20, 2025
DOIs
State: E-pub ahead of print - Jan 20, 2025

Keywords

  • faking
  • machine learning
  • personality assessment
  • selection

ASJC Scopus subject areas

  • Applied Psychology
