Evaluation of Two Methods for Estimating Item Response Theory Parameters When Assessing Differential Item Functioning

Rodney G. Lim, Fritz Drasgow

Research output: Contribution to journal › Article › peer-review

Abstract

Recent legal developments appear to sanction the use of psychometrically unsound procedures for examining differential item functioning (DIF) on standardized tests. More appropriate approaches involve the use of item response theory (IRT). However, many IRT-based DIF studies have used Lord's (1968) joint maximum likelihood procedure, which can lead to incorrect and misleading results. A Monte Carlo simulation was conducted to evaluate the effectiveness of two other methods of parameter estimation: marginal maximum likelihood estimation and Bayes modal estimation. Sample size and data dimensionality were manipulated in the simulation. Results indicated that both estimation methods (a) provided more accurate parameter estimates and less inflated Type I error rates than joint maximum likelihood, (b) were robust to multidimensionality, and (c) produced more accurate parameter estimates and higher rates of identifying DIF with larger samples.
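To make the abstract's comparison concrete, the following is a minimal sketch of marginal maximum likelihood (MML) estimation for a single item, in which the latent trait is integrated out under a standard-normal prior via quadrature. This is illustrative only, not the authors' implementation: it assumes a two-parameter logistic (2PL) model and simulated responses, whereas the study's actual models, software, and simulation design are described in the full article.

```python
# Hedged sketch of MML estimation for one 2PL item (illustrative assumptions:
# 2PL model, simulated data, true parameters a=1.2, b=0.5).
import numpy as np
from scipy.optimize import minimize
from scipy.special import roots_hermitenorm  # probabilists' Hermite quadrature

rng = np.random.default_rng(0)

def icc(theta, a, b):
    """2PL item characteristic curve: P(correct response | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Simulate responses for one item from N examinees.
N = 2000
theta_true = rng.standard_normal(N)
y = (rng.random(N) < icc(theta_true, 1.2, 0.5)).astype(float)

# Quadrature nodes/weights for integrating theta out under theta ~ N(0, 1).
nodes, weights = roots_hermitenorm(21)
weights = weights / weights.sum()

def neg_marginal_loglik(params):
    a, b = params
    p = icc(nodes[None, :], a, b)                         # shape (1, Q)
    like = p ** y[:, None] * (1 - p) ** (1 - y[:, None])  # shape (N, Q)
    marg = like @ weights                                 # theta integrated out
    return -np.log(marg).sum()

res = minimize(neg_marginal_loglik, x0=[1.0, 0.0], method="L-BFGS-B",
               bounds=[(0.05, 5.0), (-4.0, 4.0)])
print("MML estimates (a, b):", res.x)
```

Bayes modal estimation, the other method the study evaluated, would differ from this sketch only in adding a log-prior term on the item parameters to the objective, so the mode of the posterior rather than the marginal likelihood is maximized. Unlike joint maximum likelihood, neither method estimates each examinee's theta as a free parameter, which is the source of the improved accuracy the abstract reports.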

Original language: English (US)
Pages (from-to): 164-174
Number of pages: 11
Journal: Journal of Applied Psychology
Volume: 75
Issue number: 2
State: Published - Apr 1990

ASJC Scopus subject areas

  • Applied Psychology

