Chang and Zhang (2002, 2003) proposed several baseline criteria for assessing the severity of possible test security violations on computerized tests with high‐stakes outcomes. However, these criteria were obtained from theoretical derivations that assumed uniformly randomized item selection. The current study investigated the potential damage caused by organized item theft in computerized adaptive testing (CAT) under two more realistic item selection methods, maximum item information and a‐stratified, using the randomized method as a baseline for comparison. The results indicated that the damage could be very severe, especially when the thieves took the test early in the operational life of an item pool. Among the three CAT methods examined, the maximum item information method with Sympson‐Hetter exposure control was the most vulnerable to organized item theft.
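The maximum item information method named in the abstract selects, at each step, the unadministered item with the largest Fisher information at the examinee's current ability estimate. The following is a minimal sketch of that selection rule, assuming a standard 3PL item response model; the function names and the toy item pool are illustrative, not from the report.

```python
import math

def item_information(theta, a, b, c=0.0):
    """Fisher information of a 3PL item at ability level theta.

    a: discrimination, b: difficulty, c: pseudo-guessing parameter.
    """
    p = c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
    q = 1.0 - p
    return (a ** 2) * (q / p) * ((p - c) / (1.0 - c)) ** 2

def max_info_select(theta, pool, administered):
    """Return the index of the most informative unadministered item.

    pool: list of (a, b, c) parameter tuples; administered: set of indices.
    """
    candidates = (i for i in range(len(pool)) if i not in administered)
    return max(candidates, key=lambda i: item_information(theta, *pool[i]))

# Toy 3-item pool (hypothetical parameters) with item 0 already given.
pool = [(1.2, -1.0, 0.2), (0.8, 0.0, 0.2), (1.5, 0.5, 0.2)]
print(max_info_select(0.5, pool, {0}))  # picks the highly discriminating item 2
```

Because this rule greedily reuses the most informative items, a small subset of the pool is administered very often, which is one reason the abstract finds it most vulnerable to organized item theft even with Sympson‐Hetter exposure control layered on top.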
Original language: English (US)
Journal: ETS Research Report Series
State: Published - Dec 2006
- test security
- computerized adaptive testing
- organized item theft
- item selection methods