Abstract
Rating scales that language testers design should be tailored to the specific test purpose and score use, as well as reflect the target construct. Researchers have long argued for the value of data-driven scales in classroom performance assessment because they are specific to pedagogical tasks and objectives, offer rich descriptors that provide useful diagnostic information, and exhibit robust content representativeness and stable measurement properties. This sequential mixed-methods study compares two multi-criterion, data-driven rating scales for pragmatic performance that differ in format. Both were developed using roleplays performed by 43 second-language learners of Mandarin: the hierarchical-binary (HB) scale, developed through close analysis of the performance data, and the multi-trait (MT) scale, derived from the HB scale, which retains the same criteria but takes the format of an analytic scale. Results revealed the influence of format, albeit to a limited extent: the MT scale showed a marginal advantage over the HB scale in overall reliability, practicality, and discriminatory power, though the measurement properties of the two scales were largely comparable. All raters were positive about the pedagogical value of both scales. The study also shows that rater perceptions of the ease of use and effectiveness of both scales provide further insight into scale functioning.
Original language | English (US) |
---|---|
Pages (from-to) | 357-383 |
Number of pages | 27 |
Journal | Language Testing |
Volume | 41 |
Issue number | 2 |
DOIs | |
State | Published - Apr 2024 |
Keywords
- Data-driven scales
- performance assessment
- pragmatic competence
- rating scale functioning
- refusals
- roleplay
ASJC Scopus subject areas
- Language and Linguistics
- Social Sciences (miscellaneous)
- Linguistics and Language