Keeping up with the times: Revising and refreshing a rating scale

Jayanti Banerjee, Xun Yan, Mark Chapman, Heather Elliott

Research output: Contribution to journal › Article › peer-review

Abstract

In performance-based writing assessment, regular monitoring and modification of the rating scale are essential to ensure reliable test scores and valid score inferences. However, the development and modification of rating scales (particularly writing scales) are rarely discussed in the language assessment literature. The few studies documenting the scale development process have derived the rating scale from analyzing one or two data sources: expert intuition, rater discussion, and/or real performance. This study reports on the review and revision of a rating scale for the writing section of a large-scale, advanced-level English language proficiency examination. Specifically, this study first identified, from the literature, the features of written text that tend to reliably distinguish essays across proficiency levels. Next, using corpus-based tools, 796 essays were analyzed for text features that predict writing proficiency levels. Lastly, rater discussions were analyzed to identify components of the existing scale that raters found helpful for assigning scores. Based on these findings, a new rating scale was developed. The results of this work demonstrate the benefits of triangulating information from writing research, rater discussions, and real performances in rating scale design.
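To illustrate the corpus-based analysis step, the sketch below shows how text features might be tested for their ability to predict proficiency levels using discriminant function analysis (one of the paper's keywords). This is a minimal illustration only: the feature names, the synthetic data, and the use of scikit-learn's LinearDiscriminantAnalysis are assumptions for demonstration, not the authors' actual corpus tools or feature set.

```python
# Illustrative sketch only: feature names, synthetic data, and the choice of
# scikit-learn's LinearDiscriminantAnalysis are assumptions; the study's
# actual corpus tools and features are described in the paper itself.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-essay text features: lexical diversity, mean sentence
# length, and subordinate-clause ratio, across three proficiency levels.
n_per_level = 100
levels = np.repeat([0, 1, 2], n_per_level)  # 0=low, 1=mid, 2=high proficiency
X = np.vstack([
    rng.normal(loc=[0.45 + 0.05 * lvl,   # lexical diversity rises with level
                    14.0 + 2.0 * lvl,    # sentences lengthen with level
                    0.20 + 0.05 * lvl],  # more subordination at higher levels
               scale=[0.05, 2.5, 0.05],
               size=(n_per_level, 3))
    for lvl in range(3)
])

# Discriminant function analysis: find linear combinations of the features
# that best separate essays by proficiency level.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, levels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")

# The fitted coefficients indicate which features carry the most
# discriminating weight between levels.
lda.fit(X, levels)
print("Discriminant coefficients:\n", lda.coef_)
```

In a setup like this, features with consistently large discriminant coefficients and stable cross-validated accuracy would be candidates for inclusion as scale descriptors, since they reliably separate performance levels.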

Original language: English (US)
Pages (from-to): 5-19
Number of pages: 15
Journal: Assessing Writing
Volume: 26
State: Published - Oct 1 2015

Keywords

  • Corpora
  • Discriminant function analysis
  • Rating scale design
  • Scale validation

ASJC Scopus subject areas

  • Language and Linguistics
  • Education
  • Linguistics and Language
