Arab World English Journal (AWEJ) Volume 9. Number 2. June 2018 Pp. 157 -174
DOI: https://dx.doi.org/10.24093/awej/vol9no2.11
Human versus Automated Essay Scoring: A Critical Review
Beata Lewis Sevcikova
Applied Linguistics Department, College of Humanities
Prince Sultan University, Riyadh, Saudi Arabia
Abstract:
In the last 30 years, numerous scholars have described possible changes in the marking of writing assignments. This paper reflects those developments as it charts the paths recently taken in the field, evaluates automated and human essay scoring systems in academic environments, and analyzes the implications of both systems. In recent years, the ways and opportunities for giving feedback have changed as computer programs have become more widely used in assessing students' writing. Numerous researchers have studied computerized feedback and its potential, analyzing issues such as the quality, validity, and reliability of this type of feedback. This critical review examines two major types of academic writing support. The objective of this literature-based study is to examine the potential of human and automated proofreaders to support teaching and learning.
Keywords: assessment, rubrics, feedback, writing, automated essay scoring, human raters
Cite as: Lewis Sevcikova, B. (2018). Human versus Automated Essay Scoring: A Critical Review. Arab World English Journal, 9 (2). DOI: https://dx.doi.org/10.24093/awej/vol9no2.11