Arab World English Journal (AWEJ) Volume 9. Number 2. June 2018                 Pp. 157 -174


Human versus Automated Essay Scoring: A Critical Review

Beata Lewis Sevcikova
Applied Linguistics Department, College of Humanities
Prince Sultan University, Riyadh, Saudi Arabia


In the last 30 years, numerous scholars have described possible changes in the marking of writing assignments. This paper reflects those developments as it charts the paths recently taken in the field, evaluates automated and human essay scoring in academic environments, and analyzes the implications of both approaches. In recent years, the ways and opportunities for giving feedback have changed as computer programs have come into wider use for assessing students' writing. Numerous researchers have studied computerized feedback and its potential, analyzing problems such as the quality, validity, and reliability of this type of feedback. This critical review examines two major types of academic writing support. Based on the literature review, the objective of the study is to examine the potential of human and automated proofreaders to support teaching and learning.
Keywords: assessment, rubrics, feedback, writing, automated essay scoring, human raters

Cite as: Lewis Sevcikova, B. (2018). Human versus Automated Essay Scoring: A Critical Review. Arab World English Journal, 9(2). DOI:

ORCID ID: 0000-0002-4347-0489

Dr. Beata Lewis Sevcikova is an experienced educator who has participated in numerous in-house and international workshops and symposiums. In her research, she focuses on teachers’ experiences in the use of new technology in the classroom.