Arab World English Journal (AWEJ) Special Issue on CALL Number 7. July 2021                       Pp.155-164
DOI: https://dx.doi.org/10.24093/awej/call7.11


Automated Complexity Assessment of English Informational Texts
for EFL Pre-service Teachers and Translators 

Valentyna Parashchuk

Department of English Language and ELT Methodology, Volodymyr Vynnychenko
Central Ukrainian State Pedagogical University, Kropyvnytskyi, Ukraine
Corresponding Author: valparashchuk@gmail.com

Laryssa Yarova
Department of Translation, Applied and General Linguistics, Volodymyr Vynnychenko
Central Ukrainian State Pedagogical University, Kropyvnytskyi, Ukraine 

Stepan Parashchuk

Department of Informatics and Information Technologies, Volodymyr Vynnychenko
Central Ukrainian State Pedagogical University, Kropyvnytskyi, Ukraine

 

Received:  5/16/2021              Accepted:  7/10/2021             Published: 7/26/2021 

Abstract:
Automated text complexity assessment tools are of great practical value because they take over the time-consuming task of analyzing English informational texts for complexity at the pre-reading stage. The present study describes the application of the automated text analysis system TextEvaluator as an effective tool for analyzing texts along eight dimensions of text complexity: syntactic complexity, academic vocabulary, word unfamiliarity, word concreteness, lexical cohesion, interactive style, level of argumentation, and degree of narrativity, which are then summarized in an overall genre-dependent complexity score. This research examines the complexity dimensions of English informational texts of four genres – legal, linguistic, news, and medical – that are used for teaching reading comprehension to EFL (English as a foreign language) pre-service teachers and translators at universities in Ukraine. The data obtained with the TextEvaluator show that English legal texts are the most difficult for reading comprehension in comparison with linguistic, news, and medical texts, whereas medical texts are the least challenging of the four genres compared. The TextEvaluator has provided insight into the complexity of English informational texts across genres that should prove useful for assembling corpora of reading passages scaled along the specific dimensions of text complexity that predict text difficulty for EFL pre-service teachers and translators.
Keywords: automated complexity assessment, informational texts, text complexity, text complexity indices, the TextEvaluator, EFL pre-service teachers and translators

Cite as: Parashchuk, V., Yarova, L., & Parashchuk, S. (2021). Automated Complexity Assessment of English Informational Texts for EFL Pre-service Teachers and Translators. Arab World English Journal (AWEJ) Special Issue on CALL, (7), 155-164.
DOI: https://dx.doi.org/10.24093/awej/call7.11
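To give a rough sense of how automated indices of this kind can be derived from raw text, the sketch below computes two simple proxy measures in Python: mean sentence length (a crude stand-in for the syntactic complexity dimension) and the share of tokens drawn from an academic word list (a crude stand-in for the academic vocabulary dimension). The word list and function names are illustrative assumptions only; the sketch does not reproduce the TextEvaluator's proprietary feature set or its genre-dependent overall score.

# Illustrative sketch only: simple proxy indices for two of the complexity
# dimensions named in the abstract. The academic word list below is a tiny
# assumed stand-in, not the actual list used by the TextEvaluator.
import re
from statistics import mean

ACADEMIC_WORDS = {"analyze", "assessment", "complexity", "dimension",
                  "cohesion", "corpus", "comprehension"}  # assumption

def split_sentences(text):
    """Naive split on terminal punctuation; real tools use trained segmenters."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def tokenize(text):
    """Lowercase alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def proxy_indices(text):
    """Return crude proxies for syntactic complexity and academic vocabulary."""
    sentences = split_sentences(text)
    words = tokenize(text)
    return {
        # longer sentences roughly correlate with syntactic complexity
        "mean_sentence_length": mean(len(tokenize(s)) for s in sentences),
        # proportion of tokens found in the (assumed) academic word list
        "academic_word_ratio": sum(w in ACADEMIC_WORDS for w in words) / len(words),
    }

if __name__ == "__main__":
    sample = ("The present study describes an automated assessment of text "
              "complexity. Legal texts showed the highest overall complexity.")
    print(proxy_indices(sample))

In practice, a system such as the TextEvaluator combines many such features into component scores before producing an overall complexity value; the proxies above only illustrate the general idea of extracting measurable indices from plain text.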



Valentyna Parashchuk, Ph.D. in Philology, Associate Professor of English as a Foreign Language at the Department of English Language and ELT Methodology, Volodymyr Vynnychenko Central Ukrainian State Pedagogical University, Kropyvnytskyi, Ukraine. Her areas of interest include teaching EFL to pre-service teachers, English phonetics, and intercultural communication.
ORCID: http://orcid.org/0000-0003-4007-4437

Laryssa Yarova, Ph.D. in Pedagogy, Chair of the Department of Translation, Applied and General Linguistics, Volodymyr Vynnychenko Central Ukrainian State Pedagogical University, Kropyvnytskyi, Ukraine. Her areas of interest include teaching EFL to pre-service translators and English for specific purposes (ESP). ORCID: http://orcid.org/0000-0001-6817-1787

Stepan Parashchuk, Ph.D. in Physics and Mathematics, Chair of the Department of Informatics and Information Technologies, Volodymyr Vynnychenko Central Ukrainian State Pedagogical University, Kropyvnytskyi, Ukraine. His areas of interest include the use of information technologies in teacher training programs. ORCID: https://orcid.org/0000-0002-8609-3206