Evaluating GenAI for Simplifying Texts for Education: Improving Accuracy and Consistency for Enhanced Readability

📅 2025-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses critical information loss and grade-level misalignment in cross-grade educational text simplification (e.g., from Grade 12 to Grades 8/6/4). We propose the first generalizable, multidimensional evaluation framework for this task. Methodologically, it integrates structured prompt engineering, a multi-agent collaborative simplification architecture, and a semantic consistency quantification model, assessed via three complementary metrics: word-count reduction ratio, target-grade readability alignment, and keyword-level semantic fidelity. Attribution analysis employs one-sample t-tests and multiple regression. Key contributions include: (i) the first joint measurement of accuracy and consistency with systematic bottleneck attribution; (ii) identification of an inherent performance trade-off specifically at Grade 4 simplification; and (iii) empirical characterization of cross-dimensional performance variations across LLMs and prompting strategies. These findings establish both theoretical foundations and practical technical pathways for controllable, pedagogically grounded text simplification.
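The three metrics named above (word-count reduction ratio, target-grade readability alignment, keyword-level semantic fidelity) can be sketched with simple heuristics. The code below is an illustrative approximation, not the paper's implementation: it uses the standard Flesch-Kincaid grade-level formula as a readability proxy, a crude vowel-group syllable counter, and plain substring matching in place of the paper's semantic consistency model. All function names are hypothetical.

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group heuristic, not a dictionary lookup."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(len(groups), 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

def word_reduction_ratio(original: str, simplified: str) -> float:
    """Fraction of words removed by simplification."""
    ow, sw = len(original.split()), len(simplified.split())
    return (ow - sw) / ow

def keyword_retention(simplified: str, keywords: list[str]) -> float:
    """Fraction of keywords still present (substring match stands in
    for the paper's semantic-similarity measure)."""
    text = simplified.lower()
    return sum(1 for k in keywords if k.lower() in text) / len(keywords)
```

Grade-level alignment for a passage targeted at Grade 4 would then be measured as the gap between `fk_grade(simplified)` and 4.0, averaged over the sixty passages.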

Technology Category

Application Category

📝 Abstract
Generative artificial intelligence (GenAI) holds great promise as a tool to support personalized learning. Teachers need tools to efficiently and effectively enhance the readability of educational texts so that content matches individual students' reading levels while retaining key details. Large Language Models (LLMs) show potential to fill this need, but previous research notes multiple shortcomings in current approaches. In this study, we introduced a generalized approach and metrics for systematically evaluating the accuracy and consistency with which LLMs, prompting techniques, and a novel multi-agent architecture simplify sixty informational reading passages, reducing each from the twelfth-grade level down to the eighth-, sixth-, and fourth-grade levels. We calculated the degree to which each LLM and prompting technique accurately achieved the targeted grade level for each passage, the percentage change in word count, and the consistency with which keywords and key phrases were maintained (semantic similarity). One-sample t-tests and multiple regression models revealed significant differences in the best-performing LLM and prompting technique for each of the four metrics. Both LLMs and prompting techniques demonstrated variable utility in grade-level accuracy and in the consistency of keywords and key phrases when leveling content down to the fourth-grade reading level. These results demonstrate the promise of LLMs for efficient and precise automated text simplification, the shortcomings of current models and prompting methods in attaining an ideal balance across evaluation criteria, and a generalizable method for evaluating future systems.
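The one-sample t-test used in the attribution analysis checks whether the mean achieved grade level across passages differs from the target grade. A minimal stdlib-only sketch (the function name and setup are illustrative; the paper presumably used a statistics package):

```python
from math import sqrt
from statistics import mean

def one_sample_t(values: list[float], mu0: float) -> float:
    """t statistic for H0: population mean == mu0,
    using the sample standard deviation (n-1 denominator)."""
    n = len(values)
    m = mean(values)
    s = sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    return (m - mu0) / (s / sqrt(n))
```

For example, passing the per-passage Flesch-Kincaid grades of sixty Grade-4 simplifications with `mu0=4.0` tests whether a model systematically over- or under-shoots the target level.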
Problem

Research questions and friction points this paper is trying to address.

Generative AI
Large Language Models
Educational Material Simplification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative AI
Large Language Models
Personalized Learning Materials
Stephanie L. Day
University of Central Florida
Jacapo Cirica
University of Central Florida
Steven R. Clapp
University of Central Florida
Veronika Penkova
University of Central Florida
Amy E. Giroux
University of Central Florida
Abbey Banta
University of Central Florida
Catherine Bordeau
University of Central Florida
Poojitha Mutteneni
University of Central Florida
Ben D. Sawyer
Associate Professor of Industrial Engineering, University of Central Florida
human factors · human-machine interaction · readability · applied neuroscience · artificial intelligence