Controlling Difficulty of Generated Text for AI-Assisted Language Learning

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) often generate text exceeding the linguistic capacity of CEFR A1–A2 language learners, limiting their pedagogical utility. Method: We propose a fine-tuning-free, modular controllable generation framework that integrates prompt engineering with Future Discriminators to enable fine-grained difficulty control. Contribution/Results: We introduce Token Miss Rate (TMR), a novel metric quantifying the proportion of tokens exceeding a learner's lexical proficiency, which correlates strongly with human evaluation (*r* > 0.85). We empirically demonstrate, for the first time in a non-fine-tuning setting, that Future Discriminators significantly enhance text comprehensibility: in Japanese language learning, comprehensibility improves from 40.4% to 84.3%. We publicly release code, models, annotation tools, and datasets, establishing a reproducible, extensible technical foundation and empirical evidence for AI-assisted language education.

📝 Abstract
Practicing conversations with large language models (LLMs) presents a promising alternative to traditional in-person language learning. However, most LLMs generate text at a near-native level of complexity, making them ill-suited for beginner learners (CEFR: A1-A2). In this paper, we investigate whether controllable generation techniques, specifically modular methods that do not require model fine-tuning, can adapt LLM outputs to better support absolute beginners. We evaluate these methods through both automatic metrics and a user study with university-level learners of Japanese. Our findings show that while prompting alone fails to control output difficulty, the use of future discriminators (Yang and Klein, 2021) significantly improves output comprehensibility (from 40.4% to 84.3%). We further introduce a novel token-level evaluation metric, Token Miss Rate (TMR), that quantifies the proportion of incomprehensible tokens per utterance and correlates strongly with human judgments. To support future research in AI-assisted language learning, we release our code, models, annotation tools, and dataset.
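As defined in the abstract, TMR is the fraction of tokens in an utterance that fall outside the learner's known vocabulary. A minimal sketch of that computation (the function name, tokenization, and vocabulary list here are illustrative assumptions, not the paper's released implementation):

```python
def token_miss_rate(tokens, known_vocab):
    """Token Miss Rate (TMR): fraction of an utterance's tokens that lie
    outside the learner's known vocabulary. Sketch of the metric as
    described in the abstract; tokenization and vocabulary are assumed."""
    if not tokens:
        return 0.0
    misses = sum(1 for t in tokens if t not in known_vocab)
    return misses / len(tokens)

# Hypothetical A1-level vocabulary and a sample tokenized utterance.
known = {"i", "like", "to", "eat", "sushi"}
utterance = ["i", "like", "to", "consume", "sashimi"]
print(token_miss_rate(utterance, known))  # 2 of 5 tokens unknown -> 0.4
```

A lower TMR indicates an utterance closer to the learner's lexical level; the paper reports that this token-level score correlates strongly with human comprehensibility judgments.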
Problem

Research questions and friction points this paper is trying to address.

Adapting LLM outputs for beginner language learners
Controlling text difficulty without model fine-tuning
Evaluating comprehensibility with Token Miss Rate metric
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular controllable generation without fine-tuning
Future discriminators improve text comprehensibility
Token Miss Rate metric for difficulty evaluation
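The future-discriminator approach (Yang and Klein, 2021) steers decoding without fine-tuning by reweighting the base LM's next-token distribution with a discriminator's estimate that the desired attribute (here, text a beginner can understand) will hold if a candidate token is chosen. A toy sketch under stated assumptions; the toy probabilities, `weight` parameter, and function name are illustrative, not the paper's code:

```python
import math

def fudge_rerank(lm_logprobs, disc_logprobs, weight=1.0):
    """FUDGE-style decoding step: add the discriminator's attribute
    log-probability to the base LM's next-token log-probability for each
    candidate token, then renormalize. All inputs are toy values."""
    scores = {tok: lm_logprobs[tok] + weight * disc_logprobs.get(tok, 0.0)
              for tok in lm_logprobs}
    # Renormalize combined scores into a proper distribution.
    z = math.log(sum(math.exp(s) for s in scores.values()))
    return {tok: s - z for tok, s in scores.items()}

# Base LM slightly prefers the harder word, but the difficulty
# discriminator strongly prefers the easier one.
lm = {"consume": math.log(0.6), "eat": math.log(0.4)}
disc = {"consume": math.log(0.1), "eat": math.log(0.9)}
probs = fudge_rerank(lm, disc)
# After reweighting, the easier word "eat" outranks "consume".
```

Because the discriminator operates on the LM's output distribution at each step, the method is modular: the base model stays frozen, matching the paper's fine-tuning-free setting.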
Meiqing Jin
University of Pennsylvania
Liam Dugan
PhD Student, University of Pennsylvania
Christopher Callison-Burch
University of Pennsylvania