Toward Human-Centered Readability Evaluation

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing NLP text simplification metrics (e.g., BLEU, FKGL, SARI) emphasize surface-level linguistic features and fail to capture human-centered qualities of health communication—clarity, credibility, tone appropriateness, cultural relevance, and actionability—that are critical in high-stakes health contexts. To address this gap, the authors propose HCRS (Human-Centered Readability Score), a framework that systematically integrates human-computer interaction principles with health communication theory. HCRS defines a five-dimensional evaluation scheme and combines automated metrics with structured human feedback to enable context-sensitive, co-created assessment. The authors describe how HCRS fits into participatory evaluation workflows and present a protocol for empirically validating its alignment with users' needs and its generalizability across health communication contexts.

📝 Abstract
Text simplification is essential for making public health information accessible to diverse populations, including those with limited health literacy. However, commonly used evaluation metrics in Natural Language Processing (NLP), such as BLEU, FKGL, and SARI, mainly capture surface-level features and fail to account for human-centered qualities like clarity, trustworthiness, tone, cultural relevance, and actionability. This limitation is particularly critical in high-stakes health contexts, where communication must be not only simple but also usable, respectful, and trustworthy. To address this gap, we propose the Human-Centered Readability Score (HCRS), a five-dimensional evaluation framework grounded in Human-Computer Interaction (HCI) and health communication research. HCRS integrates automatic measures with structured human feedback to capture the relational and contextual aspects of readability. We outline the framework, discuss its integration into participatory evaluation workflows, and present a protocol for empirical validation. This work aims to advance the evaluation of health text simplification beyond surface metrics, enabling NLP systems that align more closely with diverse users' needs, expectations, and lived experiences.
Problem

Research questions and friction points this paper is trying to address.

Current readability metrics ignore human-centered qualities in health communication
Existing NLP evaluation methods fail to capture contextual and relational aspects
Health text simplification needs assessment beyond surface-level linguistic features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Human-Centered Readability Score framework
Integrates automatic measures with human feedback
Captures relational and contextual readability aspects
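The paper does not publish a scoring formula, but the core idea—blending an automatic readability signal with structured human ratings across five dimensions—can be sketched as follows. The dimension names come from the abstract; the weighting scheme, rating scale, and grade-level normalization below are illustrative assumptions, not the authors' method. The FKGL formula itself is the standard published one.

```python
# Hypothetical sketch of an HCRS-style composite score. The five dimensions
# are taken from the paper's abstract; alpha, the 1-5 rating scale, and the
# grade-level normalization are assumptions for illustration only.

DIMENSIONS = ("clarity", "trustworthiness", "tone",
              "cultural_relevance", "actionability")

def fkgl(total_words: int, total_sentences: int, total_syllables: int) -> float:
    """Flesch-Kincaid Grade Level (standard formula): an automatic,
    surface-level readability measure of the kind the paper critiques."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)

def hcrs_score(human_ratings: dict, grade_level: float, alpha: float = 0.7) -> float:
    """Blend mean human dimension ratings (assumed 1-5 scale) with a
    normalized automatic readability signal into a single 0-1 score.

    alpha weights the human component; the paper argues human-centered
    qualities should dominate, so alpha > 0.5 here by assumption.
    """
    missing = [d for d in DIMENSIONS if d not in human_ratings]
    if missing:
        raise ValueError(f"missing dimension ratings: {missing}")
    # Normalize mean human rating from [1, 5] onto [0, 1].
    human = sum(human_ratings[d] for d in DIMENSIONS) / (len(DIMENSIONS) * 5.0)
    # Map grade level onto [0, 1]: grade 5 or below -> 1.0, grade 16+ -> 0.0.
    auto = min(max((16.0 - grade_level) / 11.0, 0.0), 1.0)
    return alpha * human + (1.0 - alpha) * auto
```

A text rated highly on all five human dimensions but with a high (hard-to-read) grade level would still lose points from the automatic component, reflecting the framework's claim that neither signal alone suffices.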