Towards Personalized Explanations for Health Simulations: A Mixed-Methods Framework for Stakeholder-Centric Summarization

📅 2025-09-04
🤖 AI Summary
Health simulation models, such as agent-based models (ABMs), are often too complex for key stakeholders (e.g., clinicians, policymakers, patients) to understand and adopt, and existing large language model (LLM)-generated explanations lack stakeholder specificity, failing to accommodate diverse information needs and linguistic preferences. Method: a stakeholder-centered explanation framework for health simulation that integrates qualitative interviews and quantitative surveys to systematically elicit heterogeneous user requirements, paired with a controllable text generation mechanism that guides LLMs to produce summaries customized in both content and stylistic attributes. Contribution: a multi-dimensional evaluation procedure intended to iteratively improve explanation relevance, readability, and stakeholder acceptance, establishing a reusable methodological foundation for human-AI collaborative explanation in health decision support systems.

๐Ÿ“ Abstract
Modeling & Simulation (M&S) approaches such as agent-based models hold significant potential to support decision-making activities in health, with recent examples including vaccine adoption and a vast literature on healthy eating and physical activity behaviors. These models are potentially usable by different stakeholder groups, as they help policy-makers estimate the consequences of potential interventions and can guide individuals in making healthy choices in complex environments. However, this potential may not be fully realized because of the models' complexity, which makes them inaccessible to the stakeholders who could benefit the most. While Large Language Models (LLMs) can translate simulation outputs and the design of models into text, current approaches typically rely on one-size-fits-all summaries that fail to reflect the varied informational needs and stylistic preferences of clinicians, policymakers, patients, caregivers, and health advocates. This limitation stems from a fundamental gap: we lack a systematic understanding of what these stakeholders need from explanations and how to tailor them accordingly. To address this gap, we present a step-by-step framework to identify stakeholder needs and guide LLMs in generating tailored explanations of health simulations. Our procedure uses a mixed-methods design: first eliciting the explanation needs and stylistic preferences of diverse health stakeholders, then optimizing the ability of LLMs to generate tailored outputs (e.g., via controllable attribute tuning), and finally evaluating through a comprehensive range of metrics to further improve the tailored generation of summaries.
Problem

Research questions and friction points this paper is trying to address.

Tailoring health simulation explanations to diverse stakeholder needs
Overcoming complexity barriers in health model accessibility
Addressing one-size-fits-all limitations in LLM-generated health summaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed-methods framework identifies stakeholder explanation needs
LLMs optimized via controllable attribute tuning for personalization
Comprehensive evaluation metrics improve tailored summary generation
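The controllable attribute tuning mentioned above can be illustrated with a minimal sketch: elicited stakeholder requirements are stored as attribute profiles and injected into the LLM prompt so that both content focus and style are pinned per audience. The profile schema and attribute names (`reading_level`, `tone`, `focus`) are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of stakeholder-conditioned prompt construction.
# The attribute names and profile values are assumptions for illustration;
# the paper elicits the real requirements via interviews and surveys.

ATTRIBUTE_PROFILES = {
    "policymaker": {"reading_level": "professional", "tone": "formal",
                    "focus": "population-level intervention outcomes"},
    "clinician":   {"reading_level": "expert", "tone": "precise",
                    "focus": "clinical implications and model assumptions"},
    "patient":     {"reading_level": "8th grade", "tone": "reassuring",
                    "focus": "what the results mean for personal health choices"},
}

def build_prompt(stakeholder: str, simulation_output: str) -> str:
    """Compose an LLM prompt that fixes content focus and stylistic attributes."""
    profile = ATTRIBUTE_PROFILES[stakeholder]
    return (
        f"Summarize the following health simulation results for a {stakeholder}.\n"
        f"Reading level: {profile['reading_level']}. Tone: {profile['tone']}.\n"
        f"Emphasize: {profile['focus']}.\n\n"
        f"Simulation output:\n{simulation_output}"
    )

if __name__ == "__main__":
    print(build_prompt("patient", "Vaccination uptake rose 12% under scenario B."))
```

In a full system, the prompt would be sent to an LLM and the resulting summary scored against the paper's evaluation metrics (relevance, readability, acceptance) to refine the profiles iteratively.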