🤖 AI Summary
This work addresses the accessibility, explainability, and regulatory-compliance challenges that arise when large language models are integrated into interactive systems serving users with cognitive and sensory impairments. The authors propose a model-driven architecture grounded in SysML v2, leveraging its traceability capabilities to generate accessible user interfaces. By integrating user personas, declarative adaptation rules, and validation-oriented prompt templates, the framework dynamically produces multimodal interfaces compliant with standards such as WCAG 2.2 and EN 301 549. The approach explicitly links user needs, adaptation logic, and regulatory clauses, embodying human-centered AI principles to ensure trustworthy generation. Validated in a healthcare context, the system generated personalized medication instructions, featuring plain language, pictograms, and high-contrast layouts, for users with cognitive and hearing impairments, demonstrating both efficacy and regulatory adherence.
📝 Abstract
The integration of Large Language Models (LLMs) into interactive systems opens new opportunities for adaptive user experiences, yet it also raises challenges regarding accessibility, explainability, and normative compliance. This paper presents an implemented model-driven architecture for generating personalised, multimodal, and accessibility-aligned user interfaces. The approach combines structured user profiles, declarative adaptation rules, and validated prompt templates to refine baseline accessible UI templates, conformant with WCAG 2.2 and EN 301 549, into interfaces tailored to cognitive and sensory support needs. LLMs dynamically transform language complexity, modality, and visual structure, producing outputs such as plain-language text, pictograms, and high-contrast layouts aligned with ISO 24495-1 and W3C COGA guidance. A healthcare use case demonstrates how the system generates accessible post-consultation medication instructions tailored to a user profile comprising a cognitive disability and a hearing impairment. SysML v2 models provide explicit traceability between user needs, adaptation rules, and normative requirements, ensuring explainable and auditable transformations. Grounded in Human-Centered AI (HCAI), the framework incorporates co-design processes and structured feedback mechanisms to guide iterative refinement and support trustworthy generative behaviour.
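The core pipeline the abstract describes, a structured user profile matched against declarative adaptation rules, where each selected adaptation stays traceable to the normative clause that motivates it, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual SysML v2 model: all class names, rule entries, and clause mappings are assumptions chosen to mirror the healthcare use case (cognitive and hearing support needs yielding plain language, pictograms, and high-contrast layouts).

```python
from dataclasses import dataclass

# Hypothetical sketch: identifiers and rule contents are illustrative,
# not taken from the paper's models.

@dataclass(frozen=True)
class UserProfile:
    """Structured user profile listing support needs."""
    needs: frozenset  # e.g. frozenset({"cognitive", "hearing"})

@dataclass(frozen=True)
class AdaptationRule:
    """Declarative rule: a support need triggers a UI transformation,
    justified by one or more normative clauses (traceability link)."""
    trigger: str
    adaptation: str
    normative_refs: tuple

# Illustrative rule base loosely aligned with the standards the paper cites.
RULES = [
    AdaptationRule("cognitive", "plain_language_text", ("ISO 24495-1", "W3C COGA")),
    AdaptationRule("cognitive", "pictograms", ("W3C COGA",)),
    AdaptationRule("cognitive", "high_contrast_layout", ("WCAG 2.2 SC 1.4.3",)),
    AdaptationRule("hearing", "visual_text_alternatives", ("EN 301 549", "WCAG 2.2 SC 1.2.2")),
]

def select_adaptations(profile: UserProfile):
    """Return the adaptations to apply to the baseline UI template,
    each paired with the clauses that motivated it, so every
    transformation remains explainable and auditable."""
    return [(r.adaptation, r.normative_refs)
            for r in RULES if r.trigger in profile.needs]

if __name__ == "__main__":
    profile = UserProfile(needs=frozenset({"cognitive", "hearing"}))
    for adaptation, refs in select_adaptations(profile):
        print(f"{adaptation}  <- justified by: {', '.join(refs)}")
```

In the architecture the abstract outlines, the selected adaptations would then parameterise validated prompt templates sent to the LLM, rather than being applied directly; the explicit (need, rule, clause) triples are what make the generated output auditable against the cited standards.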