🤖 AI Summary
Large language models (LLMs) struggle to adapt open-ended generation to users' heterogeneous cognitive abilities, producing misalignment in both knowledge complexity and expressive style. To address this cognitive mismatch, we propose the Cognitive-Level Alignment Framework (CLAF), the first systematic approach to the problem. CLAF introduces a capability-aware retrieval module, driven by a hierarchical knowledge graph, for dynamically matching knowledge complexity, and combines Bloom's taxonomy with preference learning to model and optimize expressive style. By jointly controlling knowledge and style during generation, CLAF ensures output consistency and per-user adaptability. Experiments on SCALE, a newly constructed multi-level comprehension benchmark, show that CLAF significantly improves content comprehensibility and informativeness, enabling personalized generation across diverse cognitive levels in real-world applications.
📝 Abstract
Large Language Models (LLMs) have demonstrated strong performance in open-ended generation tasks. However, they often struggle to adapt content to users with differing cognitive capacities, leading to a phenomenon we term cognitive misalignment. This issue arises in two forms: knowledge-level misalignment, where content is too complex or too simplistic relative to user understanding, and presentation-style misalignment, where the structure or tone hinders effective comprehension. To address these challenges, we propose the Cognitive-Level Alignment Framework (CLAF), a general-purpose generation framework that aligns both knowledge complexity and presentation style with user cognition. CLAF integrates a capability-aware retrieval module based on a hierarchical knowledge graph and a style optimization module guided by Bloom's taxonomy and preference learning. Additionally, a knowledge-controllable generation component ensures consistency and relevance throughout the output. To support training and evaluation, we construct SCALE, a cognitively annotated dataset containing responses at multiple comprehension levels per query. Empirical results show that CLAF enhances the adaptability and informativeness of LLM outputs across a range of user profiles, offering a robust solution to cognitive-level alignment in real-world applications.
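To make the two-stage idea in the abstract concrete, here is a minimal sketch of what capability-aware retrieval plus Bloom-guided style conditioning could look like in practice. Everything below is an illustrative assumption: the knowledge entries, the level-to-style mapping, and the `build_prompt` helper are hypothetical stand-ins, not the paper's actual CLAF implementation.

```python
# Hypothetical sketch of a CLAF-style pipeline (illustrative only):
# (1) capability-aware retrieval keeps facts whose difficulty fits the
#     user's cognitive level, and
# (2) a style directive (loosely inspired by Bloom's taxonomy) shapes
#     the generation prompt.

# Toy hierarchical knowledge base: each entry carries a difficulty level.
KNOWLEDGE = [
    {"fact": "Water is H2O.", "level": 1},
    {"fact": "Hydrogen bonding gives water a high boiling point.", "level": 2},
    {"fact": "Water's dielectric constant screens ionic interactions.", "level": 3},
]

# Assumed mapping from cognitive level to a presentation-style directive.
STYLE = {
    1: "Explain with simple words and a concrete everyday example.",
    2: "Explain with basic scientific terms and a short analogy.",
    3: "Explain with precise terminology and quantitative detail.",
}

def build_prompt(query: str, user_level: int) -> str:
    """Retrieve level-matched facts and compose a style-aligned prompt."""
    # Retrieval step: drop facts that exceed the user's level.
    facts = [k["fact"] for k in KNOWLEDGE if k["level"] <= user_level]
    context = " ".join(facts)
    # Style step: prepend the level-appropriate directive.
    return f"{STYLE[user_level]}\nContext: {context}\nQuestion: {query}"

print(build_prompt("Why does water boil at 100 C?", 1))
```

In a full system, the prompt would be handed to an LLM and the dataset-driven modules (hierarchical knowledge graph, preference learning) would replace the hard-coded lists, but the control points are the same: what knowledge enters the context, and what style directive governs the output.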