🤖 AI Summary
Medical Lay Language Generation (MLLG) faces the challenge of simultaneously preserving semantic fidelity and accommodating stylistic diversity under multi-source heterogeneous data. To address this, we propose an asymmetric LoRA architecture that decouples semantic representation from stylistic control via a shared low-rank matrix A and multiple task-specific matrices B. We introduce a semantic invariance constraint to ensure medical accuracy and design a recommendation-guided switching mechanism for personalized lay-style adaptation. The method supports external prompting interfaces to enhance controllability. Experiments on three real-world medical datasets demonstrate that our approach significantly outperforms standard prompting, vanilla LoRA, and its variants, achieving higher semantic accuracy while maintaining high readability. Moreover, it reduces trainable parameters by 31.66%, offering both computational efficiency and practical applicability.
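For intuition, here is a minimal PyTorch sketch of the asymmetric LoRA idea summarized above: one shared low-rank matrix A paired with multiple style-specific matrices B, selected by an index at inference time. The class and parameter names (`AsymmetricLoRALinear`, `style_id`, `num_styles`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AsymmetricLoRALinear(nn.Module):
    """Linear layer augmented with a shared low-rank matrix A and
    multiple style-specific matrices B (one per lay style)."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, num_styles: int = 3, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained projection, standing in for an LLM weight.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Shared A: trained on all sources, carries the common semantics.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        # Isolated B matrices: one per lay style, zero-initialized as in LoRA.
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, rank))
             for _ in range(num_styles)]
        )
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor, style_id: int) -> torch.Tensor:
        # y = W x + (alpha / r) * B_{style} A x
        delta = (x @ self.lora_A.T) @ self.lora_B[style_id].T
        return self.base(x) + self.scaling * delta


# Route the same input through two different lay styles.
layer = AsymmetricLoRALinear(in_features=768, out_features=768)
x = torch.randn(4, 768)
y_style0 = layer(x, style_id=0)
y_style1 = layer(x, style_id=1)
```

Because only one A is trained while each style gets its own B, the trainable-parameter count grows sublinearly with the number of styles, which is consistent with the reported 31.66% parameter reduction over maintaining separate full LoRA adapters.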
📝 Abstract
Medical Lay Language Generation (MLLG) plays a vital role in improving the accessibility of complex scientific content for broader audiences. Recent work on MLLG commonly employs parameter-efficient fine-tuning methods such as Low-Rank Adaptation (LoRA) to adapt large language models (LLMs) using paired expert-lay language datasets. However, LoRA struggles with the challenges posed by multi-source heterogeneous MLLG datasets. Specifically, through a series of exploratory experiments, we reveal that standard LoRA fails to meet the requirements of semantic fidelity and diverse lay-style generation in the MLLG task. To address these limitations, we propose Magical, an asymmetric LoRA architecture tailored for MLLG under heterogeneous data scenarios. Magical employs a shared matrix $A$ for abstractive summarization, along with multiple isolated matrices $B$ for diverse lay-style generation. To preserve semantic fidelity during lay language generation, Magical introduces a Semantic Invariance Constraint that mitigates semantic subspace shifts on matrix $A$. Furthermore, to better adapt to diverse lay-style generation, Magical incorporates the Recommendation-guided Switch, an external interface that prompts the LLM to switch between the different matrices $B$. Experimental results on three real-world lay language generation datasets demonstrate that Magical consistently outperforms prompt-based methods, vanilla LoRA, and its recent variants, while also reducing trainable parameters by 31.66%.
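The abstract does not spell out the exact form of the Semantic Invariance Constraint; one plausible instantiation, sketched below, penalizes drift of the shared matrix $A$ away from a frozen reference snapshot so that the semantic subspace it spans stays stable during style-specific training. The function name and the Frobenius-norm penalty are assumptions for illustration only.

```python
import torch

def semantic_invariance_penalty(lora_A: torch.Tensor,
                                ref_A: torch.Tensor,
                                lam: float = 0.1) -> torch.Tensor:
    # Penalize drift of the shared matrix A from a frozen reference
    # snapshot, discouraging shifts of the semantic subspace it spans.
    # (Hypothetical form; the paper's actual constraint may differ.)
    return lam * torch.linalg.matrix_norm(lora_A - ref_A, ord="fro") ** 2


# Hypothetical usage: snapshot A (e.g., after a warm-up phase), then add
# the penalty to the language-modeling loss during style fine-tuning.
ref_A = layer.lora_A.detach().clone()
# total_loss = lm_loss + semantic_invariance_penalty(layer.lora_A, ref_A)
```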