🤖 AI Summary
Medical AI is often hindered by data scarcity, which limits the effective training of conventional Mixture-of-Experts (MoE) models and impedes the integration of clinical expertise. To address this challenge, this work proposes the DKGH-MoE module, which, for the first time, incorporates clinical priors—such as physicians’ gaze patterns—into the MoE architecture in a plug-and-play and interpretable manner. The resulting hybrid expert system combines a data-driven network that learns general features with a prior-guided network informed by eye-tracking trajectories to focus on regions of high diagnostic value. Evaluated across multiple medical imaging tasks, the proposed method significantly improves diagnostic performance while enhancing model interpretability, thereby demonstrating the effectiveness and necessity of synergistically integrating domain knowledge with data-driven learning.
📝 Abstract
Mixture-of-Experts (MoE) models increase representational capacity at modest computational cost, but their effectiveness in specialized domains such as medicine is limited by small datasets. In contrast, clinical practice offers rich expert knowledge, such as physician gaze patterns and diagnostic heuristics, that models cannot reliably learn from limited data. Combining data-driven experts, which capture novel patterns, with domain-expert-guided experts, which encode accumulated clinical insight, provides complementary strengths for robust and clinically meaningful learning. To this end, we propose Domain-Knowledge-Guided Hybrid MoE (DKGH-MoE), a plug-and-play and interpretable module that unifies data-driven learning with domain expertise. DKGH-MoE integrates a data-driven MoE, which extracts novel features from raw imaging data, with a domain-expert-guided MoE, which incorporates clinical priors, specifically clinician eye-gaze cues, to emphasize regions of high diagnostic relevance. By fusing these two branches, DKGH-MoE improves both diagnostic performance and interpretability.
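To make the two-branch idea concrete, here is a minimal NumPy sketch of a hybrid MoE in the spirit described above. All names (`HybridMoE`, `prior_expert`, the scalar fusion weight `alpha`) and the exact fusion scheme are assumptions for illustration, not the paper's actual implementation: a learned softmax gate routes features to data-driven experts, while a separate prior-guided expert sees features reweighted by a clinician gaze heatmap.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class HybridMoE:
    """Illustrative sketch (not the paper's code) of a hybrid MoE:
    a data-driven MoE branch plus a gaze-prior-guided branch."""

    def __init__(self, dim, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        # Learned gate and per-expert linear maps for the data-driven branch.
        self.gate_w = rng.standard_normal((dim, n_experts)) * 0.02
        self.experts = rng.standard_normal((n_experts, dim, dim)) * 0.02
        # One dedicated expert for the prior-guided branch.
        self.prior_expert = rng.standard_normal((dim, dim)) * 0.02
        self.alpha = 0.5  # fusion weight between branches (assumed scalar)

    def __call__(self, feats, gaze):
        # feats: (N, dim) patch/token features; gaze: (N,) heatmap values
        # giving each region's clinician-gaze weight.
        gates = softmax(feats @ self.gate_w)                    # (N, E)
        expert_out = np.einsum('nd,edk->nek', feats, self.experts)
        data_out = np.einsum('ne,nek->nk', gates, expert_out)   # gated mix
        # Prior branch: normalize gaze weights, emphasize attended regions.
        w = gaze / (gaze.sum() + 1e-8)
        prior_out = (w[:, None] * feats) @ self.prior_expert
        return self.alpha * data_out + (1 - self.alpha) * prior_out

# Example: 16 image patches with 8-dim features and a synthetic gaze map.
moe = HybridMoE(dim=8, n_experts=4)
rng = np.random.default_rng(1)
patches = rng.standard_normal((16, 8))
gaze_map = np.abs(rng.standard_normal(16))
fused = moe(patches, gaze_map)   # (16, 8) fused representation
```

In this sketch the gaze heatmap acts purely as a fixed prior that reweights the input to one expert; the paper's module may instead inject gaze cues into the routing itself or use multiple prior-guided experts.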