🤖 AI Summary
To address the lack of global interpretability in high-dimensional multi-label medical code prediction, this paper proposes a mechanism-level interpretable modeling framework. First, dictionary learning decomposes dense embeddings into sparse, semantically explicit medical concept primitives. Second, a dictionary-label attention mechanism explicitly models the alignment between labels and the learned concepts. Third, an LLM-driven automated pipeline discovers medical concepts without supervision, inducing thousands of human-interpretable concepts with no manual annotation. Human evaluations show the sparse embeddings are at least 50% more understandable than their dense counterparts, while the method maintains competitive predictive performance and scalability. By combining global interpretability, label-level transparency, and concept-label alignment, the approach strengthens trust in clinical decision support.
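The dictionary-learning step described above can be sketched as a small sparse autoencoder: an encoder maps a dense embedding to nonnegative activations over a larger dictionary (most entries zero, each nonzero one a "concept primitive"), and a decoder reconstructs the dense embedding from them. The sizes, weights, and ReLU encoder below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_feats = 8, 32  # toy sizes: dense dim, dictionary size (assumptions)

# Hypothetical learned weights of a sparse autoencoder
W_enc = rng.normal(0, 0.1, (n_feats, d_model))
b_enc = np.zeros(n_feats)
W_dec = rng.normal(0, 0.1, (d_model, n_feats))

def to_sparse(x):
    """Map a dense embedding to sparse dictionary-feature activations."""
    return np.maximum(W_enc @ x + b_enc, 0.0)  # ReLU zeroes out most features

def reconstruct(h):
    """Decode sparse activations back into the dense embedding space."""
    return W_dec @ h

x = rng.normal(size=d_model)  # a dense token embedding
h = to_sparse(x)              # sparse, nonnegative concept activations
x_hat = reconstruct(h)        # approximate dense reconstruction
```

In training, a reconstruction loss plus a sparsity penalty (e.g. an L1 term on `h`) would push each dictionary feature toward a distinct, reusable concept; here the weights are random, so only the shapes and sparsity pattern are meaningful.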
📝 Abstract
Predicting high-dimensional or extreme multilabels, such as in medical coding, requires both accuracy and interpretability. Existing works often rely on local interpretability methods, failing to provide a comprehensive explanation of the overall mechanism behind each label prediction within a multilabel set. We propose a mechanistic interpretability module called DIctionary Label Attention (DILA) that disentangles uninterpretable dense embeddings into a sparse embedding space, where each nonzero element (a dictionary feature) represents a globally learned medical concept. Through human evaluations, we show that our sparse embeddings are at least 50 percent more human understandable than their dense counterparts. Our automated dictionary feature identification pipeline, leveraging large language models (LLMs), uncovers thousands of learned medical concepts by examining and summarizing the highest activating tokens for each dictionary feature. We represent the relationships between dictionary features and medical codes through a sparse interpretable matrix, enhancing the mechanistic and global understanding of the model's predictions while maintaining competitive performance and scalability without extensive human annotation.
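One plausible sketch of the label-attention idea, under assumed toy dimensions and not the paper's exact formulation: each medical code attends over token representations in the sparse feature space, and a label-by-feature weight matrix (kept sparse in the real model) plays the role of the interpretable concept-code map.

```python
import numpy as np

rng = np.random.default_rng(1)

n_tokens, n_feats, n_labels = 10, 32, 5  # toy sizes (assumptions)

# Sparse, nonnegative dictionary-feature activations per token
H = np.maximum(rng.normal(size=(n_tokens, n_feats)), 0)
# Hypothetical label-feature weight matrix (sparse in the real model)
L = rng.normal(size=(n_labels, n_feats))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

scores = H @ L.T                # (tokens, labels): token-label affinities
attn = softmax(scores, axis=0)  # each label's attention over tokens
label_repr = attn.T @ H         # (labels, feats): per-label feature summary
logits = (label_repr * L).sum(axis=1)  # one prediction score per label
```

Because both `H` and (in the real model) `L` are sparse, inspecting the few nonzero entries behind each logit gives a global, mechanistic account of which learned concepts drove each code prediction.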