🤖 AI Summary
To address the limited interpretability of deep learning models in melanoma diagnosis—which hinders clinical adoption—this paper proposes a cross-modal interpretable diagnostic framework. Methodologically, it introduces a dual-projection-head contrastive learning mechanism that explicitly aligns clinically relevant dermatoscopic criteria (e.g., asymmetry, border irregularity, color variation) with visual features extracted by a Vision Transformer; it further integrates a large language model to generate structured textual diagnostic reports. The contributions include: (1) achieving high diagnostic accuracy (92.79% accuracy, 0.961 AUC) while ensuring decision transparency; (2) significantly improving multiple quantitative interpretability metrics (e.g., faithfulness, plausibility); and (3) demonstrating strong alignment between model-generated visual attributions and dermatologists’ clinical judgments. This work establishes a new paradigm for AI-assisted dermatological diagnosis that balances robust performance with clinical trustworthiness.
📝 Abstract
Deep learning has demonstrated expert-level performance in melanoma classification, positioning it as a powerful tool in clinical dermatology. However, model opacity and the lack of interpretability remain critical barriers to clinical adoption, as clinicians often struggle to trust the decision-making processes of black-box models. To address this gap, we present a Cross-modal Explainable Framework for Melanoma (CEFM) that leverages contrastive learning as the core mechanism for achieving interpretability. Specifically, CEFM maps clinical criteria for melanoma diagnosis, namely Asymmetry, Border, and Color (ABC), into the Vision Transformer embedding space using dual projection heads, thereby aligning clinical semantics with visual features. The aligned representations are subsequently translated into structured textual explanations via natural language generation, creating a transparent link between raw image data and clinical interpretation. Experiments on public datasets demonstrate 92.79% accuracy and an AUC of 0.961, along with significant improvements across multiple interpretability metrics. Qualitative analyses further show that the spatial arrangement of the learned embeddings aligns with clinicians' application of the ABC rule, effectively bridging the gap between high-performance classification and clinical trust.
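To make the dual-projection-head alignment concrete, here is a minimal sketch, not the authors' code: ViT image features and ABC-criterion text features are mapped by separate linear heads into a shared embedding space and pulled together with a symmetric InfoNCE contrastive loss. All dimensions, variable names, and the choice of a plain linear head are illustrative assumptions; the paper's actual heads, encoders, and hyperparameters may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: ViT feature dim, text-encoder dim, shared space, batch.
D_IMG, D_TXT, D_SHARED, BATCH = 768, 512, 128, 4

# Dual projection heads: one linear map per modality into the shared space.
W_img = rng.normal(scale=0.02, size=(D_IMG, D_SHARED))
W_txt = rng.normal(scale=0.02, size=(D_TXT, D_SHARED))

def l2norm(x):
    # Unit-normalize so similarities are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(img_emb, txt_emb, tau=0.07):
    """Symmetric InfoNCE: matching image/criterion pairs lie on the diagonal."""
    logits = (img_emb @ txt_emb.T) / tau
    idx = np.arange(logits.shape[0])
    def ce(lg):  # cross-entropy with diagonal (matched-pair) targets
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()
    return 0.5 * (ce(logits) + ce(logits.T))

# Stand-ins for pooled ViT features and ABC-criterion text features.
vit_feats = rng.normal(size=(BATCH, D_IMG))
abc_feats = rng.normal(size=(BATCH, D_TXT))

img_emb = l2norm(vit_feats @ W_img)   # image projection head
txt_emb = l2norm(abc_feats @ W_txt)   # clinical-criterion projection head

loss = info_nce(img_emb, txt_emb)
```

Minimizing this loss drives each lesion's visual embedding toward the embedding of its matching ABC description and away from the others in the batch, which is what lets the shared space carry clinical semantics.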