🤖 AI Summary
Medical image classification demands both high accuracy and interpretability to earn clinical trust, yet existing graph neural networks (GNNs) operate as black-box models with opaque reasoning. To address this, the authors propose FireGNN, the first neuro-symbolic framework to embed a learnable fuzzy logic system into a GNN and train it end to end. FireGNN models topological features (node degree, clustering coefficient, and label agreement) to generate human-readable symbolic rules. Learnable thresholds and sharpness parameters make the fuzzy rules differentiable, while auxiliary self-supervised tasks (homophily prediction and similarity entropy estimation) assess the quality of the learned topological representations. Evaluated on six medical imaging benchmarks (five MedMNIST subsets and the synthetic MorphoMNIST), FireGNN achieves state-of-the-art accuracy while consistently producing interpretable logical rules, making it the first GNN-based method to unify strong predictive performance with formal, logic-based interpretability.
📝 Abstract
Medical image classification requires not only high predictive performance but also interpretability to ensure clinical trust and adoption. Graph Neural Networks (GNNs) offer a powerful framework for modeling relational structure within datasets; however, standard GNNs often operate as black boxes, limiting transparency and usability, particularly in clinical settings. In this work, we present an interpretable graph-based learning framework named FireGNN that integrates trainable fuzzy rules into GNNs for medical image classification. These rules embed topological descriptors (node degree, clustering coefficient, and label agreement) with learnable thresholds and sharpness parameters, enabling intrinsic symbolic reasoning. Additionally, we explore auxiliary self-supervised tasks (e.g., homophily prediction, similarity entropy) as probes to evaluate the contribution of topological learning. Our fuzzy-rule-enhanced model achieves strong performance across five MedMNIST benchmarks and the synthetic dataset MorphoMNIST, while also generating interpretable rule-based explanations. To our knowledge, this is the first integration of trainable fuzzy rules within a GNN.
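The trainable fuzzy rules described above can be sketched in miniature as follows. This is an illustrative assumption, not the paper's exact formulation: it assumes a sigmoid membership function parameterized by a threshold and a sharpness value, a product t-norm for fuzzy AND, and hypothetical descriptor values; in FireGNN these parameters would be learned by gradient descent alongside the GNN.

```python
import math

def fuzzy_membership(x, threshold, sharpness):
    """Soft predicate "x exceeds threshold": sigmoid(sharpness * (x - threshold)).
    As sharpness grows, this approaches a hard (non-differentiable) threshold;
    both parameters remain differentiable, so they can be learned end to end."""
    return 1.0 / (1.0 + math.exp(-sharpness * (x - threshold)))

def rule_score(degree, label_agreement,
               deg_thresh=4.0, agree_thresh=0.7, sharpness=2.0):
    """Hypothetical rule: "node is well-connected AND its neighbors agree
    on the label". The thresholds and sharpness here are illustrative
    constants standing in for learnable parameters."""
    mu_degree = fuzzy_membership(degree, deg_thresh, sharpness)
    mu_agree = fuzzy_membership(label_agreement, agree_thresh, sharpness)
    return mu_degree * mu_agree  # product t-norm as a differentiable AND

# A high-degree node with strong neighbor agreement fires the rule strongly;
# a sparse, disagreeing node barely fires it.
print(round(rule_score(8, 0.9), 3))   # → 0.598
print(round(rule_score(2, 0.3), 3))
```

Because every operation is smooth, the rule's output can be combined with the GNN's logits and optimized jointly, while the fitted thresholds read back as human-interpretable statements such as "degree above ~4".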