FireGNN: Neuro-Symbolic Graph Neural Networks with Trainable Fuzzy Rules for Interpretable Medical Image Classification

📅 2025-09-02

🤖 AI Summary
Medical image classification demands both high accuracy and interpretability to earn clinical trust, yet existing graph neural networks (GNNs) operate as black boxes with opaque reasoning. To address this, the authors propose FireGNN, a neuro-symbolic framework that embeds a trainable fuzzy logic system into a GNN and optimizes it end to end. FireGNN builds rules over topological features (node degree, clustering coefficient, and label agreement) to generate human-readable symbolic explanations. Learnable thresholds and sharpness parameters make the fuzzy rules differentiable, while auxiliary self-supervised tasks (homophily prediction and similarity entropy estimation) assess the quality of the learned topological representations. Evaluated on six medical image benchmarks (five MedMNIST subsets and the synthetic MorphoMNIST), FireGNN achieves strong accuracy while consistently producing interpretable logical rules. The authors position it as the first GNN to integrate trainable fuzzy rules, combining high predictive performance with rule-based interpretability.

📝 Abstract
Medical image classification requires not only high predictive performance but also interpretability to ensure clinical trust and adoption. Graph Neural Networks (GNNs) offer a powerful framework for modeling relational structures within datasets; however, standard GNNs often operate as black boxes, limiting transparency and usability, particularly in clinical settings. In this work, we present an interpretable graph-based learning framework named FireGNN that integrates trainable fuzzy rules into GNNs for medical image classification. These rules embed topological descriptors (node degree, clustering coefficient, and label agreement) using learnable thresholds and sharpness parameters to enable intrinsic symbolic reasoning. Additionally, we explore auxiliary self-supervised tasks (e.g., homophily prediction, similarity entropy) as a benchmark to evaluate the contribution of topological learning. Our fuzzy-rule-enhanced model achieves strong performance across five MedMNIST benchmarks and the synthetic dataset MorphoMNIST, while also generating interpretable rule-based explanations. To our knowledge, this is the first integration of trainable fuzzy rules within a GNN.
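The abstract's core mechanism, fuzzy rules with learnable thresholds and sharpness parameters, can be sketched as sigmoid membership functions whose parameters would be trained by gradient descent. The snippet below is a minimal illustration only; the function names, parameter values, and the choice of product t-norm for fuzzy AND are assumptions, not the authors' implementation:

```python
import numpy as np

def fuzzy_membership(x, threshold, sharpness):
    """Soft "x is high" predicate: a sigmoid centred at a learnable
    threshold, with learnable sharpness controlling how crisp the rule is.
    Because it is smooth, both parameters are differentiable end to end."""
    return 1.0 / (1.0 + np.exp(-sharpness * (x - threshold)))

# Hypothetical rule: IF degree is high AND clustering is high THEN fire.
# Fuzzy AND is taken here as the product t-norm (an assumption).
degree, clustering = 6.0, 0.45
mu_degree = fuzzy_membership(degree, threshold=4.0, sharpness=1.5)
mu_clust = fuzzy_membership(clustering, threshold=0.3, sharpness=10.0)
rule_activation = mu_degree * mu_clust  # lies in (0, 1), smooth everywhere
```

A large sharpness value makes the rule behave almost like a hard threshold, while a small one keeps it fuzzy; letting the model learn both is what makes the symbolic rules trainable.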
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability in medical image classification using GNNs
Integrating trainable fuzzy rules for symbolic reasoning in neural networks
Addressing black-box limitations of standard GNNs in clinical settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates trainable fuzzy rules into GNNs
Uses topological descriptors with learnable parameters
Generates interpretable rule-based explanations for classification
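The topological descriptors these rules operate on are standard per-node graph statistics. A small dataset-agnostic sketch in pure Python (the paper computes these on graphs derived from medical images; the toy graph below is illustrative only):

```python
# Per-node topological descriptors used by the fuzzy rules (a sketch).
def node_degree(adj, v):
    """Number of neighbours of node v."""
    return len(adj[v])

def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def label_agreement(adj, v, labels):
    """Fraction of v's neighbours sharing v's label (local homophily)."""
    nbrs = adj[v]
    if not nbrs:
        return 0.0
    return sum(1 for u in nbrs if labels[u] == labels[v]) / len(nbrs)

# Toy undirected graph: edges 0-1, 0-2, 1-2, 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
labels = {0: "A", 1: "A", 2: "A", 3: "B"}
deg = node_degree(adj, 2)            # 3
cc = clustering_coefficient(adj, 2)  # 1 of 3 neighbour pairs linked
agree = label_agreement(adj, 2, labels)  # 2 of 3 neighbours share "A"
```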
Prajit Sengupta
Imperial College London
Graph Neural Networks · Deep Learning · Computer Vision · NLP · AI4Health
I. Rekik
BASIRA Lab, Department of Computing, Imperial College London