🤖 AI Summary
Existing subgraph-based GNN explanation methods overemphasize local structures while neglecting long-range dependencies; graph coarsening approaches improve global interpretability but suffer from fixed granularity, limiting adaptability to multi-scale real-world tasks. To address this, the authors propose the Tree-like Interpretable Framework (TIF), which constructs a multi-granularity hierarchical tree: it iteratively applies graph coarsening to generate cross-scale tree nodes, uses a graph perturbation module to keep sibling branches diverse, and introduces an adaptive routing mechanism to dynamically select the most informative root-to-leaf reasoning paths. TIF relaxes rigid granularity constraints, enabling synergistic interpretation of both local subgraphs and global topology. Evaluated on synthetic and real-world graph classification benchmarks, TIF significantly improves explanation quality while maintaining prediction accuracy comparable to state-of-the-art models.
📝 Abstract
Interpretable Graph Neural Networks (GNNs) aim to reveal the underlying reasoning behind model predictions, attributing their decisions to specific subgraphs that are informative. However, existing subgraph-based interpretable methods suffer from an overemphasis on local structure, potentially overlooking long-range dependencies within the entire graph. Although recent efforts that rely on graph coarsening have proven beneficial for global interpretability, they inevitably reduce the graphs to a fixed granularity. Such inflexibility can only capture graph connectivity at a specific level, whereas real-world graph tasks often exhibit relationships at varying granularities (e.g., relevant interactions in proteins span from functional groups, to amino acids, and up to protein domains). In this paper, we introduce a novel Tree-like Interpretable Framework (TIF) for graph classification, where plain GNNs are transformed into hierarchical trees, with each level featuring coarsened graphs of different granularity as tree nodes. Specifically, TIF iteratively adopts a graph coarsening module to compress original graphs (i.e., root nodes of trees) into increasingly coarser ones (i.e., child nodes of trees), while preserving diversity among tree nodes within different branches through a dedicated graph perturbation module. Finally, we propose an adaptive routing module to identify the most informative root-to-leaf paths, providing not only the final prediction but also multi-granular interpretability for the decision-making process. Extensive experiments on graph classification benchmarks with both synthetic and real-world datasets demonstrate the superiority of TIF in interpretability, while also delivering prediction performance competitive with state-of-the-art counterparts.
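The three-module pipeline described above (iterative coarsening into a tree, perturbation for branch diversity, adaptive root-to-leaf routing) can be sketched in a toy form. This is a minimal illustration based only on the abstract, not the paper's actual implementation: here a "graph" is reduced to a list of node features, `coarsen` merges adjacent pairs, `perturb` injects noise to diversify sibling branches, and `route` is a stand-in greedy scorer for the adaptive routing module. All function names and the scoring rule are assumptions for illustration.

```python
import random

random.seed(0)

def coarsen(feats):
    """Toy coarsening: average adjacent feature pairs (halves graph size)."""
    return [(feats[i] + feats[i + 1]) / 2 for i in range(0, len(feats) - 1, 2)]

def perturb(feats, eps=0.1):
    """Toy branch perturbation: small noise keeps sibling branches diverse."""
    return [f + random.uniform(-eps, eps) for f in feats]

def build_tree(feats, depth, branching=2):
    """Root holds the original graph; each level holds coarser versions."""
    node = {"feats": feats, "children": []}
    if depth > 0 and len(feats) > 1:
        coarse = coarsen(feats)
        node["children"] = [build_tree(perturb(coarse), depth - 1, branching)
                            for _ in range(branching)]
    return node

def route(node, score=lambda f: sum(f)):
    """Stand-in for adaptive routing: greedily follow the best-scoring child,
    yielding one root-to-leaf path across granularities."""
    path = [node]
    while node["children"]:
        node = max(node["children"], key=lambda c: score(c["feats"]))
        path.append(node)
    return path

tree = build_tree([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], depth=3)
path = route(tree)
print([len(n["feats"]) for n in path])  # graph size shrinks level by level: [8, 4, 2, 1]
```

The selected path exposes the same decision at several granularities at once: the root shows the full graph, intermediate nodes show progressively coarser summaries, and the leaf carries the most compressed view used for the final prediction.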