Graph Concept Bottleneck Models

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing concept bottleneck models (CBMs) assume concepts are conditionally independent, ignoring their intrinsic semantic dependencies and thereby limiting both interpretability and robustness. To address this, the paper proposes Graph CBM, which explicitly models structural dependencies among latent concepts via a learned concept graph. Methodologically, Graph CBM integrates a graph neural network into an end-to-end CBM framework, jointly optimizing concept representations and graph topology to enable precise concept-level interventions and counterfactual reasoning. Evaluated on multiple image classification benchmarks, Graph CBM achieves consistent accuracy gains (average +1.8%) over baseline CBMs, yields interpretable hierarchical concept structures, and generalizes across diverse model architectures. The core contribution is the first formulation of implicit concept relationships as a learnable latent graph structure, bridging the gap between the restrictive independence assumption of conventional CBMs and the rich, interdependent semantics of real-world concepts.
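The pipeline the summary describes (backbone features to concept activations, one round of message passing over a learned concept graph, then a label head) can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: all dimensions, the random stand-in parameters (`W_concept`, `A`, `W_label`), and the single-step propagation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 backbone features, 4 concepts, 3 classes.
n_features, n_concepts, n_classes = 8, 4, 3

# Stand-ins for parameters that the paper trains end-to-end.
W_concept = rng.normal(size=(n_features, n_concepts))  # features -> concept logits
A = rng.uniform(size=(n_concepts, n_concepts))         # learned latent concept graph
A = (A + A.T) / 2                                      # symmetric soft adjacency (sketch choice)
W_label = rng.normal(size=(n_concepts, n_classes))     # concepts -> class logits

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_cbm_forward(features):
    """Features -> concepts -> graph propagation -> class logits."""
    c = sigmoid(features @ W_concept)           # concept activations in (0, 1)
    A_norm = A / A.sum(axis=1, keepdims=True)   # row-normalize for averaging
    # One message-passing step: each concept is refined by its neighbours.
    c_refined = sigmoid(c @ A_norm)
    return c_refined @ W_label

x = rng.normal(size=(n_features,))
logits = graph_cbm_forward(x)
print(logits.shape)
```

In a vanilla CBM the refinement step is absent, so concepts never exchange information; here the adjacency `A` would be learned jointly with `W_concept` and `W_label`.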

📝 Abstract
Concept Bottleneck Models (CBMs) provide explicit interpretations for deep neural networks through concepts and allow interventions on concepts to adjust final predictions. Existing CBMs assume concepts are conditionally independent given labels and isolated from each other, ignoring the hidden relationships among concepts. However, the set of concepts in a CBM often has an intrinsic structure in which concepts are correlated: changing one concept inherently affects its related concepts. To mitigate this limitation, we propose Graph CBMs, a new variant of CBM that captures concept relationships by constructing latent concept graphs, which can be combined with CBMs to enhance model performance while retaining their interpretability. Our experimental results on real-world image classification tasks demonstrate that Graph CBMs (1) achieve superior performance in image classification while providing more concept-structure information for interpretability; (2) can use latent concept graphs for more effective interventions; and (3) remain robust across different training and architecture settings.
Problem

Research questions and friction points this paper is trying to address.

Modeling concept relationships in interpretable neural networks
Addressing conditional independence assumption in concept bottleneck models
Enhancing interpretability and intervention via latent concept graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph CBMs incorporate latent concept graphs
Enhance interpretability with concept relationships
Improve intervention effectiveness via structured concepts
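The intervention point above can be made concrete with a small numpy sketch. In a plain CBM, setting one concept to its true value changes only that concept; with a latent concept graph, one propagation step spreads the correction to correlated concepts. The adjacency matrix and concept values below are random illustrative stand-ins, not learned quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_concepts = 4

# Stand-in latent concept graph (row-normalized soft adjacency).
A = rng.uniform(size=(n_concepts, n_concepts))
A = A / A.sum(axis=1, keepdims=True)

c = rng.uniform(size=n_concepts)  # predicted concept activations

# Intervention: an expert clamps concept 0 to its true value (1.0).
c_fixed = c.copy()
c_fixed[0] = 1.0

# Graph propagation spreads the correction to related concepts;
# without the graph, only concept 0 would change.
c_before = c @ A
c_propagated = c_fixed @ A
```

Because every concept in `c_propagated` receives a share of the correction to concept 0, a single intervention can adjust the whole concept vector, which is the mechanism behind the "more effective interventions" claim.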