🤖 AI Summary
Concept Bottleneck Models (CBMs) predict predefined concepts but do not enforce a true information bottleneck: the ability to predict a concept does not imply concept-exclusive encoding, which undermines interpretability and the reliability of interventions.
Method: We identify this fundamental limitation and propose the Minimal Concept Bottleneck Model (MCBM), which constrains each latent variable to retain only the minimal sufficient information for its associated concept via a variational information bottleneck regularizer.
Contribution/Results: MCBM is the first CBM framework to provide theoretical guarantees for concept interventions; it also remains consistent with Bayesian principles and preserves architectural flexibility. Empirical evaluation across multiple benchmarks demonstrates significant improvements in concept specificity, intervention robustness, and model interpretability. These results underscore the role of strict information constraints in building trustworthy concept-based models.
📝 Abstract
Deep learning representations are often difficult to interpret, which can hinder their deployment in sensitive applications. Concept Bottleneck Models (CBMs) have emerged as a promising approach to mitigate this issue by learning representations that support target task performance while ensuring that each component predicts a concrete concept from a predefined set. In this work, we argue that CBMs do not impose a true bottleneck: the fact that a component can predict a concept does not guarantee that it encodes only information about that concept. This shortcoming raises concerns regarding interpretability and the validity of intervention procedures. To overcome this limitation, we propose Minimal Concept Bottleneck Models (MCBMs), which incorporate an Information Bottleneck (IB) objective to constrain each representation component to retain only the information relevant to its corresponding concept. This IB is implemented via a variational regularization term added to the training loss. As a result, MCBMs support concept-level interventions with theoretical guarantees, remain consistent with Bayesian principles, and offer greater flexibility in key design choices.
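To make the variational regularization idea concrete, here is a minimal sketch of a per-concept information bottleneck penalty. This is an illustrative reconstruction, not the paper's implementation: the function names, the Gaussian encoder with a standard-normal prior, and the weight `beta` are all assumptions. The idea shown is that each latent component is trained to predict its concept (cross-entropy term) while a KL term toward the prior compresses away any extra information.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent
    # dimensions for each sample. This is the standard VIB compression term.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def vib_concept_loss(concept_logits, concept_labels, mu, logvar, beta=1e-3):
    # Concept prediction term: binary cross-entropy between each latent
    # component's prediction and its assigned concept label.
    p = 1.0 / (1.0 + np.exp(-concept_logits))
    eps = 1e-8  # numerical stability for the logs
    bce = -np.mean(concept_labels * np.log(p + eps)
                   + (1.0 - concept_labels) * np.log(1.0 - p + eps))
    # Compression term: penalize information the latent carries beyond
    # what is needed to predict the concept.
    kl = np.mean(gaussian_kl(mu, logvar))
    return bce + beta * kl

# Toy usage: a batch of 2 samples with 3 concepts each.
logits = np.zeros((2, 3))          # uninformative predictions (p = 0.5)
labels = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0]])
mu, logvar = np.zeros((2, 3)), np.zeros((2, 3))  # latents already at the prior
loss = vib_concept_loss(logits, labels, mu, logvar)
```

With the latents exactly at the prior, the KL term vanishes and the loss reduces to the cross-entropy alone; training then trades concept accuracy against compression through `beta`.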