Many Ways to be Right: Rashomon Sets for Concept-Based Neural Networks

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
The Rashomon effect—where multiple high-accuracy deep neural networks rely on distinct features or reasoning paths—remains difficult to uncover systematically. Method: This paper proposes the Rashomon Concept Bottleneck Model (RCBM) framework, which introduces lightweight concept adapters and a diversity-regularized loss atop a shared backbone to jointly optimize multiple concept-based inference paths. Contribution/Results: RCBM is the first approach to enable data-driven, post-hoc modeling of inference diversity in deep models without retraining from scratch, yielding an ensemble of models with comparable accuracy but markedly divergent, human-interpretable concept dependencies. Experiments demonstrate that the generated model set preserves high predictive performance while substantially enhancing interpretability and auditability—establishing a new paradigm for robustness analysis, fairness evaluation, and trustworthy AI deployment.

📝 Abstract
Modern neural networks rarely have a single way to be right. For many tasks, multiple models can achieve identical performance while relying on different features or reasoning patterns, a property known as the Rashomon Effect. However, uncovering this diversity in deep architectures is challenging as their continuous parameter spaces contain countless near-optimal solutions that are numerically distinct but often behaviorally similar. We introduce Rashomon Concept Bottleneck Models, a framework that learns multiple neural networks which are all accurate yet reason through distinct human-understandable concepts. By combining lightweight adapter modules with a diversity-regularized training objective, our method constructs a diverse set of deep concept-based models efficiently without retraining from scratch. The resulting networks provide fundamentally different reasoning processes for the same predictions, revealing how concept reliance and decision making vary across equally performing solutions. Our framework enables systematic exploration of data-driven reasoning diversity in deep models, offering a new mechanism for auditing, comparison, and alignment across equally accurate solutions.
Problem

Research questions and friction points this paper is trying to address.

Uncovering diverse reasoning patterns in equally accurate neural networks
Learning multiple concept-based models with distinct human-understandable explanations
Systematically exploring reasoning diversity across equally performing deep learning solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diverse accurate models using distinct human-understandable concepts
Lightweight adapter modules with diversity-regularized training objective
Efficient construction of multiple concept-based models without retraining
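The adapter-plus-diversity idea above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' actual objective: the shapes, the linear concept adapters, and the pairwise cosine-similarity penalty are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: n samples, d backbone features, c concepts, K adapters.
n, d, c, K = 32, 16, 4, 3
z = rng.normal(size=(n, d))      # shared (frozen) backbone features
W = rng.normal(size=(K, d, c))   # one lightweight linear adapter per inference path

def concept_scores(W, z):
    """Concept activations for each adapter: shape (K, n, c)."""
    return np.einsum('nd,kdc->knc', z, W)

def diversity_penalty(W):
    """Mean pairwise cosine similarity between flattened adapter weights.
    Subtracting this from the task loss pushes adapters toward
    distinct concept dependencies; lower means more diverse."""
    flat = W.reshape(W.shape[0], -1)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = flat @ flat.T
    off_diag = sim[~np.eye(W.shape[0], dtype=bool)]
    return off_diag.mean()

# A combined objective would look like: task_loss + lambda * diversity_penalty(W),
# with each adapter trained for accuracy while the penalty keeps the set diverse.
scores = concept_scores(W, z)
penalty = diversity_penalty(W)
```

Because only the small adapter weights `W` are trained while the backbone stays frozen, each additional model in the set costs far less than retraining from scratch, which is the efficiency claim the abstract makes.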