Enhancing Uncertainty Estimation and Interpretability with Bayesian Non-negative Decision Layer

📅 2025-05-28
🏛️ International Conference on Learning Representations
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep neural networks (DNNs) suffer from inaccurate uncertainty estimation and poor interpretability, primarily due to their black-box architecture and interference from irrelevant features. To address this, we propose the Bayesian Non-negative Decision Layer (BNDL), which reformulates a DNN as a conditional Bayesian non-negative factorization model. BNDL is the first framework to jointly incorporate non-negativity, sparsity, and stochastic latent-variable modeling, and it comes with theoretical guarantees of effective feature disentanglement. We further introduce a Weibull variational inference network to enable efficient and robust posterior approximation. Evaluated on multiple benchmark datasets, BNDL achieves significant improvements in predictive accuracy while producing well-calibrated uncertainty estimates. Moreover, it generates human-interpretable decision rationales that are sparse, non-negative, and semantically meaningful, enhancing both the reliability and the transparency of deep learning predictions.
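
To make the factorization concrete, below is a minimal, hypothetical PyTorch sketch of such a decision layer: features are mapped to Weibull shape and scale parameters, a non-negative latent code is sampled by reparameterization, and a non-negatively constrained weight matrix produces the logits. The class and attribute names (`BayesianNonnegDecisionLayer`, `to_k`, `to_lam`) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianNonnegDecisionLayer(nn.Module):
    """Hypothetical sketch: features -> Weibull posterior -> non-negative logits."""

    def __init__(self, feat_dim: int, latent_dim: int, num_classes: int):
        super().__init__()
        self.to_k = nn.Linear(feat_dim, latent_dim)    # Weibull shape k
        self.to_lam = nn.Linear(feat_dim, latent_dim)  # Weibull scale lambda
        # Unconstrained parameter; softplus keeps the decision weights non-negative.
        self.decision = nn.Parameter(0.01 * torch.randn(latent_dim, num_classes))

    def forward(self, h: torch.Tensor):
        k = F.softplus(self.to_k(h)) + 1e-4            # shape > 0
        lam = F.softplus(self.to_lam(h)) + 1e-4        # scale > 0
        # Inverse-CDF reparameterization: z = lam * (-log(1 - u))^(1/k), u ~ U(0,1),
        # so z is non-negative and gradients flow through k and lam.
        u = torch.rand_like(k).clamp(1e-6, 1.0 - 1e-6)
        z = lam * (-torch.log(1.0 - u)).pow(1.0 / k)
        logits = z @ F.softplus(self.decision)         # non-negative factorization
        return logits, (k, lam)
```

Running several stochastic forward passes per input yields a distribution over logits, which is one natural way to read off the calibrated uncertainty the summary describes.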

📝 Abstract
Although deep neural networks have demonstrated significant success owing to their powerful expressiveness, most models struggle to meet practical requirements for uncertainty estimation. At the same time, the entangled nature of deep neural networks undermines interpretability: localized explanation techniques reveal that multiple unrelated features influence a single decision. To address these challenges, we develop a Bayesian Non-negative Decision Layer (BNDL), which reformulates a deep neural network as a conditional Bayesian non-negative factor analysis model. By leveraging stochastic latent variables, BNDL can model complex dependencies and provide robust uncertainty estimation. Moreover, the sparsity and non-negativity of the latent variables encourage the model to learn disentangled representations and decision layers, thereby improving interpretability. We also offer theoretical guarantees that BNDL can achieve effective disentangled learning. In addition, we develop a corresponding variational inference method that uses a Weibull variational inference network to approximate the posterior distribution of the latent variables. Our experimental results demonstrate that, with enhanced disentanglement capabilities, BNDL not only improves accuracy but also provides reliable uncertainty estimation and improved interpretability.
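
The Weibull inference network mentioned in the abstract presumably relies on the standard inverse-CDF reparameterization of the Weibull distribution, which keeps samples non-negative and differentiable with respect to the variational parameters; paired with a Gamma prior, the KL term of the ELBO has a closed form. The sketch below shows these two standard pieces under that assumption; the exact parameterization and prior used in the paper may differ.

```python
import torch

def sample_weibull(k: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
    """Differentiable Weibull(k, lam) sample via the inverse CDF."""
    u = torch.rand_like(k).clamp(1e-6, 1.0 - 1e-6)
    return lam * (-torch.log(1.0 - u)).pow(1.0 / k)

def kl_weibull_gamma(k: torch.Tensor, lam: torch.Tensor,
                     alpha: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    """Closed-form KL( Weibull(k, lam) || Gamma(alpha, beta) ), elementwise."""
    euler = 0.5772156649015329  # Euler-Mascheroni constant
    return (euler * alpha / k
            - alpha * torch.log(lam)
            + torch.log(k)
            + beta * lam * torch.exp(torch.lgamma(1.0 + 1.0 / k))
            - euler - 1.0
            - alpha * torch.log(beta)
            + torch.lgamma(alpha))
```

In a full ELBO, this KL penalty would be subtracted from the expected log-likelihood, and drawing multiple posterior samples at test time yields the uncertainty estimates the abstract refers to.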
Problem

Research questions and friction points this paper is trying to address.

Improving uncertainty estimation in deep neural networks
Enhancing interpretability by disentangling feature influences
Developing Bayesian Non-negative Decision Layer for robust modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian Non-negative Decision Layer for uncertainty estimation
Stochastic latent variables to model complex dependencies
Weibull variational inference network for posterior approximation
🔎 Similar Papers
No similar papers found.
Xinyue Hu
University of Minnesota, Twin Cities
machine learning, networking, smart grids, smart transportation
Zhibin Duan
Assistant Professor at Xi'an Jiaotong University
Bayesian deep learning, probabilistic machine learning
Bo Chen
National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an, 710071, China.
Mingyuan Zhou
McCombs School of Business, The University of Texas at Austin, Austin, TX 78712