Dirichlet Scale Mixture Priors for Bayesian Neural Networks

πŸ“… 2026-02-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of designing priors for Bayesian neural networks that simultaneously promote sparsity, robustness, and predictive performance while mitigating the cold posterior effect. The authors propose a Dirichlet Scale Mixture (DSM) prior whose heavy tails and structured, sparsity-inducing shrinkage enable implicit feature selection and parameter reduction. This approach alleviates the cold posterior problem and enhances model robustness and prunability, particularly in small-sample regimes and with correlated data. Empirical evaluations on both synthetic and real-world datasets demonstrate that the DSM prior achieves competitive predictive accuracy while substantially reducing the effective number of parameters and improving robustness against adversarial attacks, offering a principled alternative for sparse prior specification in Bayesian neural networks.
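The page carries no code, so the sketch below is only a rough illustration of the kind of construction the name "Dirichlet scale mixture" suggests: a Dirichlet-distributed vector phi divides a global variance budget tau^2 across the weights, so a small concentration alpha puts most of the scale on a few coordinates and shrinks the rest towards zero. The function name, the alpha/tau parameterisation, and the n*phi scaling are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dsm_weights(n_weights, alpha=0.1, tau=1.0, rng=rng):
    """Draw weights from an illustrative Dirichlet scale mixture prior.

    phi ~ Dirichlet(alpha * 1_n) allocates a global variance budget tau^2
    across the weights; small alpha concentrates mass on a few coordinates,
    giving heavy-tailed, sparsity-inducing shrinkage. The n * phi rescaling
    keeps the average per-weight variance at tau^2 regardless of n.
    """
    phi = rng.dirichlet(np.full(n_weights, alpha))   # scale allocation over weights
    scales = tau * np.sqrt(n_weights * phi)          # per-weight standard deviations
    return rng.normal(0.0, scales)                   # w_j ~ N(0, tau^2 * n * phi_j)

w = sample_dsm_weights(1000, alpha=0.05)
print(f"fraction of |w| < 1e-3: {(np.abs(w) < 1e-3).mean():.2f}")
```

With alpha this small, most sampled weights are essentially zero while a handful are large, which is the qualitative behaviour the summary attributes to the DSM prior (implicit feature selection and few effective parameters).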

πŸ“ Abstract
Neural networks are the cornerstone of modern machine learning, yet they can be difficult to interpret, tend to give overconfident predictions, and are vulnerable to adversarial attacks. Bayesian neural networks (BNNs) alleviate some of these limitations, but have problems of their own. The key step of specifying prior distributions in BNNs is not a trivial task, yet it is often skipped for convenience. In this work, we propose a new class of prior distributions for BNNs, the Dirichlet scale mixture (DSM) prior, that addresses current limitations in Bayesian neural networks through structured, sparsity-inducing shrinkage. Theoretically, we derive general dependence structures and shrinkage results for DSM priors and show how they manifest under the geometry induced by neural networks. In experiments on simulated and real-world data we find that DSM priors encourage sparse networks through implicit feature selection, are robust under adversarial attacks, and deliver competitive predictive performance with substantially fewer effective parameters. Their advantages are most pronounced in correlated, moderately small data regimes, and the resulting networks are more amenable to weight pruning. Moreover, by adopting heavy-tailed shrinkage mechanisms, our approach aligns with recent findings that such priors can mitigate the cold posterior effect, offering a principled alternative to the commonly used Gaussian priors.
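On the abstract's claims of "substantially fewer effective parameters" and amenability to weight pruning: one common way to make such claims concrete is to count the weights whose posterior means survive a magnitude threshold, since shrinkage priors drive most posterior means towards zero. The sketch below is a generic magnitude-based proxy under assumed conventions; the paper's own pruning criterion and effective-parameter measure may differ.

```python
import numpy as np

def effective_parameters(w_samples, threshold=1e-2):
    """Crude effective-parameter count from posterior weight samples.

    w_samples: array of shape (n_posterior_draws, n_weights), e.g. draws
    from an MCMC or variational approximation of the BNN posterior.
    A weight counts as 'active' if its posterior mean magnitude exceeds
    the pruning threshold; under heavy-tailed shrinkage priors this count
    typically falls well below the raw weight count.
    """
    post_mean = np.abs(w_samples.mean(axis=0))
    active = post_mean > threshold
    return int(active.sum()), active  # count, plus a mask usable for pruning

# Hypothetical usage with posterior draws from a fitted BNN:
# n_eff, mask = effective_parameters(draws)
# pruned_weights = draws.mean(axis=0) * mask
```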
Problem

Research questions and friction points this paper is trying to address.

Bayesian neural networks
prior specification
sparsity
adversarial robustness
cold posterior effect
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dirichlet scale mixture
Bayesian neural networks
sparsity-inducing shrinkage
heavy-tailed priors
adversarial robustness
August Arnstad
Department of Statistics & Data Science, University of Oslo
Leiv RΓΈnneberg
Department of Statistics & Data Science, University of Oslo
Geir Storvik
Professor in Statistics, University of Oslo
Statistical computing · spatio-temporal processes · statistical ecology