AI Summary
Quantifying predictive uncertainty in neural networks remains a core challenge in Bayesian deep learning. This paper proposes a lightweight empirical Bayesian approach, last layer empirical Bayes (LLEB): it introduces a learnable normalizing-flow prior over the final layer's weights only and performs inference by maximizing the evidence lower bound (ELBO). Restricting the flow to the last layer keeps inference tractable, and the authors show that the strength of the induced prior interpolates between standard Bayesian neural networks and deep ensembles. Experiments demonstrate that the method achieves uncertainty estimation performance on par with existing approaches while reducing computational overhead. The key contribution lies in motivating and validating last-layer empirical Bayes, suggesting it as a promising and efficient direction for uncertainty quantification.
Abstract
The task of quantifying the inherent uncertainty associated with neural network predictions is a key challenge in artificial intelligence. Bayesian neural networks (BNNs) and deep ensembles are among the most prominent approaches to tackle this task. Both approaches produce predictions by computing an expectation of neural network outputs over some distribution on the corresponding weights; this distribution is given by the posterior in the case of BNNs, and by a mixture of point masses for ensembles. Inspired by recent work showing that the distribution used by ensembles can be understood as a posterior corresponding to a learned data-dependent prior, we propose last layer empirical Bayes (LLEB). LLEB instantiates a learnable prior as a normalizing flow, which is then trained to maximize the evidence lower bound; to retain tractability we use the flow only on the last layer. We show why LLEB is well motivated, and how it interpolates between standard BNNs and ensembles in terms of the strength of the prior that they use. LLEB performs on par with existing approaches, highlighting that empirical Bayes is a promising direction for future research in uncertainty quantification.
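The abstract describes training a learnable prior, instantiated as a normalizing flow over the last-layer weights, by maximizing the ELBO. The sketch below is a minimal illustration of that idea under simplifying assumptions not taken from the paper: a frozen feature extractor, a Gaussian variational posterior, a single affine flow as the prior, and a Monte Carlo ELBO estimate for a toy regression problem. The paper's actual flow family, architecture, and training procedure are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): features phi(x) from a frozen backbone,
# with a Bayesian treatment of the last layer's weights w.
D, N = 3, 8                      # last-layer weight dim, number of data points
X = rng.normal(size=(N, D))      # features phi(x)
y = rng.normal(size=N)           # regression targets

# Variational posterior q(w) = N(mu, diag(sigma^2)).
mu = np.zeros(D)
log_sigma = np.zeros(D)

# Learnable prior p(w): a single affine flow, w = a * z + b with z ~ N(0, I),
# whose density is the Gaussian N(b, diag(a^2)). A real normalizing flow
# would stack richer invertible transforms; this is the simplest instance.
a = np.ones(D)
b = np.zeros(D)

def log_normal(w, mean, std):
    """Log-density of a diagonal Gaussian, summed over dimensions."""
    return np.sum(-0.5 * ((w - mean) / std) ** 2
                  - np.log(std) - 0.5 * np.log(2 * np.pi))

def elbo_estimate(num_samples=256, noise_std=1.0):
    """Monte Carlo estimate of ELBO = E_q[log p(y|w) + log p(w) - log q(w)]."""
    sigma = np.exp(log_sigma)
    total = 0.0
    for _ in range(num_samples):
        w = mu + sigma * rng.normal(size=D)        # reparameterized sample
        log_lik = log_normal(y, X @ w, noise_std * np.ones(N))
        log_prior = log_normal(w, b, np.abs(a))    # flow-prior density
        log_q = log_normal(w, mu, sigma)
        total += log_lik + log_prior - log_q
    return total / num_samples

print(elbo_estimate())
```

Maximizing this estimate jointly over the posterior parameters (`mu`, `log_sigma`) and the prior parameters (`a`, `b`) is the empirical Bayes step: the prior is learned from the data rather than fixed, which is what lets the method interpolate between a standard BNN (strong fixed prior) and an ensemble-like regime (weak, data-driven prior).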