Deep Ensembles Secretly Perform Empirical Bayes

📅 2025-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the theoretical foundations of deep ensembles for uncertainty quantification in neural networks. While deep ensembles empirically outperform Bayesian neural networks (BNNs), their success lacks a unified theoretical explanation. We establish, for the first time, that deep ensembles are theoretically equivalent to empirical Bayes inference: they implicitly learn a data-dependent prior—a mixture of point masses—thereby approximating Bayesian model averaging. This equivalence unifies deep ensembles and BNNs under a coherent Bayesian framework and reveals that their superior uncertainty estimation stems from nonparametric, adaptive prior modeling. Our result provides the first rigorous Bayesian interpretation of ensemble methods and inspires new uncertainty estimation paradigms that jointly ensure interpretability and robustness.

📝 Abstract
Quantifying uncertainty in neural networks is a highly relevant problem which is essential to many applications. The two predominant paradigms to tackle this task are Bayesian neural networks (BNNs) and deep ensembles. Despite some similarities between these two approaches, they are typically surmised to lack a formal connection and are thus understood as fundamentally different. BNNs are often touted as more principled due to their reliance on the Bayesian paradigm, whereas ensembles are perceived as more ad hoc; yet, deep ensembles tend to empirically outperform BNNs, with no satisfying explanation as to why this is the case. In this work we bridge this gap by showing that deep ensembles perform exact Bayesian averaging with a posterior obtained from an implicitly learned data-dependent prior. In other words, deep ensembles are Bayesian, or more specifically, they implement an empirical Bayes procedure wherein the prior is learned from the data. This perspective offers two main benefits: (i) it theoretically justifies deep ensembles and thus provides an explanation for their strong empirical performance; and (ii) inspection of the learned prior reveals it is given by a mixture of point masses -- the use of such a strong prior helps elucidate observed phenomena about ensembles. Overall, our work delivers a newfound understanding of deep ensembles which is not only of interest in and of itself, but which is also likely to generate future insights that drive empirical improvements for these models.
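The abstract's central claim can be illustrated mechanically: averaging the predictive distributions of independently trained ensemble members is exactly Bayesian model averaging when the posterior is a uniform mixture of point masses on the trained parameters. The sketch below is a minimal illustration of that averaging step, not the paper's method; the toy linear-softmax "members" stand in for trained neural networks, and all names (`ensemble_predict`, `members`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-ins for M independently trained networks: each weight matrix
# theta_m plays the role of one point mass in the learned prior/posterior.
M, n_features, n_classes = 5, 3, 4
members = [rng.normal(size=(n_features, n_classes)) for _ in range(M)]

def ensemble_predict(x):
    # Bayesian model averaging under a uniform mixture of point masses:
    #   p(y | x) = (1/M) * sum_m p(y | x, theta_m)
    probs = np.stack([softmax(x @ theta_m) for theta_m in members])
    return probs.mean(axis=0)

x = rng.normal(size=(n_features,))
p = ensemble_predict(x)  # averaged predictive distribution; sums to 1
```

The averaged distribution is typically higher-entropy than any single member's output on inputs where the members disagree, which is one mechanical reading of why ensembles yield better-calibrated uncertainty.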
Problem

Research questions and friction points this paper addresses.

Deep Ensembles
Neural Network Uncertainty
Bayesian Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Ensembles
Bayesian Methods
Prior Knowledge Learning