Deep Out-of-Distribution Uncertainty Quantification via Weight Entropy Maximization

📅 2023-09-27
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Out-of-distribution (OOD) detection and uncertainty quantification remain challenging in deep learning: existing Bayesian and ensemble methods often lack weight diversity, which degrades OOD identification and calibration. Method: The paper adopts the maximum entropy principle for the weight distribution. Specifically, it (i) makes entropy maximization over the weights a core objective for uncertainty modeling, (ii) designs an SVD-guided stochastic weight parameterization that increases weight entropy at a small cost in empirical risk, and (iii) derives a practical variational optimization of the resulting trade-off. Contribution/Results: On an extensive OOD detection benchmark against more than thirty competing methods, the approach ranks among the top three in every configuration, improving both OOD detection and predictive uncertainty calibration.
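The trade-off at the heart of the method can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a factorized Gaussian variational weight distribution, and the layer name `StochasticLinear`, the number of Monte Carlo samples, and the trade-off weight `lam` are placeholders.

```python
# Minimal sketch of an entropy-regularized objective: minimize the average
# empirical risk over sampled weights minus lam * H(q), where q is a
# variational weight distribution. All names and scales are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticLinear(nn.Module):
    """Linear layer with a factorized Gaussian weight distribution."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.01)
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        # Reparameterization trick: w = mu + sigma * eps
        eps = torch.randn_like(self.mu)
        w = self.mu + self.log_sigma.exp() * eps
        return F.linear(x, w)

    def entropy(self):
        # Entropy of a factorized Gaussian: sum of log sigma plus a constant
        c = 0.5 * math.log(2.0 * math.pi * math.e)
        return (self.log_sigma + c).sum()

layer = StochasticLinear(10, 2)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
lam = 1e-3  # trade-off between empirical risk and weight entropy

# Monte Carlo estimate of the average empirical risk over weight samples
risk = torch.stack([F.cross_entropy(layer(x), y) for _ in range(4)]).mean()
loss = risk - lam * layer.entropy()  # low risk, high weight entropy
loss.backward()
```

The entropy term spreads the sampled weights apart, while the averaged risk keeps the resulting networks consistent with the training observations.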
📝 Abstract
This paper deals with uncertainty quantification and out-of-distribution detection in deep learning using Bayesian and ensemble methods. It proposes a practical solution to the lack of prediction diversity observed recently for standard approaches when used out-of-distribution (Ovadia et al., 2019; Liu et al., 2021). Considering that this issue is mainly related to a lack of weight diversity, we claim that standard methods sample in "over-restricted" regions of the weight space due to the use of "over-regularization" processes, such as weight decay and zero-mean centered Gaussian priors. We propose to solve the problem by adopting the maximum entropy principle for the weight distribution, with the underlying idea to maximize the weight diversity. Under this paradigm, the epistemic uncertainty is described by the weight distribution of maximal entropy that produces neural networks "consistent" with the training observations. Considering stochastic neural networks, a practical optimization is derived to build such a distribution, defined as a trade-off between the average empirical risk and the weight distribution entropy. We develop a novel weight parameterization for the stochastic model, based on the singular value decomposition of the neural network's hidden representations, which enables a large increase of the weight entropy for a small empirical risk penalization. We provide both theoretical and numerical results to assess the efficiency of the approach. In particular, the proposed algorithm appears in the top three best methods in all configurations of an extensive out-of-distribution detection benchmark including more than thirty competitors.
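To see why an SVD-guided parameterization can buy entropy cheaply, the following self-contained sketch (synthetic activations and illustrative shapes, not the paper's implementation) places large weight noise along activation directions with small singular values and checks that the layer output barely moves.

```python
# Hedged illustration of the SVD-guided intuition: weight perturbations
# confined to activation directions with small singular values barely
# change the layer output, so the weight distribution can gain entropy
# at little empirical-risk cost. All shapes and scales are assumptions.
import torch

torch.manual_seed(0)

# Synthetic hidden representations with a decaying spectrum, mimicking
# the low effective rank typically observed in trained networks.
A = torch.randn(256, 64)
U0, _, V0t = torch.linalg.svd(A, full_matrices=False)
S0 = torch.logspace(1, -3, 64)         # singular values from 10 to 1e-3
H = U0 @ torch.diag(S0) @ V0t          # (batch, features)

W = torch.randn(32, 64) * 0.1          # weight of the downstream layer

# Right-singular vectors of H span the activation feature directions.
_, S, Vt = torch.linalg.svd(H, full_matrices=False)
k = 32
V_small = Vt[-k:]                      # k directions with the smallest S

# Large noise confined to the low-singular-value subspace (noise is
# (out_features, k); here out_features happens to equal k).
noise = torch.randn(32, k)
W_perturbed = W + noise @ V_small

delta_out = H @ (W_perturbed - W).T
print("weight perturbation norm:", (noise @ V_small).norm().item())
print("output change norm:     ", delta_out.norm().item())
print("baseline output norm:   ", (H @ W.T).norm().item())
```

The perturbation norm is large while the output change stays small, because H maps the trailing right-singular directions through near-zero singular values; noise variance placed there raises entropy almost for free.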
Problem

Research questions and friction points this paper is trying to address.

Uncertainty Handling
Novelty Detection
Diversity in Predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter Diversification
Weight Entropy Maximization
Novelty Detection in Deep Learning
Antoine de Mathelin
ENS Paris-Saclay PhD Student
Machine Learning
François Deheeger
Manufacture Française des pneumatiques Michelin, Clermont-Ferrand, 63000, France
Mathilde Mougeot
Full Professor at ENSIIE & Researcher at Borelli Center, ENS Paris-Saclay
Data science
Machine learning
Nicolas Vayatis
Centre Borelli, Université Paris-Saclay, CNRS, ENS Paris-Saclay, Gif-sur-Yvette, 91190, France