AI Summary
Deep learning uncertainty quantification is often restricted to output-layer approximations, failing to capture uncertainty across the entire network. To address this, we propose RegVar, a novel method that treats sensitivity to regularization as an uncertainty signal. RegVar quantifies how perturbations of the regularization strength affect predictions by measuring the gradient responses of *all* layer parameters, enabling end-to-end, scalable uncertainty calibration. Theoretically, we prove that RegVar converges exactly to the Laplace approximation in the infinitesimal limit, bridging a critical gap between scalability and precision in large-scale Bayesian deep learning. Empirically, RegVar scales linearly with network size and significantly improves uncertainty calibration across diverse architectures and datasets. Moreover, it reveals the stability of learned representations and enhances the reliability of downstream decision-making.
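For orientation, the Laplace approximation that RegVar recovers in the limit is, in its standard linearized form (the notation here is generic and may differ from the paper's):

\[
\operatorname{Var}\!\left[f(x)\right] \;\approx\; \nabla_{\theta} f(x;\hat{\theta})^{\top}\, H^{-1}\, \nabla_{\theta} f(x;\hat{\theta}),
\qquad
H \;=\; \nabla_{\theta}^{2}\,\mathcal{L}(\hat{\theta}) \;+\; \lambda I,
\]

where \(\hat{\theta}\) is the minimizer of the regularized loss and \(\lambda\) the L2 regularization strength. Intuitively, parameter directions with small curvature (small eigenvalues of \(H\)) are both poorly determined by the data and highly responsive to changes in \(\lambda\); this is the link between regularization sensitivity and predictive uncertainty that the summary describes.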
Abstract
Uncertainty quantification in deep learning is crucial for safe and reliable decision-making in downstream tasks. Existing methods quantify uncertainty at the last layer or through other approximations of the network, which may miss sources of uncertainty in the model. To address this gap, we propose an uncertainty quantification method for large networks based on variation due to regularization. Essentially, predictions that are more (less) sensitive to the regularization of network parameters are less (more, respectively) certain. This principle can be implemented by deterministically tweaking the training loss during the fine-tuning phase, and it reflects confidence in the output as a function of all layers of the network. We show that regularization variation (RegVar) provides rigorous uncertainty estimates that, in the infinitesimal limit, exactly recover the Laplace approximation in Bayesian deep learning. We demonstrate its success in several deep learning architectures, showing that it scales tractably with network size while maintaining or improving the quality of uncertainty quantification. Our experiments across multiple datasets show that RegVar not only identifies uncertain predictions effectively but also provides insight into the stability of learned representations.
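As a toy illustration of the regularization-variation principle (a minimal sketch, not the paper's implementation), one can fit a ridge-regression model at two nearby regularization strengths and score each test point by how much its prediction moves. The function names and the finite-difference scheme below are our own assumptions; the closed-form ridge solver stands in for the fine-tuning step that the abstract describes for deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # training inputs
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)  # noisy targets

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X'X + lam I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def regvar_score(X, y, x_test, lam=1.0, eps=1e-3):
    """Finite-difference sensitivity of the prediction at x_test
    to a small perturbation of the regularization strength."""
    w0 = ridge_fit(X, y, lam)
    w1 = ridge_fit(X, y, lam + eps)
    return np.abs(x_test @ (w1 - w0)) / eps

x = rng.normal(size=5)
s1 = regvar_score(X, y, x)        # uncertainty score for x
s2 = regvar_score(X, y, 2 * x)    # score scales with the input in this linear model
```

Predictions that barely move when the regularizer is nudged get a low score (high confidence); predictions that swing get a high score (low confidence). In the full method this perturbation acts on the training loss of all layers of a deep network rather than on a linear model.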