Guided Uncertainty Learning Using a Post-Hoc Evidential Meta-Model

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep learning models often exhibit overconfidence under distributional shift, and existing post-hoc calibration methods fail to address this issue fundamentally. This paper proposes GUIDE, a retraining-free evidential meta-model framework that freezes the backbone network, identifies salient internal features via a calibration stage, and constructs a noise-driven curriculum that explicitly teaches the model *when* to be uncertain and *how* to quantify that uncertainty. GUIDE introduces no architectural or parametric modifications to the original model; it learns an uncertainty representation mechanism solely from calibration data. Evaluated across multiple benchmarks, GUIDE improves out-of-distribution detection by approximately 77% and adversarial example detection by approximately 80%, significantly surpassing current state-of-the-art methods. The approach achieves high reliability while remaining non-intrusive, preserving model integrity and deployment compatibility.
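The evidential meta-model mechanism described above can be sketched as a small head that maps frozen-backbone features to non-negative "evidence", interpreted as Dirichlet concentration parameters, so that predictive uncertainty falls out of the total evidence rather than a softmax. The sketch below is illustrative only, not the authors' implementation: the single-layer head, the softplus activation, and all variable names are assumptions.

```python
import numpy as np

def evidential_head(features, W, b):
    """Map frozen-backbone features to Dirichlet evidence (illustrative only).

    evidence = softplus(features @ W + b) >= 0, alpha = evidence + 1.
    A hypothetical single-layer head; the paper's meta-model may differ.
    """
    logits = features @ W + b
    evidence = np.logaddexp(0.0, logits)               # numerically stable softplus
    alpha = evidence + 1.0                             # Dirichlet concentration parameters
    probs = alpha / alpha.sum(axis=-1, keepdims=True)  # expected class probabilities
    n_classes = alpha.shape[-1]
    vacuity = n_classes / alpha.sum(axis=-1)           # high when total evidence is low
    return probs, vacuity

# Weak (shifted-looking) features yield little evidence, hence high vacuity.
rng = np.random.default_rng(0)
W, b = rng.normal(scale=0.1, size=(16, 3)), np.zeros(3)
probs, vacuity = evidential_head(rng.normal(size=(4, 16)), W, b)
```

Because alpha is at least 1 per class, vacuity is bounded in (0, 1], which gives a ready-made out-of-distribution score without touching the base model.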

📝 Abstract
Reliable uncertainty quantification remains a major obstacle to the deployment of deep learning models under distributional shift. Existing post-hoc approaches that retrofit pretrained models either inherit misplaced confidence or merely reshape predictions, without teaching the model when to be uncertain. We introduce GUIDE, a lightweight evidential learning meta-model approach that attaches to a frozen deep learning model and explicitly learns how and when to be uncertain. GUIDE identifies salient internal features via a calibration stage, and then employs these features to construct a noise-driven curriculum that teaches the model how and when to express uncertainty. GUIDE requires no retraining, no architectural modifications, and no manual intermediate-layer selection for the base deep learning model, thus ensuring broad applicability and minimal user intervention. The resulting model avoids distilling overconfidence from the base model, improves out-of-distribution detection by ~77% and adversarial attack detection by ~80%, while preserving in-distribution performance. Across diverse benchmarks, GUIDE consistently outperforms state-of-the-art approaches, evidencing the need for actively guiding uncertainty to close the gap between predictive confidence and reliability.
Problem

Research questions and friction points this paper is trying to address.

Reliable uncertainty quantification under distributional shift remains challenging
Existing post-hoc approaches fail to teach models when to be uncertain
A lightweight, retraining-free mechanism is needed that learns how and when to express uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attaches lightweight meta-model to frozen base model
Learns uncertainty via noise-driven curriculum training
Requires no retraining or architectural modifications
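The noise-driven curriculum listed above can be sketched as a schedule that corrupts only the most salient feature dimensions with growing amplitude, pairing each stage with a correspondingly higher target uncertainty for the meta-model to learn. This is a minimal illustration under assumed design choices (top-25% saliency cut, linear noise and target schedule); the paper's actual curriculum may differ.

```python
import numpy as np

def noise_curriculum(features, saliency, stages=5, rng=None):
    """Yield (corrupted_features, target_uncertainty) pairs of rising difficulty.

    Gaussian noise is injected only into the most salient feature dimensions,
    with amplitude growing over curriculum stages; the target uncertainty grows
    with it. Hypothetical schedule -- the paper's curriculum may differ.
    """
    rng = rng or np.random.default_rng(0)
    k = max(1, saliency.size // 4)              # top-25% salient dims (assumption)
    top = np.argsort(saliency)[-k:]
    for s in range(1, stages + 1):
        scale = s / stages                      # noise amplitude ramps up to 1
        noisy = features.copy()
        noisy[:, top] += rng.normal(scale=scale, size=(features.shape[0], k))
        target_u = scale                        # more corruption -> more uncertainty
        yield noisy, target_u

# Tiny usage example on random stand-in calibration features.
rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 16))
sal = rng.random(16)
pairs = list(noise_curriculum(feats, sal, stages=4, rng=rng))
```

Training the meta-model against these pairs is what "teaches" it to raise uncertainty on corrupted inputs instead of distilling the base model's overconfidence.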
Charmaine Barker
Department of Computer Science, University of York, York, UK
Daniel Bethell
Department of Computer Science, University of York, York, UK
Simos Gerasimou
Associate Professor (Senior Lecturer) in Computer Science, University of York
Self-Adaptive Systems · Software Engineering · AI Safety