Conformal Risk Training: End-to-End Optimization of Conformal Risk Control

📅 2025-10-09
🤖 AI Summary
Deep learning models typically lack provable risk guarantees in high-stakes applications. Existing conformal risk control (CRC) provides finite-sample bounds only on the expected loss, offering no protection against tail risks; moreover, applying CRC post hoc can degrade average predictive performance because the model receives no feedback. Method: We propose the first end-to-end conformal risk training framework, extending CRC to the broad class of Optimized Certainty-Equivalent (OCE) risk measures (which includes the expected loss and conditional value-at-risk, CVaR) via a differentiable conformalized loss that enables direct OCE risk minimization, so the risk constraint is enforced during training itself. Contribution/Results: The method retains strict finite-sample risk guarantees while improving average-case performance. Experiments on controlling classifiers' false negative rate and on controlling financial risk in battery storage operation show significant gains over standard post-hoc CRC, achieving tighter empirical risk control and better predictive accuracy under theoretically valid risk bounds.

📝 Abstract
While deep learning models often achieve high predictive accuracy, their predictions typically do not come with any provable guarantees on risk or reliability, which are critical for deployment in high-stakes applications. The framework of conformal risk control (CRC) provides a distribution-free, finite-sample method for controlling the expected value of any bounded monotone loss function and can be conveniently applied post-hoc to any pre-trained deep learning model. However, many real-world applications are sensitive to tail risks, as opposed to just expected loss. In this work, we develop a method for controlling the general class of Optimized Certainty-Equivalent (OCE) risks, a broad class of risk measures which includes as special cases the expected loss (generalizing the original CRC method) and common tail risks like the conditional value-at-risk (CVaR). Furthermore, standard post-hoc CRC can degrade average-case performance due to its lack of feedback to the model. To address this, we introduce "conformal risk training," an end-to-end approach that differentiates through conformal OCE risk control during model training or fine-tuning. Our method achieves provable risk guarantees while demonstrating significantly improved average-case performance over post-hoc approaches on applications to controlling classifiers' false negative rate and controlling financial risk in battery storage operation.
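For reference, the OCE family mentioned in the abstract admits a compact variational form. The following is a sketch of the standard definitions (OCE in the sense of Ben-Tal and Teboulle; the CVaR case is the Rockafellar-Uryasev formula); the notation here is ours and may differ from the paper's:

```latex
% OCE risk of a loss L, for a convex, nondecreasing disutility \varphi
% with \varphi(0) = 0:
\[
  \rho_{\varphi}(L) \;=\; \inf_{t \in \mathbb{R}}
  \Bigl\{\, t + \mathbb{E}\bigl[\varphi(L - t)\bigr] \,\Bigr\}.
\]
% Special cases:
%   \varphi(x) = x                         =>  \rho(L) = \mathbb{E}[L]
%     (expected loss; recovers the original CRC setting)
%   \varphi(x) = (x)_+ / (1 - \alpha)      =>  \rho(L) = \mathrm{CVaR}_\alpha(L)
%     (Rockafellar-Uryasev variational formula for CVaR)
```

Because the infimum over the scalar t is an ordinary minimization, OCE risks are amenable to gradient-based training, which is what the end-to-end approach exploits.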
Problem

Research questions and friction points this paper is trying to address.

Providing provable risk guarantees for deep learning models in high-stakes applications
Controlling tail risks beyond expected loss through Optimized Certainty-Equivalent measures
Improving average-case performance by integrating conformal risk control end-to-end into training
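The post-hoc baseline the paper improves on can be sketched in a few lines. This follows the standard CRC rule (pick the smallest threshold whose adjusted empirical calibration risk meets the target); the function names and the toy indicator loss below are illustrative, not from the paper:

```python
import numpy as np

def crc_threshold(loss_at, lambdas, alpha, B=1.0):
    """Post-hoc conformal risk control: return the smallest lambda whose
    adjusted empirical risk on n calibration points satisfies the CRC
    bound (n * Rhat(lam) + B) / (n + 1) <= alpha.

    loss_at(lam) -> per-example losses in [0, B], assumed non-increasing
    in lam (the bounded monotone loss CRC requires).
    """
    for lam in np.sort(lambdas):
        losses = loss_at(lam)
        n = len(losses)
        if (n * losses.mean() + B) / (n + 1) <= alpha:
            return float(lam)
    return float(np.max(lambdas))  # most conservative fallback

# Toy calibration set: loss = 1 if a score exceeds the threshold lam.
scores = np.linspace(0.0, 1.0, 100)
lam_hat = crc_threshold(lambda lam: (scores > lam).astype(float),
                        np.linspace(0.0, 1.0, 101), alpha=0.2)
print(lam_hat)  # ~0.81: smallest threshold meeting the 0.2 risk target
```

The search is purely post hoc: the model producing `scores` gets no gradient signal from it, which is exactly the limitation conformal risk training addresses.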
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end conformal risk training for OCE risks
Differentiates through conformal control during training
Controls tail risks like CVaR beyond expected loss
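To make "differentiates through conformal control" concrete in the CVaR case: by the Rockafellar-Uryasev variational formula, the empirical CVaR of per-example losses is (sub)differentiable, so it can be minimized by gradient descent on model parameters. A minimal numpy sketch of the risk and its subgradient (illustrative only; the paper's conformalized loss construction is not reproduced here):

```python
import numpy as np

def cvar(losses, alpha):
    """Empirical CVaR_alpha via Rockafellar-Uryasev:
    CVaR_alpha(L) = min_t  t + E[(L - t)_+] / (1 - alpha),
    minimized at t = the alpha-quantile of the losses (VaR_alpha)."""
    t = np.quantile(losses, alpha)
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

def cvar_subgradient(losses, alpha):
    """Subgradient of CVaR w.r.t. each loss: tail samples get weight
    1 / ((1 - alpha) * n), the rest get 0 -- only the tail drives updates."""
    n = len(losses)
    t = np.quantile(losses, alpha)
    return (losses > t).astype(float) / ((1.0 - alpha) * n)

losses = np.arange(1.0, 101.0)          # toy per-example losses
print(cvar(losses, alpha=0.9))          # ~95.5, mean of the worst 10%
```

In practice one would compute the per-example conformalized losses as a differentiable function of the model parameters and let an autodiff framework (e.g. PyTorch or JAX) propagate this risk gradient through training.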