🤖 AI Summary
This work addresses the intractability of exact conformal prediction for real-valued outputs, where computing the confidence region in principle requires training an estimator for every candidate value of the response. The authors propose a general framework for efficiently constructing a tight approximation of the full conformal prediction region within a reproducing kernel Hilbert space (RKHS). By introducing a notion of “thickness” to quantify the deviation between the approximate and true conformal regions, and by leveraging the smoothness of the loss and score functions, they establish approximation error bounds that depend explicitly on the regularity of the underlying function. The approach both enables scalable computation of conformal confidence regions and provides rigorous theoretical guarantees on the tightness of the approximation.
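To make the computational issue concrete, here is a minimal sketch of the naive grid-based approximation of a full conformal region, the kind of object the paper's framework refines with theoretical guarantees. The RBF kernel-ridge estimator, the absolute-residual score, and the parameters `alpha` and `y_grid` are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def approx_full_conformal_region(X, y, x_new, y_grid, alpha=0.1):
    """Grid approximation of the full conformal region at x_new.

    For each candidate label y_cand, the estimator is refit on the
    sample augmented with (x_new, y_cand); y_cand is kept when its
    rank-based conformal p-value exceeds alpha.
    """
    region = []
    for y_cand in y_grid:
        X_aug = np.vstack([X, x_new])        # augment inputs with x_new
        y_aug = np.append(y, y_cand)         # ... and labels with the candidate
        model = KernelRidge(kernel="rbf", alpha=1.0).fit(X_aug, y_aug)  # RKHS estimator
        scores = np.abs(y_aug - model.predict(X_aug))  # absolute-residual conformity scores
        p_value = np.mean(scores >= scores[-1])        # rank of the candidate's score
        if p_value > alpha:
            region.append(y_cand)
    return np.array(region)

# Example: region for x = 0.5 on a noisy sine, scanning 201 candidate labels
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
print(approx_full_conformal_region(X, y, np.array([[0.5]]), np.linspace(-2, 2, 201)))
```

The exact region would require such a refit for every real-valued candidate, which is precisely the intractability that the paper's approximate region and its thickness bounds address.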
📝 Abstract
Full conformal prediction is a framework that implicitly defines distribution-free confidence prediction regions for a wide range of estimators. A classical limitation of the framework, however, is that computing these regions is usually infeasible, since it requires training infinitely many estimators (e.g., one per candidate value in real-valued prediction). The main purpose of the present work is to describe a generic strategy for designing a tight, efficiently computable approximation of the full conformal prediction region. Alongside this approximate confidence region, a theoretical quantification of the tightness of the approximation is developed under smoothness assumptions on the loss and score functions. A new notion of thickness is introduced to quantify the discrepancy between the approximate confidence region and the full conformal one.
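For context, the object being approximated can be written compactly. This is the standard full conformal region (notation assumed here, not taken from the paper), where $\hat{\mu}_y$ denotes the estimator retrained on the sample augmented with the candidate pair $(x, y)$:

```latex
% Standard full conformal region at level alpha for a new input x;
% (x_{n+1}, y_{n+1}) = (x, y) is the augmented candidate pair.
\[
  \widehat{C}_\alpha(x)
  = \Bigl\{\, y \in \mathbb{R} \;:\;
      \frac{1}{n+1} \sum_{i=1}^{n+1}
      \mathbf{1}\bigl\{\, s_i(y) \ge s_{n+1}(y) \,\bigr\} > \alpha
    \,\Bigr\},
  \qquad
  s_i(y) = s\bigl((x_i, y_i),\, \hat{\mu}_y\bigr).
\]
% Evaluating \hat{\mu}_y for every real y is what makes the exact region
% intractable and motivates the approximate region and its thickness bound.
```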