🤖 AI Summary
This paper addresses the efficiency of conformal prediction by proposing a unified framework for constructing confidence sets that applies to both classification and regression tasks. Methodologically, it establishes an efficiency comparison—under an average-case criterion—among randomness prediction, exchangeability prediction, and conformal prediction. Building on the exchangeability assumption, together with tools from stochastic process theory, distributionally robust modeling, and generalization error analysis, the framework gives a theoretical characterization over general label spaces and broad families of probability measures. The main contributions are threefold: (i) it removes the restriction to classification, so the results also cover regression; (ii) it shows that efficiency is attainable on average, with respect to a wide range of probability measures on the label space, beyond the strict i.i.d. setting; and (iii) it delivers a computationally efficient construction of set-valued and functional predictions, backed by theoretical guarantees and practical performance bounds.
📝 Abstract
This paper continues the study of the efficiency of conformal prediction as compared with more general randomness prediction and exchangeability prediction. It does not restrict itself to the case of classification, and its results are also applicable to the case of regression. The price to pay is that efficiency is attained only on average, albeit with respect to a wide range of probability measures on the label space.
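
For readers unfamiliar with the basic machinery, the sketch below shows how a conformal-style prediction set is built in the simplest split (inductive) setting for regression. It is only an illustration of what "validity" (coverage) and "efficiency" (set size) refer to here, not the paper's construction, which concerns full conformal, randomness, and exchangeability prediction; the synthetic data, the least-squares point predictor, and the function name `split_conformal_interval` are assumptions made for the example.

```python
import numpy as np

def split_conformal_interval(x_train, y_train, x_calib, y_calib, x_new, alpha=0.1):
    """Return a (1 - alpha) prediction interval for y at x_new (split conformal)."""
    # Fit any point predictor on the proper training split;
    # a simple least-squares line is used purely for illustration.
    coeffs = np.polyfit(x_train, y_train, deg=1)

    def predict(x):
        return np.polyval(coeffs, x)

    # Nonconformity scores on the calibration split: absolute residuals.
    scores = np.abs(y_calib - predict(x_calib))

    # Conformal quantile level ceil((n + 1)(1 - alpha)) / n, clipped to 1.
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")

    # The prediction set is an interval; its width is the "efficiency" being compared.
    y_hat = predict(x_new)
    return y_hat - q, y_hat + q

# Tiny usage example on hypothetical synthetic data.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 200)
y = 2.0 * x + rng.normal(0.0, 1.0, 200)
lo, hi = split_conformal_interval(x[:100], y[:100], x[100:], y[100:], x_new=5.0)
print(f"90% prediction interval at x = 5: [{lo:.2f}, {hi:.2f}]")
```

Under exchangeability of the calibration and test examples, such a set covers the true label with probability at least 1 − alpha; efficiency is then a matter of how small the set is, and the paper compares this efficiency on average over probability measures on the label space.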