🤖 AI Summary
This work addresses optimizing the generalization performance of majority-voting classifiers in multi-view learning. Methodologically, it extends PAC-Bayesian theory to the multi-view setting, deriving novel generalization upper bounds based on Rényi divergence and thereby establishing the first PAC-Bayesian framework for multi-view ensembles. It further derives first- and second-order oracle bounds and a multi-view C-bound, keeping the theoretical guarantees aligned with tractable optimization objectives. To enable practical deployment, a self-bounding convex optimization algorithm is designed, yielding generalization bounds that are tight, computationally efficient to evaluate, and differentiable. Empirical evaluation across multiple standard multi-view benchmark datasets demonstrates substantial improvements in classification accuracy and model robustness, validating the effectiveness of the proposed theoretical guidance.
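For context, the C-bound mentioned above generalizes a classical single-view bound on the majority-vote risk. A standard statement (notation ours, not necessarily the paper's: $\rho$ is the posterior over voters $h$, $M_\rho$ the $\rho$-weighted margin) is:

```latex
% Classical (single-view) C-bound, which the paper extends to multiple views.
% M_rho is the rho-weighted margin of the vote; the bound requires its first
% moment to be positive, and recovers the factor-of-two bound in the worst case.
M_\rho(x, y) = \mathbb{E}_{h \sim \rho}\!\left[\, y\, h(x) \,\right],
\qquad
R(\mathrm{MV}_\rho) \;\le\; 1 - \frac{\bigl(\mathbb{E}[M_\rho]\bigr)^2}{\mathbb{E}\!\left[M_\rho^2\right]}
\quad \text{whenever } \mathbb{E}[M_\rho] > 0 .
```

The second-moment term in the denominator is what makes such bounds sensitive to correlations among voters, which is the property the multi-view extension exploits.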
📝 Abstract
The PAC-Bayesian framework has significantly advanced the understanding of statistical learning, particularly for majority voting methods. Despite its successes, its application to multi-view learning -- a setting with multiple complementary data representations -- remains underexplored. In this work, we extend PAC-Bayesian theory to multi-view learning, introducing novel generalization bounds based on Rényi divergence. These bounds provide an alternative to traditional Kullback-Leibler divergence-based counterparts, leveraging the flexibility of Rényi divergence. Furthermore, we propose first- and second-order oracle PAC-Bayesian bounds and extend the C-bound to multi-view settings. To bridge theory and practice, we design efficient self-bounding optimization algorithms that align with our theoretical results.
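For readers unfamiliar with the divergence involved, the Rényi divergence of order $\alpha > 1$ between a posterior $\rho$ and a prior $\pi$ over hypotheses is defined as follows (standard definition; notation ours):

```latex
% Renyi divergence of order alpha between distributions rho and pi over hypotheses h;
% it recovers the Kullback-Leibler divergence KL(rho || pi) in the limit alpha -> 1.
D_\alpha(\rho \,\|\, \pi) \;=\; \frac{1}{\alpha - 1}
\log \mathbb{E}_{h \sim \pi}\!\left[ \left( \frac{d\rho}{d\pi}(h) \right)^{\!\alpha} \right]
```

Since $D_\alpha$ is nondecreasing in $\alpha$ and tends to the KL divergence as $\alpha \to 1$, the order $\alpha$ acts as a tunable knob between KL-type bounds and looser but more flexible complexity measures.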