AI Summary
The generalization mechanisms of deep neural networks (DNNs) remain poorly understood, particularly how collaborative decision-making across multiple complementary evidence sources can be quantitatively characterized.
Method: We introduce the notion of *Minimum Sufficient View* (MSV): the smallest set of mutually complementary input views formally required to support a reliable prediction. Our end-to-end framework integrates differentiable view selection, evidence entropy regularization, and multi-view feature disentanglement to establish a quantitative relationship between prediction confidence and the number of sufficient views.
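Two of the components above could be sketched as follows: a temperature-relaxed softmax over per-view logits (standing in for differentiable view selection) and a Shannon entropy over the resulting view weights (the quantity an evidence entropy regularizer would penalize). This is a minimal illustrative sketch; all function names are hypothetical, as the summary does not provide code, and a real implementation would use an autograd framework.

```python
import numpy as np

def softmax(z, tau=1.0):
    """Numerically stable softmax with temperature tau."""
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def select_views(view_logits, tau=0.5):
    """Relaxed view-selection weights; lower tau pushes toward a near one-hot
    choice, i.e. a smaller effective view subset."""
    return softmax(view_logits, tau)

def evidence_entropy(weights, eps=1e-12):
    """Shannon entropy of the view-weight distribution. Penalizing this
    value discourages diffuse weights over many views."""
    w = np.clip(weights, eps, 1.0)
    return float(-(w * np.log(w)).sum())

# Three candidate views with hypothetical logits: the entropy penalty
# would push the selector toward the smallest sufficient subset.
logits = [2.0, 0.1, -1.0]
w = select_views(logits, tau=0.5)
H = evidence_entropy(w)
```

The temperature `tau` controls the sharpness of the selection: annealing it toward zero makes the relaxed weights approach a discrete choice of views while keeping gradients usable during training.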
Contribution/Results: Experiments on ImageNet-C and Multi-View CIFAR demonstrate that each additional MSV yields an average +2.3% Top-1 accuracy gain and a 37% reduction in calibration error. By moving beyond single-view evaluation paradigms, this work provides a novel theoretical framework and empirical foundation for analyzing DNN robustness and interpretability.