Structured Basis Function Networks: Loss-Centric Multi-Hypothesis Ensembles with Controllable Diversity

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing uncertainty quantification methods suffer from two disjoint limitations: multi-hypothesis prediction lacks a principled aggregation mechanism, while ensemble learning struggles to model structured ambiguity; neither aligns with the geometric structure of task-specific loss functions. This work proposes a Bregman-divergence-based centroidal aggregation framework that unifies multi-hypothesis generation and ensemble learning. By explicitly embedding the geometric properties of regression and classification loss functions into the aggregation step, it yields loss-aware multi-hypothesis ensembles. A tunable diversity regularizer enables explicit trade-offs among bias, variance, and diversity, revealing intrinsic relationships among model complexity, capacity, and diversity. The method combines closed-form least-squares estimation with gradient-based optimization for efficient structured uncertainty modeling. Experiments on diverse, challenging benchmarks demonstrate controllable hypothesis diversity and significant gains in generalization and robustness for deep neural network uncertainty quantification.
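The centroidal aggregation idea can be made concrete with a classical result: for any Bregman divergence, the minimizer of the total divergence from a set of points (in the second argument) is their arithmetic mean (Banerjee et al., 2005). The sketch below illustrates this for the squared loss (regression) and the KL divergence (classification); it is a minimal illustration of the aggregation principle under those assumptions, not the paper's implementation, and the function name is ours.

```python
import numpy as np

def bregman_centroid(hypotheses, weights=None):
    """Right-sided Bregman centroid of a set of hypothesis predictions.

    For any Bregman divergence D_phi, the minimizer of
    sum_k w_k * D_phi(y_k, z) over z is the (weighted) arithmetic mean
    of the y_k. This covers both the squared loss (regression) and the
    KL divergence (classification), so loss-aware aggregation reduces
    to averaging member outputs.
    """
    return np.average(hypotheses, axis=0, weights=weights)

# Regression: average K point predictions.
reg_hyps = np.array([[0.9], [1.1], [1.3]])        # shape (K, n_outputs)
print(bregman_centroid(reg_hyps))                  # [[1.1]]

# Classification: average K probability vectors (still a distribution).
clf_hyps = np.array([[0.7, 0.3], [0.5, 0.5], [0.6, 0.4]])
print(bregman_centroid(clf_hyps))                  # [0.6 0.4]
```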

📝 Abstract
Existing approaches to predictive uncertainty rely either on multi-hypothesis prediction, which promotes diversity but lacks principled aggregation, or on ensemble learning, which improves accuracy but rarely captures structured ambiguity. As a result, a unified framework consistent with the loss geometry remains absent. The Structured Basis Function Network addresses this gap by linking multi-hypothesis prediction and ensembling through centroidal aggregation induced by Bregman divergences. The formulation applies across regression and classification by aligning predictions with the geometry of the loss, and supports both a closed-form least-squares estimator and a gradient-based procedure for general objectives. A tunable diversity mechanism provides parametric control of the bias-variance-diversity trade-off, connecting multi-hypothesis generalisation with loss-aware ensemble aggregation. Experiments validate this relation and use the mechanism to study the complexity-capacity-diversity trade-off across datasets of increasing difficulty with deep-learning predictors.
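As a rough sketch of the closed-form path mentioned in the abstract: if each hypothesis is a linear combination of fixed basis functions, its output weights admit the standard regularized least-squares solution. The basis choice (Gaussian RBFs here) and the ridge term are our assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Gaussian radial basis activations (an assumed basis choice)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)  # shape (n_samples, n_basis)

def fit_basis_weights(Phi, Y, ridge=1e-6):
    """Closed-form ridge least-squares weights for a basis function network.

    Solves min_W ||Phi W - Y||^2 + ridge * ||W||^2, whose solution is
    W = (Phi^T Phi + ridge * I)^{-1} Phi^T Y.
    """
    gram = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(gram, Phi.T @ Y)

# Usage: fit one hypothesis head on toy 1-D data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
Y = np.sin(3 * X) + 0.1 * rng.normal(size=(200, 1))
centers = np.linspace(-1, 1, 20).reshape(-1, 1)
Phi = rbf_features(X, centers, gamma=10.0)
W = fit_basis_weights(Phi, Y)
preds = Phi @ W  # predictions of this hypothesis
```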
Problem

Research questions and friction points this paper is trying to address.

Unifying multi-hypothesis prediction and ensemble learning
Addressing the lack of principled aggregation in uncertainty methods
Controlling bias-variance-diversity trade-off in predictive models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Centroidal aggregation using Bregman divergences for ensembles
Tunable diversity mechanism controlling the bias-variance-diversity trade-off (see the sketch after this list)
Unified framework aligning predictions with loss geometry
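A minimal sketch of how such a tunable diversity mechanism could look, assuming a squared-loss ambiguity decomposition in the spirit of negative-correlation learning; the paper's exact regularizer may differ, and the function name and the weight beta are ours.

```python
import numpy as np

def diversity_regularized_loss(preds, y, beta=0.5):
    """Squared-loss training objective with a tunable diversity term.

    preds : (K, n) predictions from K hypotheses.
    y     : (n,)  targets.
    beta  : diversity weight. The ambiguity decomposition gives
            centroid_error = member_error - ambiguity, so beta = 0
            trains members independently and beta = 1 trains the
            centroid directly; values in between trade variance
            against diversity.
    """
    centroid = preds.mean(axis=0)                 # Bregman centroid under squared loss
    member_error = ((preds - y) ** 2).mean()      # average individual error
    ambiguity = ((preds - centroid) ** 2).mean()  # spread around the centroid
    return member_error - beta * ambiguity

# Example: three hypotheses on two samples.
preds = np.array([[1.0, 2.0],
                  [1.2, 1.8],
                  [0.8, 2.2]])
y = np.array([1.0, 2.0])
print(diversity_regularized_loss(preds, y, beta=0.0))  # pure member accuracy
print(diversity_regularized_loss(preds, y, beta=1.0))  # centroid loss (0 here)
```

In this assumed form, increasing beta rewards hypotheses that spread out around the centroid, which is one concrete way a single scalar could expose the bias-variance-diversity trade-off the bullets describe.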