Input Adaptive Bayesian Model Averaging

📅 2025-10-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of input-dependent weight assignment in multi-model ensemble prediction under heterogeneous environments, this paper proposes Input-Adaptive Bayesian Model Averaging (IA-BMA). IA-BMA achieves sample-wise dynamic weighting by conditioning the prior distribution over model weights on input features, and employs amortized variational inference for efficient posterior weight estimation. Theoretically, IA-BMA comes with formal guarantees on its performance relative to any single predictor selected per input. Extensive experiments on personalized cancer treatment, credit-card fraud detection, and multiple UCI benchmarks demonstrate that IA-BMA consistently surpasses both non-adaptive baselines and existing adaptive methods, delivering improvements in both predictive accuracy and probabilistic calibration. The core contribution lies in generalizing Bayesian model averaging from static, global weighting to an input-driven, fine-grained adaptive mechanism, thereby enabling context-aware, uncertainty-aware ensembling.

📝 Abstract
This paper studies prediction with multiple candidate models, where the goal is to combine their outputs. This task is especially challenging in heterogeneous settings, where different models may be better suited to different inputs. We propose input adaptive Bayesian Model Averaging (IA-BMA), a Bayesian method that assigns model weights conditional on the input. IA-BMA employs an input adaptive prior and yields a posterior distribution that adapts to each prediction, which we estimate with amortized variational inference. We derive formal guarantees for its performance, relative to any single predictor selected per input. We evaluate IA-BMA across regression and classification tasks, studying data from personalized cancer treatment, credit-card fraud detection, and UCI datasets. IA-BMA consistently delivers more accurate and better-calibrated predictions than both non-adaptive baselines and existing adaptive methods.
Problem

Research questions and friction points this paper is trying to address.

Adaptively combines multiple models' outputs per input
Addresses heterogeneous settings with varying model suitability
Provides input-conditional Bayesian averaging with performance guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Input adaptive Bayesian Model Averaging method
Amortized variational inference for estimation
Input adaptive prior for conditional weights
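The core mechanic described above is that the ensemble weights are a function of the input rather than global constants. The paper's actual model is Bayesian (an input-adaptive prior with amortized variational inference over the posterior weights); as a much simpler illustration of the input-conditional weighting idea only, the sketch below uses a toy linear-softmax "weight network" over two hypothetical base models. All names and parameter values here (`model_a`, `model_b`, `W`, `b`) are invented for the example and are not from the paper.

```python
import numpy as np

# Two hypothetical base models, each suited to a different input regime.
def model_a(x):
    return 2.0 * x        # imagined to do well for small x

def model_b(x):
    return x ** 2         # imagined to do well for large x

def adaptive_weights(x, W, b):
    """Toy amortized weight function: per-input softmax over model logits.

    In IA-BMA the weights come from a posterior estimated with amortized
    variational inference; this deterministic softmax is a stand-in.
    """
    logits = np.outer(x, W) + b                     # shape (n, 2)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

x = np.array([0.5, 1.0, 3.0])
W = np.array([-1.0, 1.0])   # hypothetical "learned" parameters: larger x
b = np.array([0.0, 0.0])    # shifts weight toward model_b

w = adaptive_weights(x, W, b)                       # per-input model weights
preds = np.stack([model_a(x), model_b(x)], axis=1)  # each model's prediction
y_hat = (w * preds).sum(axis=1)                     # input-adaptive ensemble
```

The point of the sketch is the contrast with static BMA: here `w` differs row by row, so each sample gets its own mixture of the base models instead of one global weighting.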
Yuli Slavutsky
Postdoctoral Research Scientist, Columbia University
Machine Learning, Statistics
Sebastian Salazar
Department of Computer Science, Columbia University, New York, NY 10027, USA
David M. Blei
Department of Statistics, Columbia University, New York, NY 10027, USA; Department of Computer Science, Columbia University, New York, NY 10027, USA