🤖 AI Summary
To address the challenge of input-dependent weight assignment in multi-model ensemble prediction under heterogeneous environments, this paper proposes Input-Adaptive Bayesian Model Averaging (IA-BMA). IA-BMA achieves sample-wise dynamic weighting by conditioning the prior distribution over models on input features, and it employs amortized variational inference for efficient posterior weight estimation. Theoretically, the paper derives formal performance guarantees for IA-BMA relative to any single predictor selected per input. Experiments on personalized cancer treatment, credit-card fraud detection, and several UCI benchmarks show that IA-BMA consistently surpasses both non-adaptive baselines and existing adaptive methods, improving both predictive accuracy and probabilistic calibration. The core contribution is generalizing Bayesian model averaging from static, global weighting to an input-driven, fine-grained adaptive mechanism, enabling context-aware and uncertainty-aware ensembling.
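The input-adaptive weighting idea can be illustrated with a minimal sketch. Everything below is hypothetical and not the paper's implementation: the fixed base models, the linear gating map (`W`, `b`) standing in for the learned amortized inference network, and the plain softmax weighting are all placeholder assumptions used only to show per-input weights combining per-model predictions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical base models standing in for pretrained candidate predictors.
base_models = [lambda x: 2.0 * x, lambda x: x ** 2]

# Toy gating parameters: a linear map from input features to per-model
# logits, a stand-in for the paper's amortized inference network.
W = np.array([[1.0, -1.0]])  # shape (d, K): d=1 feature, K=2 models
b = np.array([0.0, 0.0])

def ia_bma_predict(x):
    """Input-adaptive weighted average of base-model predictions."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)   # (n, 1)
    weights = softmax(x @ W + b)                    # (n, K), per-input weights
    preds = np.stack([m(x[:, 0]) for m in base_models], axis=-1)  # (n, K)
    return (weights * preds).sum(axis=-1)           # per-sample combination
```

At `x = 0` the gate is indifferent (weights 0.5/0.5); at `x = 10` the logits strongly favor the first model, so the combined prediction is pulled toward it. This is the qualitative behavior the summary describes: different inputs receive different model weightings.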
📝 Abstract
This paper studies prediction with multiple candidate models, where the goal is to combine their outputs. This task is especially challenging in heterogeneous settings, where different models may be better suited to different inputs. We propose Input-Adaptive Bayesian Model Averaging (IA-BMA), a Bayesian method that assigns model weights conditional on the input. IA-BMA employs an input-adaptive prior and yields a posterior distribution that adapts to each prediction, which we estimate with amortized variational inference. We derive formal guarantees for its performance relative to any single predictor selected per input. We evaluate IA-BMA across regression and classification tasks, studying data from personalized cancer treatment, credit-card fraud detection, and UCI datasets. IA-BMA consistently delivers more accurate and better-calibrated predictions than both non-adaptive baselines and existing adaptive methods.
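As a notational sketch of the contrast the abstract draws (the symbols here are illustrative, not necessarily the paper's): classical BMA averages candidate models $M_1, \dots, M_K$ under global, input-independent posterior weights, whereas the input-adaptive variant conditions the weights on the input $x$:

```latex
% Classical BMA: one global weight per model
p(y \mid x, \mathcal{D}) = \sum_{k=1}^{K} p(y \mid x, M_k)\, p(M_k \mid \mathcal{D})

% Input-adaptive BMA: weights \pi_k(x) vary with the input x
p(y \mid x, \mathcal{D}) = \sum_{k=1}^{K} p(y \mid x, M_k)\, \pi_k(x),
\qquad \sum_{k=1}^{K} \pi_k(x) = 1
```

In both cases the predictive distribution is a convex combination of the candidates' predictive distributions; the difference is solely whether the mixing weights are fixed for the whole dataset or recomputed per input.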