AI Summary
To address noise interference in modeling user preferences from implicit feedback, this paper proposes the Adaptive Ensemble Learning (AEL) framework. AEL is built on a Sparse Mixture-of-Experts (MoE) architecture, combining lightweight stacked sub-recommenders with a sparse gating network that dynamically selects a suitable subset of experts for each sample, enabling sample-level adaptive denoising. Its key contribution is a sparse-MoE-driven adaptive ensemble paradigm that jointly delivers robustness, diversity, and computational efficiency. Extensive experiments on multiple public benchmarks show that AEL significantly outperforms state-of-the-art denoising methods and maintains superior performance under both high-noise and dynamically varying noise conditions.
Abstract
Learning user preferences from implicit feedback is one of the core challenges in recommendation. The difficulty lies in the noise potentially contained in implicit feedback, and various denoising recommendation methods have been proposed in response. However, most of them rely heavily on hyperparameter configurations, which limits their adaptability and generalization performance. In this study, we propose a novel Adaptive Ensemble Learning (AEL) framework for denoising recommendation, which employs a sparse gating network as a brain, selecting suitable experts to synthesize appropriate denoising capacities for different data samples. To mitigate the model complexity typical of ensemble learning while ensuring sub-recommender diversity, we also propose a novel method that stacks components to create sub-recommenders instead of constructing them directly. Extensive experiments across various datasets demonstrate that AEL outperforms existing methods on a range of popular metrics, even in the presence of substantial and dynamic noise. Our code is available at https://github.com/cpu9xx/AEL.
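The per-sample expert selection described above can be illustrated with a minimal top-k sparse gating step. This is a hedged sketch, not the paper's implementation: the linear experts, the function name `sparse_moe_denoise`, and all shapes here are illustrative assumptions; the actual AEL sub-recommenders and gating network differ (see the linked repository).

```python
import numpy as np

def sparse_moe_denoise(x, expert_weights, gate_weights, k=2):
    """Sketch of sample-level sparse MoE gating (illustrative only).

    x: (d,) feature vector for one interaction sample.
    expert_weights: list of (d, d_out) matrices, one linear "expert" each.
    gate_weights: (d, n_experts) matrix producing gating logits.
    Only the top-k experts are selected and softmax-combined per sample.
    """
    logits = x @ gate_weights                      # (n_experts,) gating scores
    topk = np.argsort(logits)[-k:]                 # indices of the k best experts
    g = np.exp(logits[topk] - logits[topk].max())  # numerically stable softmax
    g /= g.sum()                                   # weights over selected experts
    # weighted combination of the selected experts' outputs
    out = sum(w * (x @ expert_weights[i]) for w, i in zip(g, topk))
    return out, topk

# toy usage: 3 experts, 4-dim input, 2-dim output
rng = np.random.default_rng(0)
x = rng.normal(size=4)
experts = [rng.normal(size=(4, 2)) for _ in range(3)]
gate = rng.normal(size=(4, 3))
y, chosen = sparse_moe_denoise(x, experts, gate, k=2)
```

Because only k of the experts are evaluated per sample, the gating is sparse: different samples can route to different expert subsets, which is what enables sample-level adaptive denoising while keeping inference cost roughly independent of the total number of experts.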