🤖 AI Summary
This work addresses scalability and hyperparameter adaptation in distributed Gaussian process (GP) learning for multi-agent systems, proposing the first fully decentralized stochastic-feature GP inference framework. Methodologically, it integrates decentralized optimization, distributed consensus algorithms, and online Bayesian model averaging so that the decentralized computation of the random-feature GP posterior is asymptotically exact; it additionally introduces an online ensembling strategy for multiple kernel learning, enabling adaptive hyperparameter selection and dynamic kernel composition. In both synthetic and real-world experiments, the framework shows significant improvements over existing distributed Bayesian and frequentist approaches in prediction accuracy, communication efficiency, and adaptability to environmental change, thereby easing the scalability bottleneck of distributed Bayesian learning in multi-agent settings.
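The random-feature approximation at the heart of this line of work can be illustrated with a short, hedged sketch: random Fourier features turn GP regression with an RBF kernel into Bayesian linear regression in a finite feature space. All names, shapes, and hyperparameters below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of random-feature GP regression (random Fourier features
# for an RBF kernel). Illustrative only; not the authors' code.
import numpy as np

def rff_features(X, omega, b):
    """Map inputs to D random Fourier features approximating an RBF kernel."""
    D = omega.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

rng = np.random.default_rng(0)
n, d, D = 200, 2, 300            # samples, input dim, number of random features
lengthscale, noise_var = 0.5, 0.1

X = rng.uniform(-1, 1, (n, d))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# Spectral samples for k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
omega = rng.standard_normal((d, D)) / lengthscale
b = rng.uniform(0, 2 * np.pi, D)

Phi = rff_features(X, omega, b)                   # (n, D) feature matrix
# Bayesian linear regression in feature space ~= GP posterior under the
# approximated kernel: mean weights solve (Phi'Phi + noise_var I) w = Phi'y.
A = Phi.T @ Phi + noise_var * np.eye(D)
w_mean = np.linalg.solve(A, Phi.T @ y)

X_test = rng.uniform(-1, 1, (10, d))
f_mean = rff_features(X_test, omega, b) @ w_mean  # approximate predictive mean
```

A structural property worth noting, and presumably what a consensus-based decentralized scheme exploits: the sufficient statistics `Phi.T @ Phi` and `Phi.T @ y` are sums over data points, so when data is partitioned across agents, average-consensus rounds over the communication graph can recover the centralized posterior without exchanging raw observations.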
📝 Abstract
Flexible and scalable decentralized learning solutions are fundamentally important for applications of multi-agent systems. While several recent approaches introduce (ensembles of) kernel machines in the distributed setting, Bayesian solutions are much more limited. We introduce a fully decentralized, asymptotically exact solution to computing the random feature approximation of Gaussian processes. We further address the choice of hyperparameters by introducing an ensembling scheme for Bayesian multiple kernel learning based on online Bayesian model averaging. The resulting algorithm is tested against Bayesian and frequentist methods on simulated and real-world datasets.
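As a hedged illustration of the online Bayesian model averaging idea used for kernel selection: each candidate kernel model is weighted by how well it predicts incoming data, with weights updated multiplicatively by predictive likelihood. The model bank, predictive distributions, and numbers below are assumptions for illustration, not the authors' algorithm.

```python
# Sketch of one online Bayesian model averaging step over a bank of
# candidate kernel models. Placeholder values; not the paper's API.
import numpy as np
from scipy.stats import norm

def bma_update(log_w, y_t, pred_means, pred_stds):
    """Reweight models by their predictive log-likelihood of y_t."""
    log_w = log_w + norm.logpdf(y_t, loc=pred_means, scale=pred_stds)
    log_w -= np.max(log_w)              # numerical stability
    w = np.exp(log_w)
    return np.log(w / w.sum())          # renormalized log-weights

K = 4                                   # candidate kernels / hyperparameters
log_w = np.full(K, -np.log(K))          # uniform prior over models

# At each round: models report predictive means/stds at x_t, the ensemble
# prediction is the weight-averaged mean, then weights are refreshed once
# y_t is revealed. The arrays below are made-up example values.
pred_means = np.array([0.1, 0.0, -0.2, 0.3])
pred_stds = np.array([0.5, 0.4, 0.6, 0.5])
ensemble_mean = np.exp(log_w) @ pred_means
log_w = bma_update(log_w, y_t=0.05, pred_means=pred_means, pred_stds=pred_stds)
```

The multiplicative likelihood update means models whose kernels match the data accumulate weight over time, giving the adaptive hyperparameter selection the abstract describes without a separate optimization loop.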