Exploiting Expertise of Non-Expert and Diverse Agents in Social Bandit Learning: A Free Energy Approach

📅 2026-03-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge in social multi-armed bandit settings where agents struggle to effectively leverage behavioral information from non-expert or heterogeneous peers without access to others’ reward signals. The authors propose a free-energy-based social learning algorithm that, for the first time, enables agents to automatically evaluate and integrate policies from relevant peers in the absence of observed rewards, without requiring prior knowledge or predefined social norms. By modeling interactions in policy space, the method selectively filters useful information and demonstrates significant performance gains over existing approaches in environments comprising experts, non-experts, and even random agents. Theoretically, the algorithm maintains a logarithmic regret bound and guarantees convergence, offering both empirical effectiveness and formal performance assurances.
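As background, a standard variational free-energy criterion over policies (stated here as an illustration; the paper's exact formulation may differ) trades off expected value against divergence from a socially informed prior built from peers' estimated policies:

```latex
% Illustrative free-energy objective over the policy simplex (notation assumed,
% not necessarily the paper's): Q is the agent's own value estimate,
% \pi_{\mathrm{social}} a prior built from peers' estimated policies,
% and \beta an inverse temperature.
\mathcal{F}(\pi) \;=\; -\,\mathbb{E}_{a\sim\pi}\!\left[Q(a)\right]
\;+\; \frac{1}{\beta}\,\mathrm{KL}\!\left(\pi \,\|\, \pi_{\mathrm{social}}\right),
\qquad
\pi^{\star}(a) \;\propto\; \pi_{\mathrm{social}}(a)\, e^{\beta Q(a)}.
```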

πŸ“ Abstract
Personalized AI-based services involve a population of individual reinforcement learning agents. However, most reinforcement learning algorithms focus on harnessing individual learning and fail to leverage the social learning capabilities commonly exhibited by humans and animals. Social learning integrates individual experience with observing others' behavior, presenting opportunities for improved learning outcomes. In this study, we focus on a social bandit learning scenario where a social agent observes other agents' actions without knowledge of their rewards. The agents independently pursue their own policies without explicit motivation to teach each other. We propose a free energy-based social bandit learning algorithm over the policy space, where the social agent evaluates others' expertise levels without resorting to any oracle or social norms. Accordingly, the social agent integrates its direct experiences in the environment with others' estimated policies. The theoretical convergence of our algorithm to the optimal policy is proven. Empirical evaluations validate the superiority of our social learning method over alternative approaches in various scenarios. Our algorithm strategically identifies the relevant agents, even in the presence of random or suboptimal agents, and skillfully exploits their behavioral information. Beyond societies that include expert agents, our algorithm significantly enhances individual learning performance in the presence of relevant but non-expert agents, where most related methods fail. Importantly, it also maintains logarithmic regret.
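A minimal, self-contained Python sketch of the idea, not the authors' algorithm: the social agent estimates each peer's policy from its observed actions only, weights peers by an assumed relevance score (how well each estimated peer policy aligns with the agent's own value estimates), and samples arms from the policy minimizing the free-energy trade-off above. The Bernoulli environment, simulated peers, the inverse temperature `beta`, and the relevance score are all illustrative assumptions.

```python
# Minimal sketch (not the authors' exact algorithm): a free-energy-style
# social bandit learner that observes peers' actions, never their rewards.
import numpy as np

rng = np.random.default_rng(0)

n_arms = 10
true_means = rng.uniform(0, 1, n_arms)          # hidden Bernoulli reward means
horizon = 2000
beta = 5.0                                      # inverse temperature (assumed hyperparameter)

# Simulated peers: one near-expert, one relevant but suboptimal, one random agent.
peer_policies = [
    np.eye(n_arms)[np.argmax(true_means)] * 0.9 + 0.1 / n_arms,
    np.eye(n_arms)[np.argsort(true_means)[-3]] * 0.9 + 0.1 / n_arms,
    np.full(n_arms, 1.0 / n_arms),
]

counts = np.zeros(n_arms)                        # own pull counts
q_hat = np.zeros(n_arms)                         # empirical mean reward per arm
peer_counts = np.zeros((len(peer_policies), n_arms))  # observed peer action counts

for t in range(horizon):
    # Estimate each peer's policy from its observed actions (Laplace-smoothed).
    peer_hat = (peer_counts + 1.0) / (peer_counts.sum(axis=1, keepdims=True) + n_arms)

    # Relevance weight per peer: softmax of expected own-value under the peer's
    # estimated policy (an assumed proxy for expertise, not the paper's exact score).
    relevance = np.exp(beta * (peer_hat @ q_hat))
    relevance /= relevance.sum()

    # Social prior = relevance-weighted mixture of estimated peer policies.
    prior = relevance @ peer_hat

    # Free-energy-minimising policy: pi(a) ∝ prior(a) * exp(beta * Q(a)).
    logits = np.log(prior + 1e-12) + beta * q_hat
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()

    # Act, observe own reward, and observe peers' actions (their rewards stay hidden).
    a = rng.choice(n_arms, p=pi)
    r = rng.binomial(1, true_means[a])
    counts[a] += 1
    q_hat[a] += (r - q_hat[a]) / counts[a]
    for k, peer_pi in enumerate(peer_policies):
        peer_counts[k, rng.choice(n_arms, p=peer_pi)] += 1

print("best arm:", np.argmax(true_means), "| most-played arm:", np.argmax(counts))
```

The closed form pi(a) ∝ prior(a)·exp(β·Q(a)) follows from minimizing -E_π[Q] + (1/β)·KL(π‖prior) over the simplex; under this assumed relevance weighting, a random or irrelevant peer contributes a near-uniform prior and so barely shifts the agent's own softmax policy.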
Problem

Research questions and friction points this paper is trying to address.

social bandit learning
non-expert agents
social learning
reinforcement learning
policy evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

social bandit learning
free energy principle
non-expert agents
policy integration
logarithmic regret
Erfan Mirzaei
Ph.D. Researcher, Istituto Italiano di Tecnologia
Statistical Learning, Computational Neuroscience
Seyed Pooya Shariatpanahi
Cognitive Systems Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
Alireza Tavakoli
Cognitive Systems Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
Reshad Hosseini
Associate Professor, Machine Learning and Robotics Group, University of Tehran
Machine Learning, Machine Vision, Manifold Optimization
Majid Nili Ahmadabadi
Professor of ECE, University of Tehran
Reinforcement Learning, Social Learning, Cognitive Modeling, Robotics