AI Summary
To address low training efficiency, high data and time costs, and severe policy staleness in federated reinforcement learning under heterogeneous computing environments, this paper proposes the Asynchronous Federated Policy Gradient (AFedPG) framework. AFedPG enables collaborative training of a global policy across $N$ heterogeneous agents and introduces the first lookahead mechanism tailored for federated reinforcement learning (FedRL), which adaptively compensates for the policy staleness induced by asynchronous updates. It establishes the first global convergence theory for asynchronous FedRL, yielding an $O(\varepsilon^{-2.5}/N)$ sample complexity bound and an $O\big((\sum_{i=1}^{N} 1/t_i)^{-1}\big)$ time complexity bound, both demonstrating linear speedup in $N$. Experiments on four MuJoCo benchmark tasks show that AFedPG significantly outperforms existing baselines, achieving substantial improvements in wall-clock time efficiency under computational heterogeneity. Theoretical guarantees align closely with empirical results.
Abstract
To improve the efficiency of reinforcement learning (RL), we propose a novel asynchronous federated reinforcement learning (FedRL) framework termed AFedPG, which constructs a global model through collaboration among $N$ agents using policy gradient (PG) updates. To address the challenge of lagged policies in asynchronous settings, we design a delay-adaptive lookahead technique *specifically for FedRL* that can effectively handle heterogeneous arrival times of policy gradients. We analyze the theoretical global convergence bound of AFedPG and characterize the advantage of the proposed algorithm in terms of both sample complexity and time complexity. Specifically, our AFedPG method achieves $O(\frac{\epsilon^{-2.5}}{N})$ sample complexity for global convergence at each agent on average. Compared to the single-agent setting with $O(\epsilon^{-2.5})$ sample complexity, it enjoys a linear speedup with respect to the number of agents. Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from $O(\frac{t_{\max}}{N})$ to $O\big((\sum_{i=1}^{N} \frac{1}{t_{i}})^{-1}\big)$, where $t_{i}$ denotes the per-iteration time at agent $i$ and $t_{\max}$ is the largest among them. The latter complexity is never larger than the former, and the improvement becomes significant in large-scale federated settings with heterogeneous computing powers ($t_{\max} \gg t_{\min}$). Finally, we empirically verify the improved performance of AFedPG in four widely used MuJoCo environments with varying numbers of agents. We also demonstrate the advantages of AFedPG in various computing heterogeneity scenarios.
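To make the time-complexity comparison concrete, the following is a minimal illustrative sketch (not code from the paper) that evaluates both expressions for a hypothetical set of per-iteration times $t_i$. In the synchronous scheme every round waits for the slowest agent, so the average wall-clock time per gradient update is $t_{\max}/N$; in the asynchronous scheme agent $i$ contributes a gradient every $t_i$ seconds, so the average time per update is $(\sum_i 1/t_i)^{-1}$.

```python
# Illustrative arithmetic only; the t values below are hypothetical.
t = [1.0, 2.0, 4.0, 8.0]  # per-iteration times for N = 4 heterogeneous agents
N = len(t)

# Synchronous FedPG: each round costs t_max, yielding N gradients per round.
sync_per_update = max(t) / N  # O(t_max / N)

# Asynchronous AFedPG: agent i delivers a gradient every t_i seconds,
# so the combined update rate is sum(1 / t_i).
async_per_update = 1.0 / sum(1.0 / ti for ti in t)  # O((sum 1/t_i)^{-1})

print(f"sync:  {sync_per_update:.4f} s per update")   # 2.0000
print(f"async: {async_per_update:.4f} s per update")  # 0.5333

# Since sum(1/t_i) >= N / t_max, the asynchronous cost is never larger.
assert async_per_update <= sync_per_update
```

Here the asynchronous scheme is roughly 3.75x faster per update; the two quantities coincide only when all agents are equally fast ($t_1 = \dots = t_N$), which is exactly why the gain grows with heterogeneity ($t_{\max} \gg t_{\min}$).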