Harnessing Data from Clustered LQR Systems: Personalized and Collaborative Policy Optimization

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In reinforcement learning, the absence of a known process model makes it hard to identify similar tasks and to reuse data across them. To address this, we propose a collaborative clustering RL framework for multi-agent linear quadratic regulation (LQR). Methodologically, we are the first to embed dynamic clustering into data-driven control, jointly optimizing policies and cluster structure; we couple distributed zeroth-order policy optimization with a sequential elimination mechanism, incurring only logarithmic communication overhead. Theoretically, we prove that correct clustering is achieved with high probability and that the suboptimality of each cluster's learned policy scales inversely with the cluster size. Empirically, under a closed-loop performance-discrepancy metric, the learned controllers deliver significant sample-efficiency and statistical gains while scaling well with the number of agents.

📝 Abstract
It is known that reinforcement learning (RL) is data-hungry. To improve the sample efficiency of RL, it has been proposed that the learning algorithm utilize data from 'approximately similar' processes. However, since the process models are unknown, identifying which other processes are similar poses a challenge. In this work, we study this problem in the context of the benchmark Linear Quadratic Regulator (LQR) setting. Specifically, we consider a setting with multiple agents, each corresponding to a copy of a linear process to be controlled. The agents' local processes can be partitioned into clusters based on similarities in dynamics and tasks. Combining ideas from sequential elimination and zeroth-order policy optimization, we propose a new algorithm that performs simultaneous clustering and learning to output a personalized policy (controller) for each cluster. Under a suitable notion of cluster separation that captures differences in closed-loop performance across systems, we prove that our approach guarantees correct clustering with high probability. Furthermore, we show that the sub-optimality gap of the policy learned for each cluster scales inversely with the size of the cluster, with no additional bias, unlike in prior works on collaborative learning-based control. Our work is the first to reveal how clustering can be used in data-driven control to learn personalized policies that enjoy statistical gains from collaboration but do not suffer sub-optimality due to inclusion of data from dissimilar processes. From a distributed implementation perspective, our method is attractive as it incurs only a mild logarithmic communication overhead.
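
To make the zeroth-order ingredient concrete, here is a hedged, minimal sketch on a scalar LQR instance: the policy u_t = -k x_t is improved using only cost evaluations (no model of the dynamics), via a two-point Gaussian-smoothing gradient estimate. The system parameters, step size, smoothing radius, and horizon below are invented for illustration and are not the paper's actual algorithm or constants.

```python
import numpy as np

def lqr_cost(k, a=1.2, b=1.0, q=1.0, r=1.0, x0=1.0, horizon=50):
    """Finite-horizon LQR cost of the state-feedback policy u_t = -k * x_t."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        cost += q * x**2 + r * u**2
        x = a * x + b * u
    return cost

def zeroth_order_step(k, lr=0.02, delta=0.05, n_dirs=10, rng=None):
    """One policy update from a two-point zeroth-order gradient estimate:
    only cost evaluations are used, never the model (a, b)."""
    rng = rng if rng is not None else np.random.default_rng()
    grad = 0.0
    for _ in range(n_dirs):
        u = rng.standard_normal()  # random search direction
        grad += u * (lqr_cost(k + delta * u) - lqr_cost(k - delta * u)) / (2 * delta)
    return k - lr * grad / n_dirs

rng = np.random.default_rng(0)
k = 0.7                      # stabilizing initial gain: |a - b*k| < 1
initial_cost = lqr_cost(k)
for _ in range(300):
    k = zeroth_order_step(k, rng=rng)
final_cost = lqr_cost(k)     # lower than initial_cost after the updates
```

Per the abstract, agents within the same cluster would average such gradient estimates, which is where the inverse-in-cluster-size statistical gain comes from; this single-agent sketch shows only the gradient-free update itself.
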
Problem

Research questions and friction points this paper is trying to address.

Identifying similar processes among multiple agents with unknown dynamics
Simultaneously clustering agents and learning personalized control policies
Eliminating performance bias from dissimilar processes in collaborative learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simultaneous clustering and learning algorithm
Personalized policy optimization per cluster
Logarithmic communication overhead implementation
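The sequential-elimination idea listed above can be sketched as follows. This is a hedged toy version, not the paper's algorithm: agents are reduced to scalar "closed-loop performance" scores, the noise level, the separation between clusters, and the shrinking threshold schedule are all invented for illustration. Each agent starts by treating every other agent as a candidate peer and eliminates a peer once the empirical pairwise discrepancy exceeds a confidence threshold that shrinks as more samples arrive.

```python
import numpy as np

def sequential_elimination(perf, noise=0.05, rounds=6, rng=None):
    """Toy sequential elimination: each agent keeps a set of candidate peers
    and drops a peer once the estimated performance gap exceeds a shrinking
    confidence radius. Sample sizes double each round, so only O(log) rounds
    (and hence communication exchanges) are needed."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(perf)
    peers = {i: set(range(n)) for i in range(n)}
    for r in range(1, rounds + 1):
        n_samples = 2 ** r                         # doubling sample schedule
        eps = 5 * noise / np.sqrt(n_samples)       # shrinking confidence radius
        for i in range(n):
            for j in list(peers[i]):
                if j == i:
                    continue
                # noisy estimate of the pairwise performance discrepancy
                gap_est = abs(perf[i] - perf[j]) + noise * rng.standard_normal(n_samples).mean()
                if gap_est > eps:
                    peers[i].discard(j)
    return peers

# Two well-separated clusters of agents (scores invented for the example).
perf = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
clusters = sequential_elimination(perf, noise=0.05, rng=np.random.default_rng(1))
```

With well-separated clusters, cross-cluster peers are eliminated in the first rounds while same-cluster peers survive with high probability, mirroring the correct-clustering guarantee claimed in the abstract; the doubling sample schedule is one simple way to realize the logarithmic communication overhead.
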