ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated LoRA faces three key challenges under client heterogeneity: (1) random, non-orthogonal initialization causes subspace misalignment across clients and training instability; (2) aggregating LoRA adapters with heterogeneous ranks induces rank incompatibility and biases the global model; and (3) non-IID data exacerbates client drift, degrading generalization. To address these, we propose ILoRA: (1) it enforces client subspace consistency via QR-based orthonormal initialization; (2) it introduces a concatenation-based QR aggregation mechanism to losslessly fuse multi-rank LoRA updates; and (3) it employs an AdamW optimizer with rank-aware control variates to mitigate client drift. We provide theoretical convergence guarantees. Extensive experiments on multiple vision and NLP federated benchmarks demonstrate that ILoRA significantly outperforms existing federated LoRA methods, achieving accuracy gains of 2.1–4.7% while ensuring more stable convergence.
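The summary's third component pairs AdamW with a control-variate correction to counter client drift. The paper's exact "rank-aware" formulation is not given on this page; the sketch below combines a SCAFFOLD-style correction (subtract the local control variate `c_local`, add the global one `c_global`) with a plain AdamW step as a rough, hypothetical illustration.

```python
import numpy as np

class CorrectedAdamW:
    """AdamW step with a SCAFFOLD-style control-variate correction.

    Hypothetical sketch of ILoRA's drift mitigation: the drift-corrected
    gradient g = grad - c_local + c_global feeds the usual AdamW moments.
    The paper's rank-aware weighting of the variates is NOT reproduced here.
    """

    def __init__(self, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, wd=0.01):
        self.lr, self.betas, self.eps, self.wd = lr, betas, eps, wd
        self.m = None  # first-moment estimate
        self.v = None  # second-moment estimate
        self.t = 0     # step counter for bias correction

    def step(self, param, grad, c_local, c_global):
        # SCAFFOLD-style correction: g_i - c_i + c
        g = grad - c_local + c_global
        if self.m is None:
            self.m = np.zeros_like(param)
            self.v = np.zeros_like(param)
        self.t += 1
        b1, b2 = self.betas
        self.m = b1 * self.m + (1 - b1) * g
        self.v = b2 * self.v + (1 - b2) * g * g
        m_hat = self.m / (1 - b1 ** self.t)
        v_hat = self.v / (1 - b2 ** self.t)
        # Decoupled weight decay, as in AdamW
        return param - self.lr * (m_hat / (np.sqrt(v_hat) + self.eps)
                                  + self.wd * param)
```

With `c_local = c_global = 0` this reduces to standard AdamW; the variates only reshape the gradient before the moment updates.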

📝 Abstract
Federated Learning with Low-Rank Adaptation (LoRA) faces three critical challenges under client heterogeneity: (1) Initialization-Induced Instability due to random initialization misaligning client subspaces; (2) Rank Incompatibility and Aggregation Error when averaging LoRA parameters of different ranks, which biases the global model; and (3) exacerbated Client Drift under Non-IID Data, impairing generalization. To address these challenges, we propose ILoRA, a unified framework that integrates three core innovations: a QR-based orthonormal initialization to ensure all clients start in a coherent subspace; a Concatenated QR Aggregation mechanism that fuses heterogeneous-rank updates via concatenation and decomposition, preserving information while maintaining dimension alignment; and an AdamW optimizer with rank-aware control variates to correct local updates and mitigate client drift. Supported by theoretical convergence guarantees, extensive experiments on vision and NLP benchmarks demonstrate that ILoRA consistently achieves superior accuracy and convergence stability compared to existing federated LoRA methods.
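The abstract's Concatenated QR Aggregation fuses updates of different ranks by concatenation and decomposition. The page does not spell out the mechanism, so the following is a plausible numpy sketch under standard LoRA conventions: client i holds factors B_i (d × r_i) and A_i (r_i × k), the weighted factors are concatenated along the rank dimension, and a QR step plus SVD truncation compresses the fused update back to a target rank. The function name and the SVD truncation step are assumptions, not the paper's verified algorithm.

```python
import numpy as np

def concat_qr_aggregate(client_Bs, client_As, weights, target_rank):
    """Fuse heterogeneous-rank LoRA updates (hypothetical sketch).

    The weighted sum  sum_i w_i * B_i @ A_i  equals  B_cat @ A_cat  when
    B_cat stacks w_i * B_i column-wise and A_cat stacks A_i row-wise,
    so the fusion is lossless whenever target_rank >= sum of client ranks.
    """
    # B_cat: d x R, A_cat: R x k, with R = sum of client ranks
    B_cat = np.concatenate([w * B for w, B in zip(weights, client_Bs)], axis=1)
    A_cat = np.concatenate(client_As, axis=0)

    # QR gives an orthonormal basis Q for the fused column space
    Q, R = np.linalg.qr(B_cat)          # B_cat = Q @ R
    M = R @ A_cat                       # fused update expressed in the Q basis

    # Compress back to target_rank via SVD of the small R x k matrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = min(target_rank, len(s))
    B_glob = (Q @ U[:, :r]) * s[:r]     # d x r (columns scaled by singular values)
    A_glob = Vt[:r]                     # r x k
    return B_glob, A_glob
```

Because the SVD runs on the small R × k matrix rather than the full d × k update, the compression stays cheap even for large model dimensions d.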
Problem

Research questions and friction points this paper is trying to address.

Addresses initialization instability in federated LoRA under client heterogeneity
Solves rank incompatibility and aggregation errors in heterogeneous LoRA parameters
Mitigates client drift and improves generalization with non-IID data
Innovation

Methods, ideas, or system contributions that make the work stand out.

QR-based orthonormal initialization for coherent subspace alignment
Concatenated QR aggregation for heterogeneous-rank parameter fusion
AdamW optimizer with rank-aware control variates
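The first innovation, QR-based orthonormal initialization, aims to start every client in a coherent subspace. The page gives no formula, so the sketch below is one assumed realization: each client QR-decomposes the same seeded Gaussian draw, uses the orthonormal factor for A, and zeros B so the initial update B @ A is zero, as in standard LoRA. The shared-seed convention is an assumption for illustration.

```python
import numpy as np

def qr_orthonormal_init(d, k, rank, seed=0):
    """Hypothetical QR-based orthonormal LoRA initialization.

    Clients sharing `seed` obtain identical A factors with orthonormal
    rows, so all local adapters start aligned in the same subspace;
    B = 0 keeps the initial update B @ A at zero.
    """
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((k, rank))  # random Gaussian draw
    Q, _ = np.linalg.qr(G)              # Q: k x rank, orthonormal columns
    A = Q.T                             # rank x k, orthonormal rows
    B = np.zeros((d, rank))
    return B, A
```

Orthonormal rows of A (A @ A.T = I) keep the client subspaces well conditioned at the start of training, which is the stability property the Problem section attributes to this initialization.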
Junchao Zhou
College of Intelligence and Computing, Tianjin University
Junkang Liu
College of Intelligence and Computing, Tianjin University
Fanhua Shang
Professor at Tianjin University
Machine Learning · Data Mining · Computer Vision