Resource-Adaptive Federated Text Generation with Differential Privacy

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of differentially private synthetic text generation in cross-institutional federated learning, where stringent privacy constraints and heterogeneous computational resources often exclude weak clients, exacerbating data skew and noise. To overcome this, the authors propose a resource-adaptive two-stage federated text generation framework: strong clients perform differentially private fine-tuning, while weak clients contribute via a lightweight differentially private voting mechanism. Semantic consistency is further enforced through control codes (e.g., topic or metadata), enabling all clients to jointly shape the output with only a single round of communication. This approach is the first to achieve full-client participation under differential privacy in heterogeneous settings, significantly improving global distribution alignment of synthetic data and robustness on downstream tasks without compromising privacy guarantees.
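The control-code idea in the summary can be illustrated with a small sketch. The function names, code vocabulary, and prompt template below are hypothetical, not from the paper: the point is only that each code (label, topic, metadata) is sampled in proportion to the clients' data mix and prepended to the prompt, so the language model conditions on it and the synthetic corpus mirrors the global distribution.

```python
import random

def sample_control_code(code_proportions, rng=None):
    """Sample a control code according to the (global) data proportions.

    code_proportions: dict mapping code -> fraction of the global data,
    e.g. aggregated privately from per-client label counts (assumption).
    """
    rng = rng or random.Random()
    codes, weights = zip(*code_proportions.items())
    return rng.choices(codes, weights=weights, k=1)[0]

def build_prompt(code, instruction="Write a short document about"):
    # Prepend the control code so generation is conditioned on it
    # (hypothetical prompt format; the paper's template may differ).
    return f"<{code}> {instruction} {code}."

# Example: a made-up three-topic mix.
props = {"oncology": 0.5, "cardiology": 0.3, "neurology": 0.2}
prompt = build_prompt(sample_control_code(props, random.Random(0)))
```

Constraining the weak clients' voting to candidates sharing the same control code is what the abstract calls restricting votes to "semantically coherent subsets".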

📝 Abstract
In cross-silo federated learning (FL), sensitive text datasets remain confined to local organizations due to privacy regulations, making repeated training for each downstream task both communication-intensive and privacy-demanding. A promising alternative is to generate differentially private (DP) synthetic datasets that approximate the global distribution and can be reused across tasks. However, pretrained large language models (LLMs) often fail under domain shift, and federated finetuning is hindered by computational heterogeneity: only resource-rich clients can update the model, while weaker clients are excluded, amplifying data skew and the adverse effects of DP noise. We propose a flexible participation framework that adapts to client capacities. Strong clients perform DP federated finetuning, while weak clients contribute through a lightweight DP voting mechanism that refines synthetic text. To ensure the synthetic data mirrors the global dataset, we apply control codes (e.g., labels, topics, metadata) that represent each client's data proportions and constrain voting to semantically coherent subsets. This two-phase approach requires only a single round of communication for weak clients and integrates contributions from all participants. Experiments show that our framework improves distribution alignment and downstream robustness under DP and heterogeneity.
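The weak clients' contribution can be sketched as a noisy-vote aggregation. This is a minimal illustration, not the paper's actual algorithm: assume each weak client casts one vote per round for the candidate synthetic text closest to its local data, and the server releases only the argmax of Laplace-noised counts (report-noisy-max, which is ε-DP for counting queries).

```python
import numpy as np

def dp_vote(client_votes, epsilon, rng=None):
    """Select a winning synthetic candidate via report-noisy-max.

    client_votes: (n_clients, n_candidates) 0/1 array; each client
    votes for at most one candidate (so each count has sensitivity 1).
    Adding Laplace(1/epsilon) noise to every count and releasing only
    the argmax satisfies epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    counts = client_votes.sum(axis=0).astype(float)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return int(np.argmax(noisy))

# Example: 10 weak clients all prefer candidate 2.
votes = np.array([[0, 0, 1]] * 10)
winner = dp_vote(votes, epsilon=50.0, rng=np.random.default_rng(0))
```

Because each weak client only submits votes once, this fits the abstract's claim of a single communication round for weak clients; the strong clients' DP fine-tuning proceeds separately.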
Problem

Research questions and friction points this paper is trying to address.

federated learning
differential privacy
resource heterogeneity
synthetic text generation
data skew
Innovation

Methods, ideas, or system contributions that make the work stand out.

resource-adaptive federated learning
differential privacy
synthetic text generation
client heterogeneity
control codes
Jiayi Wang
Oak Ridge National Laboratory, Oak Ridge, TN, USA
John Gounley
Oak Ridge National Laboratory, Oak Ridge, TN, USA
Heidi Hanson
Oak Ridge National Laboratory, Oak Ridge, TN, USA