Federated Learning Meets LLMs: Feature Extraction From Heterogeneous Clients

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of aligning multi-source tabular data in federated learning, where schema heterogeneity and incompatible feature spaces impede effective integration, this paper proposes a large language model (LLM)-based semantic alignment framework. Each client locally serializes its tabular data into textual sequences and extracts privacy-preserving semantic embeddings using lightweight pre-trained LLMs (e.g., DistilBERT, ALBERT, RoBERTa, ClinicalBERT), enabling fully automated feature alignment with no human in the loop. A lightweight classifier is then trained on these embeddings via FedAvg. Unlike conventional schema-matching approaches, the method requires no explicit schema coordination, substantially improving robustness and generalization under heterogeneity. Evaluated on coronary heart disease prediction, it achieves up to a 0.25 improvement in F1-score, reduces communication overhead by 65%, and maintains stable performance even under extreme schema divergence.
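The serialization step described above can be sketched in a few lines. This is an illustrative assumption of the row-to-text template (the paper does not publish its exact format): each column name and value is rendered as a short clause, producing a sentence-like sequence that any pre-trained LLM tokenizer can consume.

```python
# Hedged sketch of tabular-to-text serialization for LLM feature
# extraction. The field names and the "key is value" template are
# illustrative assumptions, not FedLLM-Align's exact format.

def serialize_record(record: dict) -> str:
    """Render a tabular row as a text sequence for an LLM tokenizer."""
    parts = [f"{key.replace('_', ' ')} is {value}" for key, value in record.items()]
    return ", ".join(parts) + "."

# Hypothetical coronary-heart-disease record, schema chosen for illustration.
row = {"age": 54, "systolic_bp": 140, "smoker": "yes", "cholesterol": 230}
text = serialize_record(row)
# → "age is 54, systolic bp is 140, smoker is yes, cholesterol is 230."
```

Because each client serializes its own schema independently, two clients with different column sets still produce text in a shared vocabulary, which is what lets a pre-trained LLM map them into one embedding space.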

📝 Abstract
Federated learning (FL) enables collaborative model training without sharing raw data, making it attractive for privacy-sensitive domains such as healthcare, finance, and IoT. A major obstacle, however, is the heterogeneity of tabular data across clients, where divergent schemas and incompatible feature spaces prevent straightforward aggregation. To address this challenge, we propose FedLLM-Align, a federated framework that leverages pre-trained large language models (LLMs) as universal feature extractors. Tabular records are serialized into text, and embeddings from models such as DistilBERT, ALBERT, RoBERTa, and ClinicalBERT provide semantically aligned representations that support lightweight local classifiers under the standard FedAvg protocol. This approach removes the need for manual schema harmonization while preserving privacy, since raw data remain strictly local. We evaluate FedLLM-Align on coronary heart disease prediction using partitioned Framingham datasets with simulated schema divergence. Across all client settings and LLM backbones, our method consistently outperforms state-of-the-art baselines, achieving up to +0.25 improvement in F1-score and a 65% reduction in communication cost. Stress testing under extreme schema divergence further demonstrates graceful degradation, unlike traditional methods that collapse entirely. These results establish FedLLM-Align as a robust, privacy-preserving, and communication-efficient solution for federated learning in heterogeneous environments.
Problem

Research questions and friction points this paper is trying to address.

Addressing tabular data heterogeneity in federated learning
Leveraging LLMs as universal feature extractors for alignment
Enabling privacy-preserving collaboration without schema harmonization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs as universal feature extractors
Serializes tabular data into text embeddings
Enables federated learning without schema harmonization
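The third point rests on the standard FedAvg protocol: because every client's classifier consumes same-dimensional LLM embeddings, the server can average client parameters directly, with no schema mapping. A minimal sketch of that aggregation step, using plain Python lists in place of real model tensors (the embedding extraction and local training are omitted):

```python
# Hedged sketch of FedAvg aggregation for the lightweight classifier
# heads. Plain-Python stand-in for the real tensor averaging; the
# weighting by local dataset size follows the standard FedAvg recipe.

def fedavg(client_weights: list[list[float]], client_sizes: list[int]) -> list[float]:
    """Dataset-size-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical clients with 1 and 3 local samples respectively.
global_w = fedavg([[1.0, 0.0], [3.0, 2.0]], [1, 3])
# → [2.5, 1.5]
```

Only these small classifier parameters cross the network, never the raw records or the LLM itself, which is consistent with the reported 65% reduction in communication cost relative to schemes that exchange larger models.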
👥 Authors
Abdelrhman Gaber
Computer Science and Engineering Dept., American University in Cairo, Cairo, Egypt
Hassan Abd-Eltawab
Computer Science and Engineering Dept., American University in Cairo, Cairo, Egypt
Youssif Abuzied
Computer Science and Engineering Dept., American University in Cairo, Cairo, Egypt
Muhammad ElMahdy
Computer Science and Engineering Dept., American University in Cairo, Cairo, Egypt
Tamer ElBatt
Professor, Wireless Networks and Mobile Computing, The American University in Cairo
Wireless and Mobile Networks, Modeling, Performance Analysis, Optimization, IoT