🤖 AI Summary
To address the challenge of aligning multi-source tabular data in federated learning—where schema heterogeneity and incompatible feature spaces impede effective integration—this paper proposes a large language model (LLM)-based semantic alignment framework. Clients locally serialize tabular data into textual sequences and extract privacy-preserving semantic embeddings using lightweight pre-trained LLMs (e.g., DistilBERT, ALBERT, RoBERTa, ClinicalBERT), enabling fully automated feature alignment with no human in the loop. A lightweight classifier is then trained collaboratively via FedAvg. Unlike conventional schema-matching approaches, this method eliminates explicit schema coordination, substantially enhancing robustness and generalization under heterogeneity. Evaluated on coronary heart disease prediction, it achieves up to a 0.25 improvement in F1-score, reduces communication overhead by 65%, and maintains stable performance even under extreme schema divergence.
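The serialization step can be illustrated with a minimal sketch: each tabular record is rendered as a short textual sequence that a pre-trained LLM can embed. The "name is value" template and the `serialize_record` helper below are illustrative assumptions; the paper's exact serialization format is not specified here.

```python
def serialize_record(record: dict) -> str:
    """Render a tabular record as a textual sequence for LLM embedding.

    Assumed template: "feature is value" clauses joined into one string,
    with underscores in column names expanded to spaces so the text reads
    naturally to a pre-trained language model.
    """
    return ". ".join(f"{k.replace('_', ' ')} is {v}" for k, v in record.items()) + "."


# Example: two clients with divergent schemas still produce comparable text.
client_a = {"age": 54, "sys_bp": 130}
client_b = {"patient_age": 54, "systolic_pressure": 130}
print(serialize_record(client_a))  # "age is 54. sys bp is 130."
print(serialize_record(client_b))  # "patient age is 54. systolic pressure is 130."
```

Because both serializations describe the same clinical facts in natural language, an LLM encoder can map them to nearby points in embedding space even though the column names never match, which is the intuition behind schema-free alignment.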
📝 Abstract
Federated learning (FL) enables collaborative model training without sharing raw data, making it attractive for privacy-sensitive domains such as healthcare, finance, and IoT. A major obstacle, however, is the heterogeneity of tabular data across clients, where divergent schemas and incompatible feature spaces prevent straightforward aggregation. To address this challenge, we propose FedLLM-Align, a federated framework that leverages pre-trained large language models (LLMs) as universal feature extractors. Tabular records are serialized into text, and embeddings from models such as DistilBERT, ALBERT, RoBERTa, and ClinicalBERT provide semantically aligned representations that support lightweight local classifiers under the standard FedAvg protocol. This approach removes the need for manual schema harmonization while preserving privacy, since raw data remain strictly local. We evaluate FedLLM-Align on coronary heart disease prediction using partitioned Framingham datasets with simulated schema divergence. Across all client settings and LLM backbones, our method consistently outperforms state-of-the-art baselines, achieving up to +0.25 improvement in F1-score and a 65% reduction in communication cost. Stress testing under extreme schema divergence further demonstrates graceful degradation, unlike traditional methods that collapse entirely. These results establish FedLLM-Align as a robust, privacy-preserving, and communication-efficient solution for federated learning in heterogeneous environments.
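The standard FedAvg aggregation referred to above can be sketched as a size-weighted average of each client's classifier parameters. This is a minimal NumPy illustration of the protocol, not the paper's implementation; `fedavg` and its argument names are assumptions for the example.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average per-layer parameters across clients,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    # zip(*client_weights) groups the same layer from every client together.
    return [
        sum(c * layer for c, layer in zip(coeffs, layers))
        for layers in zip(*client_weights)
    ]


# Two clients, one layer each; client B holds 3x as much data as client A.
weights_a = [np.array([0.0, 0.0])]
weights_b = [np.array([4.0, 8.0])]
global_weights = fedavg([weights_a, weights_b], client_sizes=[100, 300])
print(global_weights[0])  # [3. 6.]  (0.25 * A + 0.75 * B)
```

Only these lightweight classifier parameters cross the network; the serialized records and their LLM embeddings stay on each client, which is why raw data remain strictly local.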