🤖 AI Summary
To address the high latency, intermittent connectivity, bandwidth constraints, and data-privacy challenges of real-time traffic forecasting for low-Earth-orbit (LEO) satellite systems within non-terrestrial networks (NTNs), this paper proposes Fed-KAN, presented as the first federated learning framework tailored for NTNs. Fed-KAN integrates the highly expressive Kolmogorov–Arnold Network (KAN), whose learnable spline-based edge functions replace the fixed activations of conventional MLPs, into federated learning in place of Fed-MLP architectures, improving generalization and adaptability under dynamic topologies and weak links. Evaluated on real traffic data from a satellite operator and discussed in the context of the O-RAN architecture, Fed-KAN achieves a 77.39% reduction in average test loss, converges faster, and supports functional-split deployment. This work establishes an efficient, privacy-preserving distributed modeling paradigm for intelligent operations and maintenance in NTNs.
📝 Abstract
Non-Terrestrial Networks (NTNs) are becoming a critical component of modern communication infrastructures, especially with the advent of Low Earth Orbit (LEO) satellite systems. Traditional centralized learning approaches face major challenges in such networks due to high latency, intermittent connectivity, and limited bandwidth. Federated Learning (FL) is a promising alternative as it enables decentralized training while maintaining data privacy. However, existing FL models, such as Federated Learning with Multi-Layer Perceptrons (Fed-MLP), can struggle with high computational complexity and poor adaptability to dynamic NTN environments. This paper provides a detailed analysis of Federated Learning with Kolmogorov-Arnold Networks (Fed-KAN), its implementation, and its performance improvements over traditional FL models for traffic forecasting in NTN environments. The proposed Fed-KAN is a novel approach that utilises the functional approximation capabilities of KANs within an FL framework. We evaluate Fed-KAN against Fed-MLP on traffic data from a real satellite operator and show a significant reduction in training and test loss. Our results show that Fed-KAN achieves a 77.39% reduction in average test loss compared to Fed-MLP, highlighting its improved performance and better generalization ability. At the end of the paper, we also discuss potential applications of Fed-KAN within O-RAN and its use for split functionalities in the NTN architecture.
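The decentralized training described above typically relies on a federated averaging step, in which each satellite (client) trains locally and a coordinator combines the resulting model parameters. As a minimal sketch of that aggregation, here is a generic sample-weighted FedAvg in plain NumPy; the function name, the dict-of-arrays parameter layout, and the weighting scheme are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Sample-weighted federated averaging (FedAvg) sketch.

    client_params: list of dicts mapping parameter name -> np.ndarray,
                   one dict per client (e.g. KAN spline coefficients or
                   MLP weight matrices). All clients share the same keys
                   and array shapes.
    client_sizes:  number of local training samples per client, used to
                   weight each client's contribution.
    Returns the aggregated global parameter dict.
    """
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    global_params = {}
    for name in client_params[0]:
        # Weighted sum of each client's copy of this parameter tensor.
        global_params[name] = sum(
            w * params[name] for w, params in zip(weights, client_params)
        )
    return global_params

# Illustrative round with two clients holding different data volumes:
clients = [
    {"w": np.array([1.0, 2.0])},  # client with 100 local samples
    {"w": np.array([3.0, 4.0])},  # client with 300 local samples
]
agg = fed_avg(clients, [100, 300])  # weights 0.25 and 0.75
```

Only parameters leave each client in such a scheme, which is what preserves data privacy; the same aggregation rule applies whether the local model is an MLP or a KAN, since both reduce to named tensors.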