Efficient Self-Learning and Model Versioning for AI-native O-RAN Edge

📅 2026-01-24
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the challenge of automating the lifecycle management of thousands of machine learning models at the O-RAN edge in AI-native 6G networks, where efficient scaling across multi-layer cloud domains and multi-timescale control loops remains difficult. To this end, the paper proposes a self-learning framework that continuously generates models in central and regional clouds and stores them in a shared version repository. An update manager, guided by a reinforcement learning–driven self-learning policy, dynamically determines the optimal timing and placement for model deployment. Seamless updates are then realized across heterogeneous nodes via container orchestration. By integrating self-learning mechanisms with model versioning, the framework enables automated, cross-domain, and cross-loop model evolution, effectively balancing model accuracy, system stability, and resilience while satisfying quality-of-service and latency constraints.

📝 Abstract
The AI-native vision of 6G requires Radio Access Networks to train, deploy, and continuously refine thousands of machine learning (ML) models that drive real-time radio network optimization. Although the Open RAN (O-RAN) architecture provides open interfaces and an intelligent control plane, it leaves the life-cycle management of these models unspecified. Consequently, operators still rely on ad-hoc, manual update practices that can neither scale across the heterogeneous, multi-layer stack of Cell-Site, Edge-, Regional-, and Central-Cloud domains, nor across the three O-RAN control loops (real-, near-real-, and non-real-time). We present a self-learning framework that provides efficient closed-loop version management for an AI-native O-RAN edge. In this framework, training pipelines in the Central/Regional Cloud continuously generate new models, which are cataloged along with their resource footprints, security scores, and accuracy metrics in a shared version repository. An Update Manager consults this repository and applies a self-learning policy to decide when and where each new model version should be promoted into operation. A container orchestrator then realizes these decisions across heterogeneous worker nodes, enabling multiple services (rApps, xApps, and dApps) to obtain improved inference with minimal disruption. Simulation results show that efficient RL-driven decision-making can guarantee quality of service and bounded latencies while balancing model accuracy, system stability, and resilience.
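The Update Manager described in the abstract can be pictured as a policy that consults the version repository's cataloged metrics (accuracy, resource footprint, security score) and decides which version to promote. The following is a minimal illustrative sketch only, not the paper's actual method or API: the class names, the utility weights, and the epsilon-greedy rule are all assumptions standing in for the learned RL policy.

```python
import random

# Hypothetical sketch: an Update Manager consults a version repository and
# applies a simple epsilon-greedy rule to pick a model version to promote.
# ModelVersion, UpdateManager, and the utility weights are illustrative
# assumptions; the paper instead learns this trade-off with RL.

class ModelVersion:
    def __init__(self, version, accuracy, cpu_footprint, security_score):
        self.version = version
        self.accuracy = accuracy              # validation accuracy in [0, 1]
        self.cpu_footprint = cpu_footprint    # normalized CPU demand in [0, 1]
        self.security_score = security_score  # higher is safer, in [0, 1]

class UpdateManager:
    def __init__(self, repository, epsilon=0.1, seed=42):
        self.repository = repository          # cataloged ModelVersion entries
        self.epsilon = epsilon                # exploration probability
        self.rng = random.Random(seed)

    def utility(self, m):
        # Toy reward balancing accuracy against resource cost and security;
        # the weights are arbitrary placeholders for a learned value function.
        return 0.6 * m.accuracy - 0.2 * m.cpu_footprint + 0.2 * m.security_score

    def select_version(self):
        # Epsilon-greedy: mostly exploit the best-scoring version,
        # occasionally explore another candidate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.repository)
        return max(self.repository, key=self.utility)

repo = [
    ModelVersion("v1", accuracy=0.90, cpu_footprint=0.3, security_score=0.9),
    ModelVersion("v2", accuracy=0.94, cpu_footprint=0.7, security_score=0.8),
]
manager = UpdateManager(repo, epsilon=0.0)  # purely greedy for determinism
best = manager.select_version()
print(best.version)  # → v1 (higher accuracy of v2 is outweighed by its cost)
```

In the paper's setting, the chosen version would then be handed to a container orchestrator for rollout; here the sketch stops at the selection step.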
Problem

Research questions and friction points this paper is trying to address.

AI-native O-RAN · model versioning · self-learning · lifecycle management · edge intelligence
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-learning · model versioning · AI-native O-RAN · reinforcement learning · edge intelligence
👥 Authors

Mounir Bensalem, PhD Student, Technische Universität Braunschweig (communication networks, machine learning)
Fin Gentzen, Technische Universität Braunschweig, Germany
Tuck-Wai Choong, National Taiwan University of Science and Technology, Taipei, Taiwan
Yu-Chiao Jhuang, National Taiwan University of Science and Technology, Taipei, Taiwan
A. Jukan, Technische Universität Braunschweig, Germany
Jenq-Shiou Leu, Professor of Electronic and Computer Engineering, National Taiwan University of Science and Technology (Heterogeneous Network Integration, Mobile Service and Platform Design, Distributed Computing (P2P, Cloud Computing), Power-Saving)