Interpretable Anomaly-Based DDoS Detection in AI-RAN with XAI and LLMs

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of interpretability and actionable insights in DDoS attack detection within AI-native Radio Access Networks (AI-RAN), this paper proposes a distributed anomaly detection framework integrating eXplainable AI (XAI) and Large Language Models (LLMs). The method models multivariate time-series KPIs using LSTM networks to capture anomalous patterns, applies LIME and SHAP for local feature attribution, and leverages an LLM to translate technical explanations into natural-language, human-understandable insights—enhancing transparency and accessibility for non-experts. Evaluated on real-world 5G network data, the framework achieves an F1-score of 0.962 and delivers fine-grained, operationally actionable root-cause analysis. To our knowledge, this is the first work to systematically integrate XAI–LLM synergy into AI-RAN security detection, jointly optimizing high detection accuracy and strong model interpretability. It establishes a novel paradigm for deploying trustworthy, explainable security mechanisms in intelligent wireless networks.
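The last stage of the pipeline described above feeds feature attributions to an LLM for plain-language explanation. As a minimal sketch of how such a prompt might be assembled (the KPM names, UE identifier, and function name here are illustrative assumptions, not details from the paper):

```python
def build_explanation_prompt(ue_id, score, attributions):
    """Compose an LLM prompt that turns feature attributions into plain language.
    `attributions` maps KPM names to SHAP/LIME-style importance values."""
    # Keep only the three most influential KPMs, by absolute attribution.
    top = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:3]
    lines = [f"- {name}: {value:+.2f}" for name, value in top]
    return (
        f"UE {ue_id} was flagged as anomalous (score {score:.2f}).\n"
        "Top contributing KPMs and their attribution values:\n"
        + "\n".join(lines)
        + "\nExplain in plain English, for a non-expert network operator, "
          "why this UE may be participating in a DDoS attack."
    )

prompt = build_explanation_prompt(
    "ue-17", 0.97,
    {"ul_prb_usage": 0.62, "dl_throughput": -0.05, "rrc_attempts": 0.41},
)
print(prompt)
```

The prompt carries both the detector's verdict and the attribution evidence, so the LLM grounds its narrative in the XAI output rather than inventing causes.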

📝 Abstract
Next generation Radio Access Networks (RANs) introduce programmability, intelligence, and near real-time control through intelligent controllers, enabling enhanced security within the RAN and across broader 5G/6G infrastructures. This paper presents a comprehensive survey highlighting opportunities, challenges, and research gaps for Large Language Models (LLMs)-assisted explainable (XAI) intrusion detection (IDS) for secure future RAN environments. Motivated by this, we propose an LLM interpretable anomaly-based detection system for distributed denial-of-service (DDoS) attacks using multivariate time series key performance measures (KPMs), extracted from E2 nodes, within the Near Real-Time RAN Intelligent Controller (Near-RT RIC). An LSTM-based model is trained to identify malicious User Equipment (UE) behavior based on these KPMs. To enhance transparency, we apply post-hoc local explainability methods such as LIME and SHAP to interpret individual predictions. Furthermore, LLMs are employed to convert technical explanations into natural-language insights accessible to non-expert users. Experimental results on real 5G network KPMs demonstrate that our framework achieves high detection accuracy (F1-score > 0.96) while delivering actionable and interpretable outputs.
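The detector operates on multivariate time-series KPMs, which implies a windowing step before the LSTM: fixed-length sequences are cut from each UE's KPM stream. A minimal sketch of that preprocessing (window length, step, and feature count are illustrative assumptions, not values from the paper):

```python
import numpy as np

def make_windows(kpm, window=10, step=1):
    """Slice a (T, F) multivariate KPM series into (N, window, F) sequences
    suitable as input to a sequence model such as an LSTM."""
    T = kpm.shape[0]
    starts = range(0, T - window + 1, step)
    return np.stack([kpm[s:s + window] for s in starts])

# Toy stream: 100 time steps of 4 KPMs (e.g. PRB usage, UL/DL throughput, ...)
rng = np.random.default_rng(0)
kpm = rng.normal(size=(100, 4))
X = make_windows(kpm, window=10)
print(X.shape)  # (91, 10, 4)
```

Each resulting `(window, F)` sequence would be scored by the trained model, and windows whose anomaly score exceeds a threshold are flagged as malicious UE behavior.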
Problem

Research questions and friction points this paper is trying to address.

Detect DDoS attacks in AI-RAN using XAI and LLMs
Interpret anomaly detection via explainable AI methods
Translate technical insights into natural language for users
Innovation

Methods, ideas, or system contributions that make the work stand out.

LSTM model detects DDoS using KPMs
XAI methods (LIME and SHAP) explain individual predictions
LLMs translate technical details to natural language
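The local-explanation idea behind LIME can be sketched compactly: perturb the instance, weight samples by proximity, and fit a weighted linear surrogate whose coefficients serve as per-feature attributions. This is a simplified stand-in for the actual LIME library, under assumed kernel and sampling choices:

```python
import numpy as np

def lime_style_attribution(f, x, n_samples=500, scale=0.1, seed=0):
    """Fit a locally weighted linear surrogate of f around x.
    f: black-box scorer mapping (n, d) arrays to (n,) anomaly scores.
    Returns per-feature coefficients of the local linear model."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    Z = x + rng.normal(scale=scale, size=(n_samples, d))   # local perturbations
    y = f(Z)
    # Gaussian proximity kernel: nearer samples get larger weight.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([Z, np.ones((n_samples, 1))])            # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                                       # drop the intercept

# Toy black box whose score is dominated by feature 0.
f = lambda Z: 3.0 * Z[:, 0] + 0.1 * Z[:, 1]
x = np.array([1.0, 1.0, 1.0])
attr = lime_style_attribution(f, x)
print(np.argmax(np.abs(attr)))  # 0: feature 0 carries the largest attribution
```

Because the toy scorer is linear, the surrogate recovers it almost exactly; with a real LSTM detector the coefficients only describe behavior in the neighborhood of the flagged window.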
Sotiris Chatzimiltis
5G/6GIC, Institute for Communication Systems (ICS), University of Surrey, Guildford, UK
Mohammad Shojafar
Associate Professor, University of Surrey, EU Marie Curie Alumni, ACM Distinguished Speaker
Network Security · Fog Computing · 5G/6G · Future Internet · Adversarial Machine Learning
Mahdi Boloursaz Mashhadi
Lecturer (Assistant Professor) at University of Surrey
Wireless Communications · Signal Processing · Machine Learning
Rahim Tafazolli
5G/6GIC, Institute for Communication Systems (ICS), University of Surrey, Guildford, UK