🤖 AI Summary
To address the lack of interpretability and actionable insights in DDoS attack detection within AI-native Radio Access Networks (AI-RAN), this paper proposes a distributed anomaly detection framework integrating eXplainable AI (XAI) and Large Language Models (LLMs). The method models multivariate time-series key performance measures (KPMs) using LSTM networks to capture anomalous patterns, applies LIME and SHAP for local feature attribution, and leverages an LLM to translate the technical explanations into natural-language, human-understandable insights, enhancing transparency and accessibility for non-experts. Evaluated on real-world 5G network data, the framework achieves an F1-score of 0.962 and delivers fine-grained, operationally actionable root-cause analysis. To our knowledge, this is the first work to systematically integrate XAI-LLM synergy into AI-RAN security detection, jointly achieving high detection accuracy and strong model interpretability. It establishes a novel paradigm for deploying trustworthy, explainable security mechanisms in intelligent wireless networks.
📝 Abstract
Next-generation Radio Access Networks (RANs) introduce programmability, intelligence, and near-real-time control through intelligent controllers, enabling enhanced security within the RAN and across broader 5G/6G infrastructures. This paper first surveys the opportunities, challenges, and research gaps for Large Language Model (LLM)-assisted, explainable (XAI) intrusion detection systems (IDS) in secure future RAN environments. Motivated by this survey, we propose an LLM-interpretable anomaly-based detection system for distributed denial-of-service (DDoS) attacks, using multivariate time-series key performance measures (KPMs) extracted from E2 nodes within the Near Real-Time RAN Intelligent Controller (Near-RT RIC). An LSTM-based model is trained to identify malicious User Equipment (UE) behavior from these KPMs. To enhance transparency, we apply post-hoc local explainability methods, such as LIME and SHAP, to interpret individual predictions. Furthermore, LLMs are employed to convert these technical explanations into natural-language insights accessible to non-expert users. Experimental results on real 5G network KPMs demonstrate that our framework achieves high detection accuracy (F1-score > 0.96) while delivering actionable and interpretable outputs.
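As a rough illustration of the pipeline the abstract describes (forecast KPM behavior, score the deviation, attribute it per feature), the following is a minimal sketch. The moving-average forecaster, the squared-error attribution, and the KPM names are illustrative placeholders standing in for the paper's LSTM model and its SHAP/LIME explanations, not the actual implementation.

```python
import numpy as np

# Hypothetical KPM names for illustration only; the paper's KPMs come
# from E2 nodes via the Near-RT RIC.
KPM_NAMES = ["dl_throughput", "ul_throughput", "prb_usage", "rrc_conns"]

def predict_next(window: np.ndarray) -> np.ndarray:
    """Stand-in forecaster: per-KPM mean of the window (an LSTM in the paper)."""
    return window.mean(axis=0)

def anomaly_report(window: np.ndarray, observed: np.ndarray, threshold: float = 1.0):
    """Score a new KPM vector against the forecast and attribute the error."""
    pred = predict_next(window)
    per_feature_err = (observed - pred) ** 2  # naive local attribution
    score = per_feature_err.sum()
    is_anomaly = score > threshold
    # Rank KPMs by error contribution, mimicking a SHAP-style feature ranking.
    ranking = sorted(zip(KPM_NAMES, per_feature_err), key=lambda kv: -kv[1])
    return is_anomaly, score, ranking

# Usage: benign history, then a spike in PRB usage (DDoS-like load).
rng = np.random.default_rng(0)
history = rng.normal(0.0, 0.1, size=(10, 4))
spiked = np.array([0.0, 0.0, 5.0, 0.0])
flag, score, ranking = anomaly_report(history, spiked)
print(flag, ranking[0][0])  # top-contributing KPM should be prb_usage
```

In the full framework, the ranked attribution (here just squared forecast errors) would be passed to an LLM to be phrased as a natural-language root-cause explanation for non-expert operators.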