Investigating Feature Attribution for 5G Network Intrusion Detection

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited interpretability of security alerts, and the consequent difficulty of enabling actionable responses, in 5G network intrusion detection, this paper compares statistical and logic-based feature attribution methods for explainable AI (XAI). It systematically evaluates SHAP and VoTE-XAI across three critical dimensions: sparsity, stability, and computational efficiency, using an XGBoost model trained on diverse 5G attack datasets. Experimental results show that VoTE-XAI generates a single explanation in under 2 milliseconds, significantly outperforming SHAP in efficiency; it also yields sparser, more stable attributions while preserving all top-ranked discriminative features identified by SHAP. This work presents the first empirical analysis of SHAP and VoTE-XAI under 5G security constraints, revealing their performance trade-offs and complementary potential, and establishes an empirically grounded basis for high-assurance, orchestratable automated threat response in next-generation mobile networks.

📝 Abstract
With the rise of fifth-generation (5G) networks in critical applications, it is urgent to move from detection of malicious activity to systems capable of providing a reliable verdict suitable for mitigation. In this regard, understanding and interpreting machine learning (ML) models' security alerts is crucial for enabling actionable incident response orchestration. Explainable Artificial Intelligence (XAI) techniques are expected to enhance trust by providing insights into why alerts are raised. A dominant approach statistically identifies feature sets that correlate with a given alert. This paper starts by questioning whether such attribution is relevant for future-generation communication systems, and investigates its merits in comparison with an approach based on logical explanations. We extensively study two methods, SHAP and VoTE-XAI, by analyzing their interpretations of alerts generated by an XGBoost model in three different use cases with several 5G communication attacks. We identify three metrics for assessing explanations: sparsity, how concise they are; stability, how consistent they are across samples from the same attack type; and efficiency, how fast an explanation is generated. As an example, in a 5G network with 92 features, 6 were deemed important by VoTE-XAI for a Denial of Service (DoS) variant, ICMPFlood, while SHAP identified over 20. More importantly, we found a significant divergence between the features selected by SHAP and VoTE-XAI. However, none of the top-ranked features selected by SHAP were missed by VoTE-XAI. When it comes to the efficiency of providing interpretations, we found that VoTE-XAI is significantly more responsive, e.g., it provides a single explanation in under 0.002 seconds in a high-dimensional setting (478 features).
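The abstract's three metrics lend themselves to straightforward operationalizations. The sketch below is illustrative only (it is not the paper's code, and the function names and thresholds are assumptions): it measures sparsity as the fraction of near-zero attributions, stability as the mean pairwise Jaccard overlap of top-k feature sets across samples of one attack type, and efficiency as wall-clock time per explanation.

```python
# Hedged sketch of the three explanation-quality metrics: sparsity,
# stability, and efficiency. Names and thresholds are illustrative.
import time
import numpy as np

def sparsity(attributions, eps=1e-6):
    """Fraction of features with near-zero attribution; higher = more concise."""
    a = np.asarray(attributions)
    return float(np.mean(np.abs(a) < eps))

def topk_stability(attr_matrix, k=5):
    """Mean pairwise Jaccard similarity of top-k feature sets across samples."""
    tops = [set(np.argsort(-np.abs(row))[:k]) for row in np.asarray(attr_matrix)]
    sims = [len(a & b) / len(a | b)
            for i, a in enumerate(tops) for b in tops[i + 1:]]
    return float(np.mean(sims)) if sims else 1.0

def efficiency(explain_fn, sample):
    """Seconds to produce one explanation for one sample."""
    t0 = time.perf_counter()
    explain_fn(sample)
    return time.perf_counter() - t0

# Example: attributions for 3 alerts over 6 features; feature 0 dominates.
A = np.array([[0.9, 0.0, 0.0, 0.1, 0.0, 0.0],
              [0.8, 0.0, 0.0, 0.0, 0.2, 0.0],
              [0.7, 0.1, 0.0, 0.0, 0.0, 0.0]])
print(sparsity(A[0]))          # 4 of 6 features are zero
print(topk_stability(A, k=2))  # top-2 sets share only feature 0
```

A sparse, stable explainer scores high on the first metric and close to 1.0 on the second; the efficiency helper can wrap any explainer (SHAP, VoTE-XAI, or otherwise) to reproduce the per-explanation timing comparison described above.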
Problem

Research questions and friction points this paper is trying to address.

Evaluating feature attribution relevance for 5G intrusion detection
Comparing SHAP and VoTE-XAI explanation methods for security alerts
Assessing explanation quality through sparsity, stability, and efficiency metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using SHAP and VoTE-XAI for feature attribution
Comparing logical and statistical XAI methods
Evaluating explanations via sparsity, stability, and efficiency metrics
Federica Uccello
Department of Computer and Information Science, Linköping University, Linköping, Sweden
Simin Nadjm-Tehrani
Professor, Linköping University
Distributed Dependable Systems, Security in Critical Infrastructures, Trustworthy AI, Delay-tolerant Networks