AMDS: Attack-Aware Multi-Stage Defense System for Network Intrusion Detection with Two-Stage Adaptive Weight Learning

📅 2026-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited robustness of machine learning–based network intrusion detection systems against adversarial threats, including gradient-based attacks and distribution shifts, and their inability to respond adaptively to diverse attack types. To overcome these limitations, the authors propose an attack-aware, multi-stage defense framework that integrates three complementary signals (ensemble disagreement, predictive uncertainty, and distributional anomaly) and incorporates a two-stage adaptive weight learning mechanism to enable differentiated responses to heterogeneous adversarial attacks. On a benchmark intrusion detection dataset, the proposed method achieves an AUC of 94.2% and outperforms adversarially trained ensembles by 4.5 percentage points in accuracy and 9.0 points in F1-score. Notably, it maintains 94.4% accuracy under white-box adaptive attacks, suggesting improved robustness and generalization.

📝 Abstract
Machine learning based network intrusion detection systems are vulnerable to adversarial attacks that degrade classification performance under both gradient-based and distribution shift threat models. Existing defenses typically apply uniform detection strategies, which may not account for heterogeneous attack characteristics. This paper proposes an attack-aware multi-stage defense framework that learns attack-specific detection strategies through a weighted combination of ensemble disagreement, predictive uncertainty, and distributional anomaly signals. Empirical analysis across seven adversarial attack types reveals distinct detection signatures, enabling a two-stage adaptive detection mechanism. Experimental evaluation on a benchmark intrusion detection dataset indicates that the proposed system attains 94.2% area under the receiver operating characteristic curve and improves classification accuracy by 4.5 percentage points and F1-score by 9.0 points over adversarially trained ensembles. Under adaptive white-box attacks with full architectural knowledge, the system appears to maintain 94.4% accuracy with a 4.2% attack success rate, though this evaluation is limited to two adaptive variants and does not constitute a formal robustness guarantee. Cross-dataset validation further suggests that defense effectiveness depends on baseline classifier competence and may vary with feature dimensionality. These results suggest that attack-specific optimization combined with multi-signal integration can provide a practical approach to improving adversarial robustness in machine learning-based intrusion detection systems.
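The abstract's core mechanism, three detection signals fused by a weighted combination and applied in two stages, can be sketched as below. The concrete formulations chosen here (variance-based disagreement, entropy of the mean prediction, z-score anomaly) and all weights and thresholds are illustrative assumptions; the paper's actual signal definitions and learned weights are not given on this page.

```python
import numpy as np

def ensemble_disagreement(member_probs):
    """Disagreement signal: per-class variance of the ensemble members'
    predicted probabilities, averaged over classes.
    member_probs has shape (n_members, n_samples, n_classes)."""
    return member_probs.var(axis=0).mean(axis=-1)

def predictive_entropy(member_probs):
    """Uncertainty signal: entropy of the mean ensemble prediction."""
    mean_p = member_probs.mean(axis=0)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)

def distributional_anomaly(x, train_mean, train_std):
    """Anomaly signal: mean absolute z-score of the input features
    against training-set statistics (a simple stand-in for whatever
    distributional test the paper actually uses)."""
    return np.abs((x - train_mean) / (train_std + 1e-12)).mean(axis=-1)

def combined_score(signals, weights):
    """Weighted fusion of the signals. In the paper these weights are
    learned per attack type; here they are plain inputs."""
    return sum(w * s for w, s in zip(weights, signals))

def two_stage_detect(signals, uniform_w, attack_w, stage1_thresh, stage2_thresh):
    """Sketch of the two-stage idea: stage 1 flags suspicious samples
    using uniform weights; stage 2 rescores the flagged samples with
    attack-specific weights before the final adversarial decision."""
    stage1 = combined_score(signals, uniform_w)
    flagged = stage1 > stage1_thresh
    stage2 = combined_score(signals, attack_w)
    return flagged & (stage2 > stage2_thresh)
```

A sample on which the ensemble members split evenly produces maximal disagreement and entropy, so it scores high in both stages, whereas an in-distribution sample with confident, consistent predictions scores near zero on all three signals.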
Problem

Research questions and friction points this paper is trying to address.

adversarial attacks
network intrusion detection
attack heterogeneity
machine learning robustness
defense strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

attack-aware defense
multi-stage adaptive detection
ensemble disagreement
predictive uncertainty
distributional anomaly
Oluseyi Olukola
School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, USA
Nick Rahimi
Associate Professor, University of Southern Mississippi
Cybersecurity · Trustworthy AI · Distributed Systems · P2P Network