🤖 AI Summary
This study addresses simultaneous adversarial machine learning threats—attacks that can compromise intrusion detection and traffic sign recognition at the same time—in networked autonomous vehicles operating in multi-domain combat environments. We systematically investigate how key hyperparameters of tree-based ensemble models (Random Forest, Gradient Boosting, and XGBoost) affect the runtime of black-box, zeroth-order optimization (ZOO) attacks. Through extensive empirical evaluation and adversarial training, we find that the number of trees and the number of boosting iterations significantly increase attack latency, and that Random Forest and Gradient Boosting are more sensitive to these hyperparameters than XGBoost. Based on these insights, we propose a lightweight, architecture-preserving defense strategy centered on strategic hyperparameter tuning, which widens the defensive time window and strengthens the security resilience of onboard AI systems. Our approach offers an interpretable, deployment-efficient pathway to improving the adversarial robustness of trustworthy AI in resource-constrained edge computing scenarios.
📝 Abstract
Multi-Domain Operations (MDOs) emphasize cross-domain defense against complex and synergistic threats, with civilian infrastructures such as smart cities and Connected Autonomous Vehicles (CAVs) emerging as primary targets. As dual-use assets, CAVs are vulnerable to Multi-Surface Threats (MSTs), particularly from Adversarial Machine Learning (AML), which can simultaneously compromise multiple in-vehicle ML systems (e.g., Intrusion Detection Systems and Traffic Sign Recognition Systems). This study therefore investigates how key hyperparameters in Decision Tree-based ensemble models, namely Random Forest (RF), Gradient Boosting (GB), and Extreme Gradient Boosting (XGB), affect the time required to execute a black-box AML attack, i.e., Zeroth-Order Optimization (ZOO). Findings show that parameters such as the number of trees or boosting rounds significantly influence attack execution time, with RF and GB being more sensitive than XGB. Adversarial Training (AT) time is also analyzed to assess the attacker's window of opportunity. By optimizing hyperparameters, this research supports Defensive Trustworthy AI (D-TAI) practices within MST scenarios and contributes to the development of resilient ML systems for civilian and military domains, aligned with the Cyber Social Security framework in MDOs and Human-AI Multi-Domain Task Forces.
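To make the timing experiment concrete, here is a minimal, self-contained sketch of the idea: a ZOO-style attack queries the model as a black box, estimating per-coordinate gradients by finite differences, so its cost scales with how expensive (and how large) the ensemble is. The `StumpEnsemble` class, the smoothed `tanh` voting, and all step sizes below are illustrative assumptions, not the paper's actual RF/GB/XGB models or attack configuration.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

class StumpEnsemble:
    """Toy stand-in for a tree ensemble: averaged one-feature threshold
    stumps, with tanh instead of a hard sign so the surface is smooth
    enough for finite-difference gradient estimates. Illustrative only."""
    def __init__(self, n_trees, n_features):
        self.feat = rng.integers(0, n_features, size=n_trees)
        self.thresh = rng.normal(size=n_trees)
        self.sign = rng.choice([-1.0, 1.0], size=n_trees)

    def score(self, x):
        # Positive score -> class 1, negative -> class 0.
        votes = self.sign * np.tanh(x[self.feat] - self.thresh)
        return votes.mean()

def zoo_style_attack(model, x, step=0.3, delta=1e-3, max_iter=200):
    """ZOO-flavoured black-box attack: estimate the gradient of the score
    along one random coordinate per iteration via symmetric finite
    differences, and descend until the predicted class flips."""
    x = x.copy()
    queries = 0
    for _ in range(max_iter):
        if model.score(x) < 0:          # class already flipped -> done
            break
        i = rng.integers(0, x.size)     # random coordinate, as in ZOO
        e = np.zeros_like(x)
        e[i] = delta
        g = (model.score(x + e) - model.score(x - e)) / (2 * delta)
        queries += 2                    # two model queries per estimate
        x[i] -= step * np.sign(g)       # signed descent step
    return x, queries

if __name__ == "__main__":
    n_features = 20
    x0 = rng.normal(size=n_features)
    # More trees -> each query costs more, so wall-clock attack time grows
    # even when the number of queries stays comparable.
    for n_trees in (10, 100, 1000):
        model = StumpEnsemble(n_trees, n_features)
        t0 = time.perf_counter()
        x_adv, q = zoo_style_attack(model, x0)
        dt = (time.perf_counter() - t0) * 1000
        print(f"trees={n_trees:5d}  queries={q:4d}  time={dt:.2f} ms")
```

In a study like this one, the same measurement loop would wrap real `RandomForestClassifier`, `GradientBoostingClassifier`, and `xgboost` models attacked through a proper ZOO implementation (e.g., the Adversarial Robustness Toolbox's `ZooAttack`), sweeping the number of trees and boosting rounds.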