Leveraging Trustworthy AI for Automotive Security in Multi-Domain Operations: Towards a Responsive Human-AI Multi-Domain Task Force for Cyber Social Security

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses simultaneous Adversarial Machine Learning threats, such as compromised intrusion detection and traffic sign recognition, in networked autonomous vehicles operating in multi-domain combat environments. We systematically investigate how key hyperparameters of tree-based ensemble models (Random Forest, Gradient Boosting, and XGBoost) affect the runtime of black-box, zeroth-order optimization attacks. Through extensive empirical evaluation and adversarial training, we find that increasing the number of trees and boosting iterations significantly raises attack latency; moreover, Random Forest and Gradient Boosting exhibit greater hyperparameter sensitivity than XGBoost. Based on these insights, we propose a lightweight, architecture-preserving defense strategy centered on strategic hyperparameter tuning that effectively widens the defensive time window and enhances the security resilience of onboard AI systems. Our approach offers an interpretable, deployment-efficient pathway for improving the adversarial robustness of trustworthy AI in resource-constrained edge computing scenarios.
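The measurement idea described above can be sketched in a few lines: train the same ensemble with different tree counts and time a zeroth-order (finite-difference) evasion attack against each. The attack below is a minimal ZOO-style stand-in written for this sketch, not the authors' implementation; the function name `zoo_style_attack` and all step-size parameters are illustrative assumptions.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def zoo_style_attack(model, x, step=0.3, iters=50, h=1e-2):
    """Minimal zeroth-order evasion sketch (illustrative, not the paper's code).

    Estimates a coordinate-wise gradient of the predicted probability of the
    original class via finite differences and descends it, as in ZOO-style
    black-box attacks that query only the model's output.
    """
    x_adv = x.copy()
    orig = model.predict([x])[0]
    rng = np.random.default_rng(0)
    for _ in range(iters):
        i = rng.integers(len(x_adv))             # random coordinate, ZOO-style
        e = np.zeros_like(x_adv)
        e[i] = h
        p_plus = model.predict_proba([x_adv + e])[0][orig]
        p_minus = model.predict_proba([x_adv - e])[0][orig]
        g = (p_plus - p_minus) / (2 * h)         # finite-difference estimate
        x_adv[i] -= step * np.sign(g)            # lower the original-class score
        if model.predict([x_adv])[0] != orig:    # evasion achieved
            break
    return x_adv

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
for n_trees in (10, 200):                        # hyperparameter under study
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X, y)
    t0 = time.perf_counter()
    zoo_style_attack(rf, X[0])
    print(f"n_estimators={n_trees}: attack time {time.perf_counter() - t0:.3f}s")
```

Because every attack iteration issues model queries, a larger ensemble makes each query, and hence the whole attack, more expensive, which is the defensive time window the summary refers to.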

📝 Abstract
Multi-Domain Operations (MDOs) emphasize cross-domain defense against complex and synergistic threats, with civilian infrastructures such as smart cities and Connected Autonomous Vehicles (CAVs) emerging as primary targets. As dual-use assets, CAVs are vulnerable to Multi-Surface Threats (MSTs), particularly from Adversarial Machine Learning (AML), which can simultaneously compromise multiple in-vehicle ML systems (e.g., Intrusion Detection Systems and Traffic Sign Recognition Systems). This study therefore investigates how key hyperparameters in Decision Tree-based ensemble models (Random Forest (RF), Gradient Boosting (GB), and Extreme Gradient Boosting (XGB)) affect the time required for a black-box AML attack, i.e., Zeroth Order Optimization (ZOO). Findings show that parameters such as the number of trees or boosting rounds significantly influence attack execution time, with RF and GB being more sensitive than XGB. Adversarial Training (AT) time is also analyzed to assess the attacker's window of opportunity. By optimizing hyperparameters, this research supports Defensive Trustworthy AI (D-TAI) practices within MST scenarios and contributes to the development of resilient ML systems for civilian and military domains, aligned with the Cyber Social Security framework in MDOs and Human-AI Multi-Domain Task Forces.
Problem

Research questions and friction points this paper is trying to address.

Investigates hyperparameters' impact on AML attack time in Decision Tree models
Analyzes Adversarial Training time to assess attacker's opportunity window
Supports Defensive Trustworthy AI practices for resilient ML systems in MDOs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decision Tree-based ensemble models for AML defense
Hyperparameter optimization to reduce attack time
Adversarial Training for resilient ML systems
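The adversarial-training contribution above can be illustrated with a minimal sketch: retrain the ensemble on clean data augmented with perturbed copies, and time the retraining to gauge the defender's turnaround (the counterpart of the attacker's window analyzed in the paper). The uniform-noise `perturb` helper is a hypothetical stand-in for attack-generated examples, used only to keep the sketch self-contained.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

def perturb(X, eps=0.2, seed=0):
    """Hypothetical surrogate for adversarial examples: real adversarial
    training would use attack-generated inputs (e.g. ZOO); uniform noise
    stands in here so the sketch runs without an attack library."""
    rng = np.random.default_rng(seed)
    return X + rng.uniform(-eps, eps, size=X.shape)

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X, y)

# One adversarial-training round: retrain on clean plus perturbed copies,
# timing the fit to estimate the defender's retraining cost.
X_aug = np.vstack([X, perturb(X)])
y_aug = np.concatenate([y, y])
t0 = time.perf_counter()
clf_at = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
print(f"adversarial retraining took {time.perf_counter() - t0:.2f}s "
      f"on {len(X_aug)} samples")
```

The same boosting-round hyperparameter that slows the attacker also lengthens this retraining step, which is why the paper treats attack time and AT time as two sides of one trade-off.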
Vita Santa Barletta
Dipartimento di Informatica, Università degli Studi di Bari
Software Engineering
Danilo Caivano
Università degli studi di Bari Aldo Moro, Piazza Umberto I, 70121 Bari, Apulia, Italy
Gabriel Cellammare
Università degli studi di Bari Aldo Moro, Piazza Umberto I, 70121 Bari, Apulia, Italy
Samuele del Vescovo
Scuola IMT Alti Studi Lucca, Piazza S. Francesco, 19, 55100 Lucca, Italy
Annita Larissa Sciacovelli
Università degli studi di Bari Aldo Moro, Piazza Umberto I, 70121 Bari, Apulia, Italy