Threat Modeling for AI: The Case for an Asset-Centric Approach

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional threat modeling methods fail when applied to AI systems, as deeply integrated, autonomous AI agents execute code, interact across domains, and operate unattended—introducing novel attack surfaces and undermining conventional assumptions. Method: This paper proposes an asset-centric threat modeling framework specifically designed for AI systems, introducing a bottom-up, asset-driven paradigm that transcends the limitations of conventional attack-oriented approaches. It integrates asset lifecycle analysis, AI architecture abstraction, and distributed infrastructure security assessment. Contribution/Results: The framework enables cross-technology-domain collaborative analysis, quantitative evaluation of security assumptions for third-party AI components, and context-sensitive, holistic identification of AI-specific vulnerabilities. It delivers scalable, interpretable, and context-adaptive risk assessments, significantly enhancing end-to-end security governance across the development and deployment lifecycle of integrated AI agent systems.
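The bottom-up, asset-driven paradigm the summary describes can be sketched in a few lines: enumerate critical AI assets (weights, tool credentials, training data) with their lifecycle stage, then walk from known vulnerabilities to the assets they impact, rather than from attacks to products. All names, fields, and the scoring are illustrative assumptions, not structures from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    lifecycle_stage: str   # e.g. "training", "deployment", "inference"
    criticality: int       # 1 (low) .. 5 (critical); scale is illustrative

@dataclass
class Vulnerability:
    name: str
    ai_specific: bool
    impacted_assets: list = field(default_factory=list)

def assets_at_risk(vulns):
    """Bottom-up step: collect every asset reachable from a known vulnerability."""
    return {a.name for v in vulns for a in v.impacted_assets}

weights = Asset("model-weights", "deployment", 5)
tool_creds = Asset("agent-tool-credentials", "inference", 4)

vulns = [
    Vulnerability("prompt-injection", True, [tool_creds]),
    Vulnerability("supply-chain-tamper", False, [weights]),
]

print(sorted(assets_at_risk(vulns)))  # ['agent-tool-credentials', 'model-weights']
```

Starting from assets keeps the analysis stable as new attack techniques appear: a new vulnerability is just another edge into an existing asset inventory.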

📝 Abstract
Recent advances in AI are transforming AI's ubiquitous presence in our world from that of standalone AI applications into deeply integrated AI agents. These changes have been driven by agents' increasing capability to autonomously make decisions and initiate actions using existing applications, whether those applications are AI-based or not. This evolution enables unprecedented levels of AI integration, with agents now able to take actions on behalf of systems and users -- including, in some cases, the powerful ability for the AI to write and execute scripts as it deems necessary. With AI systems now able to autonomously execute code, interact with external systems, and operate without human oversight, traditional security approaches fall short. This paper introduces an asset-centric methodology for threat modeling AI systems that addresses the unique security challenges posed by integrated AI agents. Unlike existing top-down frameworks that analyze individual attacks within specific product contexts, our bottom-up approach enables defenders to systematically identify how vulnerabilities -- both conventional and AI-specific -- impact critical AI assets across the distributed infrastructures used to develop and deploy these agents. This methodology allows security teams to: (1) perform comprehensive analysis that communicates effectively across technical domains, (2) quantify security assumptions about third-party AI components without requiring visibility into their implementation, and (3) holistically identify AI-based vulnerabilities relevant to their specific product context. This approach is particularly relevant for securing agentic systems with complex autonomous capabilities. By focusing on assets rather than attacks, our approach scales with the rapidly evolving threat landscape while accommodating increasingly complex and distributed AI development pipelines.
Problem

Research questions and friction points this paper is trying to address.

Traditional threat modeling breaks down for deeply integrated, autonomous AI agents
Defenders lack a systematic way to trace how vulnerabilities impact critical AI assets
Security assumptions about third-party AI components cannot be assessed without implementation visibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asset-centric methodology for AI threat modeling
Bottom-up approach to identify vulnerabilities systematically
Quantifies security assumptions for third-party AI components
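The third innovation, quantifying security assumptions about opaque third-party components, could work roughly as follows: each vendor claim gets a confidence value, and the residual risk to an asset is driven by the weakest assumption protecting it. This is a minimal sketch of that idea under assumed names and an illustrative scoring scheme; the paper's actual metric may differ.

```python
# Assumed vendor claims about a third-party model, each with a
# defender-assigned confidence in [0, 1] (illustrative values).
third_party_model = {
    "training-data-provenance": 0.4,
    "weights-integrity": 0.8,
    "prompt-isolation": 0.5,
}

def residual_risk(assumptions, asset_criticality):
    """Risk scales with asset criticality and the weakest protecting assumption."""
    weakest = min(assumptions.values())
    return round(asset_criticality * (1.0 - weakest), 2)

# A criticality-5 asset guarded by assumptions whose weakest confidence is 0.4:
print(residual_risk(third_party_model, asset_criticality=5))  # 3.0
```

Making the assumptions explicit lets teams compare components and decide which vendor claims are worth verifying first, without needing source-level visibility.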
Jose Sanchez Vicarte
Intel Security Research
Marcin Spoczynski
Senior Research Scientist
security · ML · orchestration
Mostafa Elsaid
Intel AI Cloud Security