Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous

📅 2025-08-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work exposes a critical security threat: Promptware—malicious prompts—enables indirect prompt injection against LLM assistants (e.g., Gemini) via common interaction channels such as email and calendar invites, compromising confidentiality, integrity, and availability. To address this, we propose TARA, the first threat analysis framework specifically designed for LLM assistants, which systematically identifies five novel threat categories—including context contamination, tool misuse, and unauthorized device invocation. We formally define “targeted Promptware attacks,” demonstrating their cross-platform lateral movement capability and capacity to trigger real-world harms (e.g., camera hijacking, smart-home compromise). Through rigorous threat modeling, dynamic interaction testing, and quantitative risk assessment, we empirically validate 14 distinct attacks across web, mobile, and assistant-integration scenarios; 73% are classified as high or critical severity. Our findings directly informed Google’s deployment of targeted mitigations, reducing residual risk to low-to-moderate levels.

Technology Category

Application Category

📝 Abstract
The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware - maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of these applications. While prior research warned about a potential shift in the threat landscape for LLM-powered applications, the risk posed by Promptware is frequently perceived as low. In this paper, we investigate the risk Promptware poses to users of Gemini-powered assistants (web application, mobile application, and Google Assistant). We propose a novel Threat Analysis and Risk Assessment (TARA) framework to assess Promptware risks for end users. Our analysis focuses on a new variant of Promptware called Targeted Promptware Attacks, which leverage indirect prompt injection via common user interactions such as emails, calendar invitations, and shared documents. We demonstrate 14 attack scenarios applied against Gemini-powered assistants across five identified threat classes: Short-term Context Poisoning, Permanent Memory Poisoning, Tool Misuse, Automatic Agent Invocation, and Automatic App Invocation. These attacks highlight both digital and physical consequences, including spamming, phishing, disinformation campaigns, data exfiltration, unapproved user video streaming, and control of home automation devices. We reveal Promptware's potential for on-device lateral movement, escaping the boundaries of the LLM-powered application, to trigger malicious actions using a device's applications. Our TARA reveals that 73% of the analyzed threats pose High-Critical risk to end users. We discuss mitigations and reassess the risk (in response to deployed mitigations) and show that the risk could be reduced significantly to Very Low-Medium. We disclosed our findings to Google, which deployed dedicated mitigations.
Problem

Research questions and friction points this paper is trying to address.

Assess Promptware risks in LLM-powered applications
Analyze Targeted Promptware Attacks via indirect injections
Demonstrate digital and physical consequences of attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Targeted Promptware Attacks via indirect injection
TARA framework for Promptware risk assessment
14 attack scenarios across five threat classes
🔎 Similar Papers
No similar papers found.
Ben Nassi
Ben Nassi
Tel-Aviv University
Side-Channel AttacksAI SecurityPromptware
S
Stav Cohen
Technion - Israel Institute of Technology, Haifa, Israel
O
Or Yair
SafeBreach, Tel-Aviv, Israel