Contextualized Privacy Defense for LLM Agents

📅 2026-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of proactive, context-aware privacy protection in large language model (LLM) agents performing multi-step tasks, where existing defenses often force a trade-off between privacy and functionality. To bridge this gap, the authors propose the Contextualized Defense Instructing (CDI) paradigm, which dynamically generates step-specific privacy instructions during task execution to proactively guide agent behavior. The instructor is further refined through a reinforcement learning framework trained on failure trajectories containing privacy violations. CDI represents the first integration of context-aware, active privacy guidance into the LLM agent execution pipeline. In unified simulation evaluations, it achieves a 94.2% privacy preservation rate while maintaining 80.6% task helpfulness, significantly outperforming baseline methods, and demonstrates superior robustness in both adversarial and generalization scenarios.

📝 Abstract
LLM agents increasingly act on users' personal information, yet existing privacy defenses remain limited in both design and adaptability. Most prior approaches rely on static or passive defenses, such as prompting and guarding. These paradigms are insufficient for supporting contextual, proactive privacy decisions in multi-step agent execution. We propose Contextualized Defense Instructing (CDI), a new privacy defense paradigm in which an instructor model generates step-specific, context-aware privacy guidance during execution, proactively shaping actions rather than merely constraining or vetoing them. Crucially, CDI is paired with an experience-driven optimization framework that trains the instructor via reinforcement learning (RL), where we convert failure trajectories with privacy violations into learning environments. We formalize baseline defenses and CDI as distinct intervention points in a canonical agent loop, and compare their privacy-helpfulness trade-offs within a unified simulation framework. Results show that our CDI consistently achieves a better balance between privacy preservation (94.2%) and helpfulness (80.6%) than baselines, with superior robustness to adversarial conditions and generalization.
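The abstract describes CDI as a distinct intervention point in a canonical agent loop: before each step, an instructor model emits context-aware privacy guidance that shapes the agent's action rather than vetoing it afterward. The sketch below illustrates that loop structure only; the names (`instruct`, `act`, `Step`) and the toy guidance rules are hypothetical stand-ins, not the paper's implementation.

```python
# Minimal sketch of a CDI-style intervention point in an agent loop.
# The instructor and agent here are toy stubs, not the paper's models.
from dataclasses import dataclass


@dataclass
class Step:
    action: str
    guidance: str


def instruct(task: str, history: list) -> str:
    """Stand-in for the instructor model: emits step-specific,
    context-aware privacy guidance before each action executes."""
    if "email" in task and not history:
        return "Redact the recipient's home address before sending."
    return "Share only the fields this step strictly requires."


def act(task: str, guidance: str) -> str:
    """Stand-in for the base agent; it conditions its action on the
    guidance instead of being filtered after the fact."""
    return f"do({task}) under [{guidance}]"


def run_agent(task: str, max_steps: int = 3) -> list:
    """Canonical loop: guidance is generated per step (proactive),
    rather than applied once up front (static prompting) or used
    only to block outputs (passive guarding)."""
    history: list[Step] = []
    for _ in range(max_steps):
        g = instruct(task, history)  # step-specific privacy guidance
        a = act(task, g)             # action shaped by that guidance
        history.append(Step(action=a, guidance=g))
    return history


trajectory = run_agent("send email with order status")
```

Because the instructor sees the evolving history, its guidance can differ step by step, which is the contrast the paper draws against static or passive defenses.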
Problem

Research questions and friction points this paper is trying to address.

privacy defense
LLM agents
contextual privacy
proactive privacy
multi-step execution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contextualized Defense Instructing
privacy defense
reinforcement learning
LLM agents
proactive privacy guidance