🤖 AI Summary
AI agents face instruction injection risks because their authorization is static and over-permissive, letting malicious inputs trigger unauthorized actions.
Method: We propose AgentSentry, a task-centric dynamic access control framework that introduces the first task-intent alignment mechanism to enable fine-grained, context-aware, just-in-time least-privilege assignment. Leveraging runtime policy generation coordinated with the Model Context Protocol (MCP), AgentSentry dynamically parses user task objectives and grants or revokes permissions in real time.
Contribution/Results: Evaluations demonstrate that AgentSentry achieves 100% blocking of privilege escalation in representative attack scenarios (e.g., unauthorized email forwarding), while introducing zero interference for legitimate tasks such as application registration. The framework significantly enhances both security and controllability of GUI-based AI agents.
📝 Abstract
AI agents with GUI understanding and Model Context Protocol (MCP) support are increasingly deployed to automate mobile tasks. However, their reliance on over-privileged, static permissions creates a critical vulnerability: instruction injection. Malicious instructions, embedded in otherwise benign content like emails, can hijack the agent to perform unauthorized actions. We present AgentSentry, a lightweight runtime task-centric access control framework that enforces dynamic, task-scoped permissions. Instead of granting broad, persistent permissions, AgentSentry dynamically generates and enforces minimal, temporary policies aligned with the user's specific task (e.g., registering for an app), revoking them upon completion. We demonstrate that AgentSentry successfully prevents an instruction injection attack, where an agent is tricked into forwarding private emails, while allowing the legitimate task to complete. Our approach highlights the urgent need for intent-aligned security models to safely govern the next generation of autonomous agents.
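The core idea, task-scoped just-in-time least privilege, can be sketched as follows. This is an illustrative toy model, not AgentSentry's actual implementation: the class names, the intent-to-action mapping, and the action strings are all hypothetical, and the paper's MCP-coordinated policy generation is reduced here to a static lookup table.

```python
from dataclasses import dataclass, field

@dataclass
class TaskPolicy:
    """Minimal, temporary permission set scoped to one user task.
    Illustrative only; not AgentSentry's real API."""
    task_intent: str
    allowed_actions: set[str] = field(default_factory=set)
    active: bool = True

class PolicyManager:
    """Sketch of task-centric dynamic access control: grant only the
    actions a parsed task intent needs, deny everything else by default,
    and revoke the grant once the task completes."""

    # Hypothetical mapping from parsed task intent to a minimal action set.
    # In AgentSentry this policy is generated at runtime, not hard-coded.
    INTENT_ACTIONS = {
        "register_app": {"ui.read_screen", "ui.tap", "ui.type_text"},
        "read_email": {"email.read"},
    }

    def grant(self, task_intent: str) -> TaskPolicy:
        # Issue a temporary policy scoped to this task only.
        return TaskPolicy(task_intent, set(self.INTENT_ACTIONS.get(task_intent, set())))

    def check(self, policy: TaskPolicy, action: str) -> bool:
        # Deny-by-default: the action must be inside the active task scope.
        return policy.active and action in policy.allowed_actions

    def revoke(self, policy: TaskPolicy) -> None:
        # Revoke on task completion so no permission persists.
        policy.active = False
        policy.allowed_actions.clear()

pm = PolicyManager()
policy = pm.grant("register_app")
assert pm.check(policy, "ui.tap")             # needed for registration: allowed
assert not pm.check(policy, "email.forward")  # injected instruction: blocked
pm.revoke(policy)
assert not pm.check(policy, "ui.tap")         # nothing survives task completion
```

Under this model, an injected "forward my emails" instruction fails not because it is detected as malicious, but because `email.forward` was never in the scope granted for the registration task, which mirrors the paper's intent-alignment argument.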