🤖 AI Summary
Large language model (LLM) agent systems suffer from weak user control and inadequate security governance. Method: This paper proposes a user-centric security governance architecture featuring (i) client-side agent registration and policy delegation; (ii) a lightweight cryptographic token derivation scheme enabling low-overhead, verifiable dynamic access control; and (iii) a communication interception and policy enforcement framework spanning heterogeneous edge and cloud LLMs, integrating distributed identity authentication with Provider-centralized policy management. Contributions/Results: Experiments across multi-agent collaborative scenarios demonstrate <3% authorization latency overhead, no degradation in task accuracy, millisecond-scale policy updates, and secure coordination of geographically distributed agents—achieving robust, scalable, user-governed security without compromising performance.
📝 Abstract
Large Language Model (LLM)-based agents increasingly interact, collaborate, and delegate tasks to one another autonomously, with minimal human interaction. Industry guidelines for agentic system governance emphasize the need for users to maintain comprehensive control over their agents, mitigating potential damage from malicious agents. Several proposed agentic system designs address agent identity, authorization, and delegation, but remain purely theoretical, without concrete implementation or evaluation. Most importantly, they do not provide user-controlled agent management. To address this gap, we propose SAGA, a Security Architecture for Governing Agentic systems, which offers users oversight over their agents' lifecycle. In our design, users register their agents with a central entity, the Provider, which maintains agents' contact information and user-defined access control policies, and helps agents enforce these policies on inter-agent communication. We introduce a cryptographic mechanism for deriving access control tokens that offers fine-grained control over an agent's interactions with other agents, balancing security and performance considerations. We evaluate SAGA on several agentic tasks, using agents in different geolocations and multiple on-device and cloud LLMs, demonstrating minimal performance overhead and no impact on underlying task utility across a wide range of conditions. Our architecture enables secure and trustworthy deployment of autonomous agents, accelerating the responsible adoption of this technology in sensitive environments.
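The abstract does not specify the token-derivation construction, so as a purely illustrative sketch (not SAGA's actual mechanism), fine-grained, low-overhead access control tokens could be derived hierarchically with HMAC: a root secret established at registration is narrowed per counterpart agent and per capability, and any holder of a parent key can verify a derived token by re-deriving it. All scope names and function names below are hypothetical.

```python
import hmac
import hashlib

def derive_token(parent_key: bytes, scope: str) -> bytes:
    """Derive a narrower access-control token from a parent secret.

    Hypothetical sketch: HMAC-SHA256 keyed by the parent secret over a
    scope label, so derivation is cheap (one hash) and one-way (a child
    token cannot recover the parent key).
    """
    return hmac.new(parent_key, scope.encode(), hashlib.sha256).digest()

def verify_token(parent_key: bytes, scope: str, token: bytes) -> bool:
    """Re-derive and compare in constant time (requires the parent key)."""
    return hmac.compare_digest(derive_token(parent_key, scope), token)

# Root secret established at agent registration (illustrative value only).
root = b"provider-issued-root-secret"

# Narrow the root to one counterpart agent, then to one capability,
# yielding a fine-grained token for a single interaction type.
agent_token = derive_token(root, "agent:billing-assistant")
capability_token = derive_token(agent_token, "capability:read-invoices")

assert verify_token(agent_token, "capability:read-invoices", capability_token)
```

Because derivation and verification each cost a single HMAC, such a scheme would keep per-message authorization overhead low, consistent with the sub-3% latency overhead the paper reports.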