🤖 AI Summary
This work addresses the lack of a formal semantic foundation in current large language model (LLM) agent dialogue systems, which hinders rigorous analysis of attacks such as prompt injection and their impact on reasoning and tool usage. To bridge this gap, the paper proposes an extended untyped call-by-value λ-calculus that incorporates dialogue primitives and a dynamic information-flow control mechanism. This framework formally models the interleaved planning loop involving LLM invocations, tool executions, and code generation. It provides the first formal semantics for LLM agents with provable information-flow security guarantees, enabling defense strategies such as sub-dialogue isolation and generated-code sandboxing. Confidentiality and integrity are formally verified through a termination-insensitive non-interference theorem, thereby establishing a theoretical foundation for secure agent programming.
📝 Abstract
A conversation with a large language model (LLM) is a sequence of prompts and responses, with each response generated from the preceding conversation. AI agents build such conversations automatically: given an initial human prompt, a planner loop interleaves LLM calls with tool invocations and code execution. This tight coupling creates a new and poorly understood attack surface. A malicious prompt injected into a conversation can compromise later reasoning, trigger dangerous tool calls, or distort final outputs. Despite the centrality of such systems, we currently lack a principled semantic foundation for reasoning about their behaviour and safety. We address this gap by introducing an untyped call-by-value lambda calculus enriched with dynamic information-flow control and a small number of primitives for constructing prompt-response conversations. Our language includes a primitive that invokes an LLM: it serializes a value, sends it to the model as a prompt, and parses the response as a new term. This calculus faithfully represents planner loops and their vulnerabilities, including the mechanisms by which prompt injection alters subsequent computation. The semantics explicitly captures conversations, and so supports reasoning about defenses such as quarantined sub-conversations, isolation of generated code, and information-flow restrictions on what may influence an LLM call. A termination-insensitive noninterference theorem establishes integrity and confidentiality guarantees, demonstrating that a formal calculus can provide rigorous foundations for safe agentic programming.