A2AS: Agentic AI Runtime Security and Self-Defense

📅 2025-10-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
AI agents and large language model (LLM) applications lack runtime security guarantees in open environments. Method: This paper proposes BASIC, the first lightweight, zero-architecture-intrusion runtime defense framework that requires no model retraining and incurs minimal overhead. Built upon five pillars—Behavioral Certificates, Attested Prompts, Safety Boundaries, Contextual Defense, and Policy Encoding—BASIC enables behavioral trust attestation, contextual integrity protection, and dynamic self-defense. Contribution/Results: Experiments demonstrate zero inference latency, no external dependencies or system refactoring, and support for fine-grained, customizable security policies. Crucially, this work pioneers the adaptation of the HTTPS-inspired defense-in-depth paradigm to the LLM runtime layer, establishing the first industrially viable, standardized solution for AI Agent and LLM Security (A2AS).

Technology Category

Application Category

📝 Abstract
The A2AS framework is introduced as a security layer for AI agents and LLM-powered applications, similar to how HTTPS secures HTTP. A2AS enforces certified behavior, activates model self-defense, and ensures context window integrity. It defines security boundaries, authenticates prompts, applies security rules and custom policies, and controls agentic behavior, enabling a defense-in-depth strategy. The A2AS framework avoids latency overhead, external dependencies, architectural changes, model retraining, and operational complexity. The BASIC security model is introduced as the A2AS foundation: (B) Behavior certificates enable behavior enforcement, (A) Authenticated prompts enable context window integrity, (S) Security boundaries enable untrusted input isolation, (I) In-context defenses enable secure model reasoning, (C) Codified policies enable application-specific rules. This first paper in the series introduces the BASIC security model and the A2AS framework, exploring their potential toward establishing the A2AS industry standard.
Problem

Research questions and friction points this paper is trying to address.

Securing AI agents and LLM applications against threats
Enforcing certified behavior and context integrity for models
Establishing security boundaries without performance overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

A2AS framework provides security layer for AI agents
BASIC model enforces behavior certificates and authenticated prompts
Framework ensures context integrity without architectural changes
🔎 Similar Papers
No similar papers found.
E
Eugene Neelou
I
Ivan Novikov
M
Max Moroz
Om Narayan
Om Narayan
New York University
Cybersecurity
T
Tiffany Saade
M
Mika Ayenson
I
Ilya Kabanov
J
Jen Ozmen
Edward Lee
Edward Lee
Vineeth Sai Narajala
Vineeth Sai Narajala
Security Engineer, Meta | Amazon Web Services | Nordstrom | University of Washington - Seattle
CybersecurityGenAI
E
Emmanuel Guilherme Junior
K
Ken Huang
H
Huseyin Gulsin
J
Jason Ross
M
Marat Vyshegorodtsev
A
Adelin Travers
I
Idan Habler
R
Rahul Jadav