🤖 AI Summary
Existing AI agents predominantly rely on scripted execution and external orchestration, lacking autonomous, environment-triggered behavior; moreover, homogeneous perception mechanisms across agents make information isolation ineffective, with leakage rates as high as 83%. Method: We propose a bottom-up agent architecture that embeds LLM-based agents within dynamic, partially observable environments, where all actions are autonomously triggered by environmental state transitions. Inspired by the biological concept of *Umwelt*, we introduce "aspects"—a mechanism that lets distinct agent groups adopt heterogeneous perceptual perspectives, thereby enforcing fine-grained information isolation. The architecture is formally modeled as a partially observable Markov decision process (POMDP). Contribution/Results: Our design eliminates cross-agent information leakage by construction, achieving zero leakage in evaluation while improving runtime efficiency. It establishes a paradigm for secure, adaptive AI agent systems grounded in environmental autonomy and perceptual differentiation.
📝 Abstract
Agentic LLM systems are often little more than autonomous chatbots: actors following scripts, frequently controlled by an unreliable director. This work introduces a bottom-up framework that situates AI agents in their environment, with all behaviors triggered by changes in that environment. It introduces the notion of aspects, analogous to the idea of umwelt, whereby different sets of agents perceive their environment differently from one another, enabling clearer control of information flow. We provide an illustrative implementation and show that, compared to a typical architecture, which leaks information up to 83% of the time, aspective agentic AI achieves zero information leakage. We anticipate that this concept of specialist agents working efficiently in their own information niches can improve both security and efficiency.
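The aspect mechanism can be pictured as observation filtering at the environment boundary: each agent group is handed only the slice of world state its aspect exposes, so out-of-scope information can never reach (and thus never leak from) an agent. A minimal sketch, with all names (`Aspect`, `Environment`, `observe`) hypothetical rather than the paper's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Aspect:
    """A perceptual perspective: the set of state fields a group may see."""
    name: str
    visible_fields: frozenset

class Environment:
    def __init__(self, state: dict):
        self._state = dict(state)  # full world state, never handed out directly

    def observe(self, aspect: Aspect) -> dict:
        # Filtering happens inside the environment, not inside the agent,
        # so isolation holds even if an agent's prompt or policy misbehaves.
        return {k: v for k, v in self._state.items()
                if k in aspect.visible_fields}

env = Environment({"patient_record": "...", "room_temp": 21.5, "door_open": False})

clinical = Aspect("clinical", frozenset({"patient_record", "room_temp"}))
facilities = Aspect("facilities", frozenset({"room_temp", "door_open"}))

# The facilities group structurally cannot perceive the patient record.
assert "patient_record" not in env.observe(facilities)
```

The design choice worth noting is that isolation is enforced by construction at the perception step, rather than by instructing agents not to share information, which is what makes zero leakage possible.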