🤖 AI Summary
This work targets the memory and compute redundancy that arises when each agent in a multi-LoRA large language model system maintains its own KV cache. To mitigate this inefficiency, the authors propose LRAgent, a framework that introduces, for the first time, a KV cache sharing mechanism tailored to multi-LoRA settings. Specifically, LRAgent decomposes the KV cache into a shared base component derived from the pretrained foundation model and low-rank, adapter-specific components associated with individual LoRA modules. By sharing the base cache and storing the adapter caches in compact low-rank form, the framework substantially reduces resource overhead. Combined with a shared-A architecture and a custom Flash-LoRA-Attention kernel, LRAgent achieves throughput and first-token latency approaching those of fully shared caching, while preserving the accuracy of non-shared caching across multiple multi-agent question-answering benchmarks.
📝 Abstract
Role specialization in multi-LLM agent systems is often realized via multi-LoRA, where agents share a pretrained backbone and differ only through lightweight adapters. Despite sharing base model weights, each agent independently builds and stores its own KV cache for the same long, tool-augmented trajectories, incurring substantial memory and compute overhead. Existing KV cache sharing methods largely overlook this multi-LoRA setting. We observe that, across agents, cache differences are dominated by adapter outputs, while activations from the shared pretrained backbone remain highly similar. Based on this observation, we propose LRAgent, a KV cache sharing framework for multi-LoRA agents that decomposes the cache into a shared base component from the pretrained weights and an adapter-dependent component from the LoRA weights. LRAgent reduces memory overhead by sharing the base component and storing the adapter component in its inherent low-rank form; in shared-$A$ multi-LoRA architectures, it further reduces compute overhead by also sharing the low-rank cache, avoiding redundant computation for contexts already processed by other agents. To efficiently reconstruct adapter contributions at runtime, we introduce Flash-LoRA-Attention, a kernel that reorders attention computation to avoid materializing the low-rank cache to full dimension. LRAgent achieves throughput and time-to-first-token latency close to fully shared caching, while preserving accuracy near the non-shared caching baseline across agentic question-answering benchmarks.
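The decomposition described above can be illustrated with a minimal NumPy sketch. All names, shapes, and the LoRA convention ($W + B_i A$ with a shared $A$) are illustrative assumptions, not the paper's implementation: each agent's key cache splits exactly into a shared base cache plus a rank-$r$ cache multiplied by the agent's own small $B_i$ matrix.

```python
import numpy as np

# Illustrative sketch (assumed shapes/convention, not the paper's code).
rng = np.random.default_rng(0)
n, d, r = 8, 16, 4                       # tokens, hidden dim, LoRA rank

X   = rng.standard_normal((n, d))        # token activations (same context for all agents)
W_k = rng.standard_normal((d, d))        # pretrained key projection (shared)
A   = rng.standard_normal((r, d))        # LoRA "A" matrix, shared across agents (shared-A)
B1  = rng.standard_normal((d, r))        # agent 1's adapter "B"
B2  = rng.standard_normal((d, r))        # agent 2's adapter "B"

# Non-shared baseline: each agent builds its full key cache K_i = X (W_k + B_i A)^T.
K1_full = X @ (W_k + B1 @ A).T
K2_full = X @ (W_k + B2 @ A).T

# Decomposed caches: one shared full-dimension base cache, one shared
# low-rank cache of shape (n, r) with r << d.
K_base   = X @ W_k.T                     # shared base component (stored once)
L_shared = X @ A.T                       # shared low-rank component

# Each agent's keys are recovered from the shared caches plus its small B_i.
K1_rec = K_base + L_shared @ B1.T
K2_rec = K_base + L_shared @ B2.T

assert np.allclose(K1_full, K1_rec)
assert np.allclose(K2_full, K2_rec)
```

In this sketch the per-agent storage shrinks from an $(n, d)$ cache to a $(d, r)$ adapter matrix, while the $(n, d)$ base and $(n, r)$ low-rank caches are stored once; the paper's Flash-LoRA-Attention kernel additionally avoids ever materializing `L_shared @ B_i.T` to full dimension.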