ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world robotics demands decision-making under partial observability and long-horizon dependencies, yet existing methods, constrained by fixed attention windows or unstable memory mechanisms, struggle to model very long histories. To address this, the paper proposes ELMUR, a transformer whose layers each maintain a local external memory, interact with it via bidirectional cross-attention, and update it with a Least Recently Used (LRU) memory module that either replaces a slot or blends old and new content through a convex mixture. This design sidesteps context-length limits and supports stable long-term memory maintenance without inflating sequence length, enabling dependency modeling over million-step trajectories. Empirically, ELMUR achieves a 100% success rate on T-Maze corridors up to one million steps, outperforms baselines on more than half of the POPGym tasks, and nearly doubles the performance of strong baselines on MIKASA-Robo visual manipulation tasks, demonstrating substantially better history modeling for partially observable reinforcement learning.

📝 Abstract
Real-world robotic agents must act under partial observability and long horizons, where key cues may appear long before they affect decision making. However, most modern approaches rely solely on instantaneous information, without incorporating insights from the past. Standard recurrent or transformer models struggle with retaining and leveraging long-term dependencies: context windows truncate history, while naive memory extensions fail under scale and sparsity. We propose ELMUR (External Layer Memory with Update/Rewrite), a transformer architecture with structured external memory. Each layer maintains memory embeddings, interacts with them via bidirectional cross-attention, and updates them through a Least Recently Used (LRU) memory module using replacement or convex blending. ELMUR extends effective horizons up to 100,000 times beyond the attention window and achieves a 100% success rate on a synthetic T-Maze task with corridors up to one million steps. In POPGym, it outperforms baselines on more than half of the tasks. On MIKASA-Robo sparse-reward manipulation tasks with visual observations, it nearly doubles the performance of strong baselines. These results demonstrate that structured, layer-local external memory offers a simple and scalable approach to decision making under partial observability.
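The write mechanism described in the abstract can be pictured with a short sketch. This is not the paper's implementation: the slot-selection rule, the gating function, and all names (`LRUMemorySketch`, `write`, `gate`) are assumptions chosen only to illustrate replacement versus convex blending into a least recently used slot.

```python
import torch
import torch.nn as nn


class LRUMemorySketch(nn.Module):
    """Hypothetical layer-local memory with LRU slot selection and convex-blend writes."""

    def __init__(self, num_slots: int, dim: int):
        super().__init__()
        self.register_buffer("mem", torch.zeros(num_slots, dim))  # memory embeddings
        self.register_buffer("age", torch.zeros(num_slots))       # steps since each slot was last written
        self.gate = nn.Linear(2 * dim, 1)                          # produces the blend coefficient

    @torch.no_grad()
    def write(self, candidate: torch.Tensor) -> None:
        """Write one candidate embedding of shape (dim,) into the least recently used slot."""
        slot = int(torch.argmax(self.age))            # LRU slot = the one untouched the longest
        old = self.mem[slot]
        # Convex mixture: alpha near 1 approximates full replacement,
        # alpha near 0 keeps the existing slot content almost unchanged.
        alpha = torch.sigmoid(self.gate(torch.cat([old, candidate])))
        self.mem[slot] = (1.0 - alpha) * old + alpha * candidate
        self.age += 1
        self.age[slot] = 0.0                          # the written slot becomes most recently used
```

In a full implementation the `candidate` would come from cross-attention between a layer's tokens and its memory rather than being passed in directly, as sketched after the Innovation list below.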
Problem

Research questions and friction points this paper is trying to address.

Addresses partial observability in long-horizon robotic decision making
Tackles retention and utilization of long-term dependencies in RL
Overcomes context window limitations in transformer architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer with structured external memory layers
Bidirectional cross-attention for memory interaction
LRU-based memory update through replacement or convex blending (sketched below)
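Putting the three pieces together, a single layer could look roughly like the sketch below. The module structure, the `torch.nn.MultiheadAttention` usage, and the per-slot gate are illustrative assumptions rather than ELMUR's actual design; only the high-level flow (tokens read memory via cross-attention, memory is rewritten from the tokens with a convex mixture) follows the paper's description.

```python
import torch
import torch.nn as nn


class ELMURStyleLayerSketch(nn.Module):
    """Illustrative layer: token<->memory cross-attention plus a convex memory rewrite."""

    def __init__(self, dim: int = 256, num_slots: int = 16, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)   # within-window attention
        self.read_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)   # tokens attend to memory
        self.write_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)  # memory attends to tokens
        self.gate = nn.Linear(dim, 1)  # per-slot convex blend coefficient (assumed form)

    def forward(self, tokens: torch.Tensor, memory: torch.Tensor):
        # tokens: (batch, seq_len, dim); memory: (batch, num_slots, dim)
        tokens = tokens + self.self_attn(tokens, tokens, tokens)[0]   # local context within the window
        tokens = tokens + self.read_attn(tokens, memory, memory)[0]   # read: tokens query the memory slots
        candidate = self.write_attn(memory, tokens, tokens)[0]        # write: memory slots query the tokens
        alpha = torch.sigmoid(self.gate(candidate))                   # (batch, num_slots, 1)
        memory = (1.0 - alpha) * memory + alpha * candidate           # convex update/rewrite of each slot
        return tokens, memory
```

In use, `memory` would be carried across attention windows, so cues observed far outside the current window stay addressable without growing the token sequence.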