AI Summary
To address the challenge of online adaptation and robust execution of goal-directed behavior in complex, long-horizon robotic rearrangement tasks, this paper proposes a hierarchical active inference architecture. At the high level, task-level active inference operates over a discrete state space, enabling skill switching and goal recovery without offline training; at the low level, a tightly coupled whole-body continuous controller performs closed-loop action optimization. To our knowledge, this is the first demonstration that end-to-end active inference control can scale to a modern robotics benchmark while jointly providing long-horizon planning and real-time adaptability. Evaluated on three long-horizon rearrangement tasks in the Habitat benchmark, the method significantly outperforms state-of-the-art approaches, improving task success rates by 12.6%–23.4%, and shows superior robustness to perceptual noise and environmental disturbances.
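The high-level step described above, selecting a discrete skill by minimizing expected free energy, can be illustrated with a minimal toy sketch. All matrices, skill names, and numbers below are illustrative assumptions rather than details from the paper, and the low-level whole-body controller is omitted:

```python
import numpy as np

# Toy sketch of discrete active inference for high-level skill selection.
# All quantities here are illustrative assumptions (not from the paper).

def expected_free_energy(belief, B, A, log_C):
    """One-step expected free energy G(skill) = risk + ambiguity.

    belief: q(s), current belief over discrete task states
    B:      transition model p(s'|s) under this skill (columns sum to 1)
    A:      likelihood p(o|s) (columns sum to 1)
    log_C:  log of the preferred outcome distribution
    """
    q_next = B @ belief                      # predicted next-state belief
    q_obs = A @ q_next                       # predicted outcome distribution
    # Risk: KL divergence from predicted outcomes to preferred outcomes.
    risk = float(np.sum(q_obs * (np.log(q_obs + 1e-16) - log_C)))
    # Ambiguity: expected entropy of outcomes given the next state.
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)
    ambiguity = float(H_A @ q_next)
    return risk + ambiguity

def select_skill(belief, B_skills, A, log_C):
    """Return the index of the skill with the lowest expected free energy."""
    G = [expected_free_energy(belief, B, A, log_C) for B in B_skills]
    return int(np.argmin(G))

# Two hidden states: 0 = object not placed, 1 = object placed.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                   # noisy observation of the state
log_C = np.log(np.array([0.1, 0.9]))         # prefer observing "placed"
B_place = np.array([[0.1, 0.0],
                    [0.9, 1.0]])             # "place" skill: 0 -> 1 w.p. 0.9
B_idle = np.identity(2)                      # "idle" skill: state unchanged
belief = np.array([1.0, 0.0])                # object believed not placed yet

best = select_skill(belief, [B_place, B_idle], A, log_C)  # picks "place" (index 0)
```

In the paper's architecture the selected discrete skill would then be handed to the continuous whole-body controller for closed-loop execution; re-running the selection as beliefs are updated is what allows switching skills or recovering after a failure.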
Abstract
Despite growing interest in active inference for robotic control, its application to complex, long-horizon tasks remains untested. We address this gap by introducing a fully hierarchical active inference architecture for goal-directed behavior in realistic robotic settings. Our model couples a high-level active inference planner, which selects among discrete skills, with a whole-body active inference controller that realizes them. This unified approach enables flexible skill composition, online adaptability, and recovery from task failures without requiring offline training. Evaluated on the Habitat Benchmark for mobile manipulation, our method outperforms state-of-the-art baselines across all three long-horizon tasks, demonstrating for the first time that active inference can scale to the complexity of modern robotics benchmarks.