🤖 AI Summary
This work addresses the challenge of jointly achieving semantic understanding and physical interaction in complex embodied environments. We propose a tightly integrated architecture that synergistically combines large language models (LLMs) with world models (WMs). Methodologically, we design a multimodal LLM-driven cognitive module coupled with a physics-informed WM to form a closed-loop perception–prediction–action pipeline; internal representation learning and future-state prediction enable task decomposition, dynamic planning, and autonomous decision-making. Our key contributions are threefold: (1) the first systematic analysis of the complementary mechanisms between LLMs and WMs in embodied intelligence; (2) a unified framework bridging high-level semantic reasoning with low-level adherence to physical laws; and (3) empirical validation, across diverse realistic scenarios, of end-to-end execution of complex, long-horizon tasks. Together, these results establish a scalable technical pathway toward embodied artificial general intelligence.
📝 Abstract
Embodied Artificial Intelligence (AI) is a paradigm of intelligent systems for achieving Artificial General Intelligence (AGI), serving as the cornerstone for various applications and driving the evolution of intelligence from cyberspace into physical systems. Recent breakthroughs in Large Language Models (LLMs) and World Models (WMs) have drawn significant attention to embodied AI. On the one hand, LLMs empower embodied AI through semantic reasoning and task decomposition, bringing both high-level natural language instructions and low-level natural language actions into embodied cognition. On the other hand, WMs empower embodied AI by building internal representations and future predictions of the external world, facilitating embodied interactions that comply with physical laws. As such, this paper comprehensively surveys the literature on embodied AI from basics to advances, covering both LLM-driven and WM-driven works. In particular, we first present the history, key technologies, key components, and hardware systems of embodied AI, and discuss its development from a unimodal to a multimodal perspective. We then scrutinize the two burgeoning fields of embodied AI, i.e., embodied AI with LLMs/multimodal LLMs (MLLMs) and embodied AI with WMs, meticulously delineating their indispensable roles in end-to-end embodied cognition and physical-law-driven embodied interactions. Building upon these advances, we further share our insights on the necessity of a joint MLLM-WM driven embodied AI architecture, shedding light on its profound significance in enabling complex tasks within physical worlds. In addition, we examine representative applications of embodied AI, demonstrating its wide applicability in real-world scenarios. Last but not least, we point out future research directions of embodied AI that deserve further investigation.