🤖 AI Summary
This work addresses spatial disorientation and cyclic path failures, two problems that commonly impede large language model agents in autonomous web navigation. To overcome these limitations, the authors propose V-GEMS, a multimodal agent architecture that integrates visual grounding with an explicit memory stack featuring state tracking. This design enables precise identification of interactive elements, long-term contextual awareness, and structured backtracking along the traversal path. The study also introduces an updatable dynamic benchmark to evaluate navigation adaptability; on it, vision-grounded resolution of ambiguous elements and avoidance of cyclic errors are assessed in web traversal tasks for the first time. Experimental results show that V-GEMS outperforms the WebWalker baseline by 28.7% on this benchmark, substantially improving the robustness and efficacy of complex web navigation.
📄 Abstract
Autonomous web navigation requires agents to perceive complex visual environments and maintain long-term context, yet current Large Language Model (LLM) based agents often struggle with spatial disorientation and navigation loops. In this paper, we propose V-GEMS (Visual Grounding and Explicit Memory System), a robust and broadly applicable multimodal agent architecture designed for precise and resilient web traversal. Our agent integrates visual grounding to resolve ambiguous interactive elements and introduces an explicit memory stack with state tracking. This dual mechanism allows the agent to maintain a structured map of its traversal path, enabling valid backtracking and preventing cyclical failures in deep navigation tasks. We also introduce an updatable dynamic benchmark to rigorously evaluate adaptability. Experiments show that V-GEMS significantly outperforms the WebWalker baseline, achieving a substantial 28.7% performance gain. Code is available at https://github.com/Vaultttttttttttt/V-GEMS.