🤖 AI Summary
Existing mobile GUI automation agents suffer from planning failures in dynamic multimodal interfaces, primarily due to the tight coupling among textual, visual, and spatial modalities, and to heterogeneous action spaces across pages and tasks. To address this, we propose MobA, an adaptive MLLM-based agent framework for complex mobile GUIs. Our approach introduces a reflective adaptive planning module with error-recovery capabilities; a hierarchical, multi-dimensional memory system integrating short-term operational traces, long-term task patterns, and cross-application experience; and a GUI state reflection mechanism coupled with dynamic action-space alignment. Evaluated on our newly constructed benchmark MobBench and on the existing AndroidArena benchmark, our framework achieves an 18.7% absolute improvement in task success rate and significantly enhances cross-page generalization, enabling robust end-to-end automation of complex, real-world mobile GUI tasks.
📝 Abstract
Existing Multimodal Large Language Model (MLLM)-based agents face significant challenges in handling complex GUI (Graphical User Interface) interactions on devices. These challenges arise from the dynamic and structured nature of GUI environments, which integrate text, images, and spatial relationships, as well as from the variability of action spaces across different pages and tasks. To address these limitations, we propose MobA, a novel MLLM-based mobile assistant system. MobA introduces an adaptive planning module that incorporates a reflection mechanism for error recovery and dynamically adjusts plans to align with the real environment context and the action module's execution capacity. Additionally, a multifaceted memory module provides comprehensive memory support to enhance adaptability and efficiency. We also present MobBench, a dataset designed for complex mobile interactions. Experimental results on MobBench and AndroidArena demonstrate MobA's ability to handle dynamic GUI environments and perform complex mobile tasks.
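The abstract describes a plan-execute-reflect loop with memory-backed replanning and alignment to the page's current action space. A minimal sketch of that loop, with all class and method names hypothetical (the paper's actual interfaces are not given here):

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    """Hierarchical memory sketch: short-term operational traces
    plus long-term task patterns (names are illustrative)."""
    short_term: list = field(default_factory=list)
    long_term: dict = field(default_factory=dict)


class AdaptiveAgent:
    """Toy plan-execute-reflect loop in the spirit of the described system;
    not the paper's implementation."""

    def __init__(self, executor, action_space):
        self.executor = executor            # callable: step -> "ok" | "fail"
        self.action_space = set(action_space)
        self.memory = Memory()

    def plan(self, goal):
        # Dynamic action-space alignment: reuse a cached task pattern if one
        # exists, then keep only steps the current page can actually execute.
        candidate_steps = self.memory.long_term.get(goal, [goal])
        return [s for s in candidate_steps if s in self.action_space]

    def run(self, goal, max_retries=2):
        for _ in range(max_retries + 1):
            steps = self.plan(goal)
            if not steps:
                return False                # nothing executable on this page
            results = []
            for step in steps:
                result = self.executor(step)
                self.memory.short_term.append((step, result))  # operational trace
                results.append(result)
                if result != "ok":
                    break                   # reflection hook: abandon plan, replan
            if all(r == "ok" for r in results):
                self.memory.long_term[goal] = steps  # cache successful pattern
                return True
        return False
```

For example, an agent whose executor succeeds on `"open_app"` completes that goal and caches the plan in long-term memory, while a goal outside the page's action space yields an empty plan and fails cleanly rather than attempting an unsupported action.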