🤖 AI Summary
Existing multimodal large language models (MLLMs) exhibit limited performance in GUI understanding due to the absence of explicit spatial structure modeling, compounded by the scarcity of high-quality GUI datasets with precise spatial annotations—largely constrained by privacy concerns and annotation noise. Method: We propose the first GUI-dedicated trimodal perceptual model integrating image, text, and spatial coordinate modalities. Our approach introduces a novel spatial structure optimization strategy and an adaptive fusion gate mechanism, coupled with an end-to-end automated GUI synthesis pipeline that generates spatially grounded training data without human annotation. Contribution/Results: Leveraging only limited supervised signals, our method significantly improves performance across downstream GUI tasks—including component localization, recognition, and task-oriented navigation—outperforming state-of-the-art MLLM baselines on multiple benchmarks. This work establishes a new paradigm for structured interface understanding under low-resource conditions.
📝 Abstract
Graphical user interfaces (GUIs) have become integral to modern society, making their understanding crucial for human-centric systems. However, unlike natural images or documents, GUIs comprise artificially designed graphical elements arranged to convey specific semantic meanings. Although current multi-modal large language models (MLLMs) are already proficient in processing graphical and textual components, they struggle with GUI understanding due to the lack of explicit spatial structure modeling. Moreover, obtaining high-quality spatial structure data is challenging due to privacy issues and noisy environments. To address these challenges, we present MP-GUI, an MLLM specially designed for GUI understanding. MP-GUI features three precisely specialized perceivers that extract graphical, textual, and spatial modalities from the screen as GUI-tailored visual clues, refined by a spatial structure refinement strategy and adaptively combined via a fusion gate to meet the specific preferences of different GUI understanding tasks. To cope with the scarcity of training data, we also introduce a pipeline for automatic data collection. Extensive experiments demonstrate that MP-GUI achieves impressive results on various GUI understanding tasks with limited data.
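The adaptive fusion gate described above can be sketched in miniature: a learned gate produces task-dependent weights over the three modality features and returns their weighted combination. This is a minimal NumPy illustration, not the paper's implementation; the parameter shapes (`W`, `b`) and the single-vector-per-modality simplification are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fusion_gate(graphical, textual, spatial, W, b):
    """Adaptively combine three modality features with a learned gate.

    graphical, textual, spatial: (d,) feature vectors, one per perceiver
    W: (3*d, 3) gate projection, b: (3,) bias -- hypothetical parameters,
    standing in for whatever parameterization the model actually learns.
    Returns a (d,) convex combination weighted by the gate.
    """
    feats = np.stack([graphical, textual, spatial])   # (3, d)
    logits = np.concatenate(feats) @ W + b            # (3,) one logit per modality
    gate = softmax(logits)                            # task-adaptive weights, sum to 1
    return gate @ feats                               # (d,) gated fusion
```

Because the gate weights are a softmax, the fused output is a convex combination of the three modality features, so a task that relies mostly on spatial cues can up-weight the spatial perceiver without discarding the other two.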