MP-GUI: Modality Perception with MLLMs for GUI Understanding

📅 2025-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multimodal large language models (MLLMs) exhibit limited performance in GUI understanding due to the absence of explicit spatial structure modeling, compounded by the scarcity of high-quality GUI datasets with precise spatial annotations—largely constrained by privacy concerns and annotation noise. Method: We propose the first GUI-dedicated trimodal perceptual model integrating image, text, and spatial coordinate modalities. Our approach introduces a novel spatial structure optimization strategy and an adaptive fusion gate mechanism, coupled with an end-to-end automated GUI synthesis pipeline that generates spatially grounded training data without human annotation. Contribution/Results: Leveraging only limited supervised signals, our method significantly improves performance across downstream GUI tasks—including component localization, recognition, and task-oriented navigation—outperforming state-of-the-art MLLM baselines on multiple benchmarks. This work establishes a new paradigm for structured interface understanding under low-resource conditions.

📝 Abstract
The graphical user interface (GUI) has become integral to modern society, making GUI understanding crucial for human-centric systems. However, unlike natural images or documents, GUIs comprise artificially designed graphical elements arranged to convey specific semantic meanings. Current multi-modal large language models (MLLMs), already proficient in processing graphical and textual components, struggle with GUI understanding due to the lack of explicit spatial structure modeling. Moreover, obtaining high-quality spatial structure data is challenging due to privacy issues and noisy environments. To address these challenges, we present MP-GUI, an MLLM specially designed for GUI understanding. MP-GUI features three precisely specialized perceivers that extract graphical, textual, and spatial modalities from the screen as GUI-tailored visual clues, refined with a spatial structure refinement strategy and adaptively combined via a fusion gate to meet the specific preferences of different GUI understanding tasks. To cope with the scarcity of training data, we also introduce a pipeline for automatic data collection. Extensive experiments demonstrate that MP-GUI achieves impressive results on various GUI understanding tasks with limited data.
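The abstract describes adaptively combining the three perceivers' outputs via a fusion gate, but does not give its equations here. A minimal sketch of one plausible such gate, assuming softmax-normalized per-modality weights blending graphical, textual, and spatial feature vectors (the names `fusion_gate` and `scores` are illustrative, not from the paper):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fusion_gate(graphical, textual, spatial, scores):
    """Blend three same-length modality feature vectors using
    softmax-normalized gate scores (a task-dependent weighting)."""
    weights = softmax(scores)
    fused = [
        weights[0] * g + weights[1] * t + weights[2] * s
        for g, t, s in zip(graphical, textual, spatial)
    ]
    return fused, weights

# Example: three 4-dim modality features; the gate scores favor
# the spatial clue, as a grounding-style task might.
g = [1.0, 0.0, 0.0, 0.0]
t = [0.0, 1.0, 0.0, 0.0]
s = [0.0, 0.0, 1.0, 1.0]
fused, w = fusion_gate(g, t, s, scores=[0.1, 0.1, 2.0])
```

In practice such gate scores would be produced by a small learned network conditioned on the task or instruction, so different GUI tasks (e.g. localization vs. navigation) can emphasize different modalities; the sketch above only shows the weighting step.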
Problem

Research questions and friction points this paper is trying to address.

Lack of explicit spatial structure modeling in GUIs
Challenges in obtaining high-quality spatial structure data
Scarcity of training data for GUI understanding tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Specialized perceivers for GUI modalities extraction
Spatial structure refinement strategy for accurate modeling
Automated data collection pipeline for training
Ziwei Wang
College of Computer Science and Technology, Zhejiang University, China
Weizhi Chen
College of Computer Science and Technology, Zhejiang University, China
Leyang Yang
College of Computer Science and Technology, Zhejiang University, China
Sheng Zhou
Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, Zhejiang University, China
Shengchu Zhao
Ant Group
Hanbei Zhan
College of Computer Science and Technology, Zhejiang University, China
Jiongchao Jin
Ant Group
Liangcheng Li
College of Computer Science and Technology, Zhejiang University, China
Zirui Shao
Zhejiang University
Jiajun Bu
College of Computer Science and Technology, Zhejiang University, China