Benchmarking MLLM-based Web Understanding: Reasoning, Robustness and Safety

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) lack systematic evaluation of the reasoning, robustness, and safety capabilities required for end-to-end web applications such as GUI agents and frontend generation. Method: We introduce WebRSSBench, a comprehensive benchmark for web understanding comprising 3,799 QA pairs drawn from 729 real-world websites. It jointly evaluates models along three dimensions: reasoning (eight task categories), robustness (under UI perturbations), and safety (identifying risky interactions). The benchmark employs standardized prompts, deterministic evaluation scripts, and a two-stage quality-control process (automated detection plus human verification) to probe multi-step reasoning over page structure, text, components, and safety-critical behaviors. Contribution/Results: Evaluating 12 state-of-the-art MLLMs reveals significant deficiencies in compositional reasoning, cross-element association, and robustness to perturbations; models are also overly conservative, failing to accurately identify safety risks.

📝 Abstract
Multimodal large language models (MLLMs) are increasingly positioned as AI collaborators for building complex web-related applications such as GUI agents and front-end code generation. However, existing benchmarks largely emphasize visual perception or UI code generation, offering insufficient evaluation of the reasoning, robustness, and safety capabilities required for end-to-end web applications. To bridge this gap, we introduce a comprehensive web understanding benchmark, named WebRSSBench, that jointly evaluates Reasoning, Robustness, and Safety across eight tasks, such as position-relationship reasoning, color robustness, and safety-critical detection. The benchmark is constructed from 729 websites and contains 3,799 question-answer pairs that probe multi-step inference over page structure, text, widgets, and safety-critical interactions. To ensure reliable measurement, we adopt standardized prompts, deterministic evaluation scripts, and multi-stage quality control combining automatic checks with targeted human verification. We evaluate 12 MLLMs on WebRSSBench. The results reveal significant gaps: models still struggle with compositional and cross-element reasoning over realistic layouts, show limited robustness to perturbations of user interfaces and content such as layout rearrangements or visual style shifts, and are overly conservative in recognizing and avoiding safety-critical or irreversible actions. Our code is available at https://github.com/jinliang-byte/webssrbench.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal language models' web reasoning capabilities
Assessing robustness against UI perturbations and visual changes
Testing safety in recognizing critical or irreversible web actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduced WebRSSBench benchmark for web understanding
Evaluated reasoning, robustness, and safety across eight tasks
Used standardized prompts and multi-stage quality control
Junliang Liu
Dalian Maritime University, Dalian, China
Jingyu Xiao
Tsinghua University
Wenxin Tang
Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Wenxuan Wang
Renmin University of China, Beijing, China
Zhixian Wang
Nanyang Technological University, Singapore
Minrui Zhang
Wuhan University, Wuhan, China
Shuanghe Yu
Dalian Maritime University, Dalian, China