🤖 AI Summary
Current multimodal large language models (MLLMs) lack systematic evaluation of the reasoning capability, robustness, and safety required for end-to-end web applications such as GUI agents and frontend generation. Method: We introduce WebRSSBench—the first comprehensive benchmark for web understanding—comprising 3,799 QA pairs across 729 real-world websites. It jointly evaluates models along three dimensions: reasoning (8 task categories), robustness (under UI perturbations), and safety (identification of risky interactions). The benchmark employs standardized prompts, deterministic evaluation scripts, and a two-stage quality-control process (“automated detection + human verification”), and probes multi-step reasoning over page structure, text, components, and safety-critical behaviors. Contribution/Results: Evaluating 12 state-of-the-art MLLMs reveals significant deficiencies in compositional reasoning, cross-element association, and robustness to perturbations; models are also overly conservative and fail to accurately distinguish genuinely risky interactions.
📝 Abstract
Multimodal large language models (MLLMs) are increasingly positioned as AI collaborators for building complex web-related applications such as GUI agents and front-end code generation. However, existing benchmarks largely emphasize visual perception or UI code generation, offering insufficient evaluation of the reasoning, robustness, and safety capabilities required for end-to-end web applications. To bridge this gap, we introduce WebRSSBench, a comprehensive web understanding benchmark that jointly evaluates Reasoning, Robustness, and Safety across eight tasks, including position-relationship reasoning, color robustness, and safety-critical detection. The benchmark is constructed from 729 websites and contains 3,799 question-answer pairs that probe multi-step inference over page structure, text, widgets, and safety-critical interactions. To ensure reliable measurement, we adopt standardized prompts, deterministic evaluation scripts, and multi-stage quality control combining automatic checks with targeted human verification. We evaluate 12 MLLMs on WebRSSBench. The results reveal significant gaps: models still struggle with compositional and cross-element reasoning over realistic layouts, show limited robustness to perturbations of user interfaces and content such as layout rearrangements or visual style shifts, and remain overly conservative in recognizing and avoiding safety-critical or irreversible actions. Our code is available at https://github.com/jinliang-byte/webssrbench.
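To make the notion of "deterministic evaluation scripts" concrete, the sketch below shows one way exact-match scoring over question-answer pairs could work. This is a minimal illustration under assumed conventions (the `normalize` and `score` helpers and the short-answer format are hypothetical), not WebRSSBench's actual evaluation code:

```python
def normalize(ans: str) -> str:
    """Canonicalize an answer string so scoring is deterministic:
    strip whitespace, lowercase, and drop a trailing period."""
    return ans.strip().lower().rstrip(".")

def score(predictions: list[str], references: list[str]) -> float:
    """Exact-match accuracy over question-answer pairs."""
    correct = sum(normalize(p) == normalize(r)
                  for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical QA pairs in the spirit of the benchmark's tasks
# (position-relationship reasoning, safety-critical detection).
qa = [
    ("Which button is immediately left of 'Submit'?", "cancel"),
    ("Does clicking 'Confirm' irreversibly delete the account?", "yes"),
]
preds = ["Cancel.", "no"]
print(score(preds, [a for _, a in qa]))  # 0.5
```

Deterministic normalization before comparison is what lets repeated runs of the same model outputs yield identical scores, independent of formatting quirks like capitalization or trailing punctuation.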