WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks evaluate multimodal large language models (MLLMs) on WebUI-to-Code tasks only via end-to-end output, neglecting critical sub-capabilities—including WebUI perception, HTML programming proficiency, and WebUI–HTML comprehension—thus lacking fine-grained diagnostic capability. Method: We propose WebUIBench, the first multidimensional evaluation benchmark tailored to this task, grounded in software engineering principles and featuring a fine-grained assessment framework covering four core capabilities. It comprises 21K high-quality, human-verified question-answer pairs derived from real-world websites, annotated across multiple dimensions. Contribution/Results: WebUIBench enables systematic evaluation of 29 state-of-the-art MLLMs. Our experiments provide the first empirical characterization of model capability distributions across development stages and uncover pervasive deficiencies. The benchmark delivers interpretable, reproducible evaluation metrics to guide capability alignment and toolchain optimization.

📝 Abstract
With the rapid advancement of Generative AI technology, Multimodal Large Language Models (MLLMs) have the potential to act as AI software engineers capable of executing complex web application development. Since a model requires a confluence of multidimensional sub-capabilities to address the challenges of the various development phases, constructing a multi-view evaluation framework is crucial for accurately guiding improvements in development efficiency. However, existing benchmarks usually fail to assess these sub-capabilities and focus solely on webpage generation outcomes. In this work, we draw inspiration from the principles of software engineering and propose WebUIBench, a benchmark systematically designed to evaluate MLLMs in four key areas: WebUI Perception, HTML Programming, WebUI-HTML Understanding, and WebUI-to-Code. WebUIBench comprises 21K high-quality question-answer pairs derived from over 0.7K real-world websites. Our extensive evaluation of 29 mainstream MLLMs uncovers the skill characteristics and various weaknesses that models encounter during the development process.
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs' sub-capabilities in web development tasks
Assessing multimodal models' performance in WebUI-to-code conversion
Identifying weaknesses in MLLMs for real-world website development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal evaluation framework for MLLMs
21K QA pairs from real-world websites
Assesses four key web development areas
Authors

Zhiyu Lin
Beijing Jiaotong University
Zhengda Zhou
Institute of Artificial Intelligence (TeleAI), China Telecom; Nanjing University
Zhiyuan Zhao
Institute of Artificial Intelligence (TeleAI), China Telecom
Tianrui Wan
Northwestern Polytechnical University
Yilun Ma
Northwestern Polytechnical University
Junyu Gao
Institute of Artificial Intelligence (TeleAI), China Telecom; Northwestern Polytechnical University
Xuelong Li
Institute of Artificial Intelligence (TeleAI), China Telecom