🤖 AI Summary
Existing benchmarks evaluate multimodal large language models (MLLMs) on WebUI-to-Code tasks only via end-to-end output, neglecting critical sub-capabilities—including WebUI perception, HTML programming proficiency, and WebUI–HTML comprehension—thus lacking fine-grained diagnostic capability.
Method: We propose WebUIBench, the first multidimensional evaluation benchmark tailored to this task, grounded in software engineering principles and featuring a fine-grained assessment framework covering four core capabilities. It comprises 21K high-quality, human-verified question-answer pairs derived from real-world websites, annotated across multiple dimensions.
Contribution/Results: WebUIBench enables systematic evaluation of 29 state-of-the-art MLLMs. Our experiments provide the first empirical characterization of model capability distributions across development stages and uncover pervasive deficiencies. The benchmark delivers interpretable, reproducible evaluation metrics to guide capability alignment and toolchain optimization.
📝 Abstract
With the rapid advancement of Generative AI technology, Multimodal Large Language Models (MLLMs) have the potential to act as AI software engineers capable of executing complex web application development. Because a model requires a confluence of multidimensional sub-capabilities to address the challenges of the various development phases, constructing a multi-view evaluation framework is crucial for accurately guiding improvements in development efficiency. However, existing benchmarks usually fail to assess these sub-capabilities and focus solely on webpage generation outcomes. In this work, we draw inspiration from the principles of software engineering and propose WebUIBench, a benchmark systematically designed to evaluate MLLMs in four key areas: WebUI Perception, HTML Programming, WebUI-HTML Understanding, and WebUI-to-Code. WebUIBench comprises 21K high-quality question-answer pairs derived from over 0.7K real-world websites. An extensive evaluation of 29 mainstream MLLMs uncovers the skill characteristics of these models and the various weaknesses they exhibit during the development process.
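The four-capability breakdown implies scoring each dimension separately rather than reporting a single end-to-end metric. A minimal sketch of such per-capability aggregation follows; the record layout and field names here are hypothetical illustrations, not WebUIBench's actual schema:

```python
from collections import defaultdict

# Hypothetical graded results; in practice each of the 21K QA pairs would be
# tagged with one of the four capability dimensions and a correctness verdict.
results = [
    {"capability": "WebUI Perception", "correct": True},
    {"capability": "WebUI Perception", "correct": False},
    {"capability": "HTML Programming", "correct": True},
    {"capability": "WebUI-HTML Understanding", "correct": True},
    {"capability": "WebUI-to-Code", "correct": False},
]

def per_capability_accuracy(records):
    """Aggregate graded answers into one accuracy score per capability."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["capability"]] += 1
        hits[r["capability"]] += int(r["correct"])
    return {cap: hits[cap] / totals[cap] for cap in totals}

print(per_capability_accuracy(results))
```

Reporting a score vector like this, instead of a single number, is what lets the benchmark localize a model's weakness to a specific development phase.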