🤖 AI Summary
This study addresses the open question of whether large language models (LLMs) can reliably substitute for human experts in Nielsen's Heuristic Evaluation (HE), a method for identifying usability issues in human-computer interaction.
Method: We conduct the first systematic evaluation of GPT-4o on HE, using real webpage screenshots and a structured prompt aligned with Nielsen's 10 heuristics. GPT-4o's outputs are compared quantitatively against HCI expert annotations on recall, false-positive rate, and per-heuristic performance.
Contribution/Results: GPT-4o identifies only 21.2% of expert-identified issues while also reporting 27 new, unverified problems. It performs relatively well on "aesthetic and minimalist design" and "match between system and real world," but shows clear weaknesses on "flexibility and efficiency" and "helping users recognize, diagnose, and recover from errors," with many false positives attributable to hallucination. The study delineates the current capability boundaries of LLMs in HE and proposes five actionable, practice-oriented recommendations for prompt engineering and human-AI collaboration.
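The evaluation setup described above — a structured prompt listing Nielsen's 10 heuristics, paired with a webpage screenshot sent to GPT-4o — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact prompt: the wording, severity-rating instruction, and `build_request` helper are assumptions; only the heuristic list itself is standard.

```python
# Illustrative sketch (NOT the study's exact prompt): assembling a GPT-4o
# chat-completion payload that asks for a heuristic evaluation of one
# webpage screenshot, framed by Nielsen's 10 usability heuristics.
import base64

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def build_request(screenshot_bytes: bytes) -> dict:
    """Pair the heuristic-evaluation prompt with one screenshot image.

    The prompt wording here is a hypothetical stand-in for the
    literature-grounded prompt used in the study.
    """
    prompt = (
        "You are a usability inspector. Evaluate the attached webpage "
        "screenshot against Nielsen's 10 heuristics:\n- "
        + "\n- ".join(NIELSEN_HEURISTICS)
        + "\nFor each issue found, report the heuristic violated, the "
        "interface element involved, and a brief justification."
    )
    # Vision input is passed as a base64 data URL in the message content.
    image_b64 = base64.b64encode(screenshot_bytes).decode("ascii")
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }

# The resulting dict could then be sent with the OpenAI SDK, e.g.
# client.chat.completions.create(**build_request(png_bytes))
```

Keeping the payload construction separate from the API call makes the prompt easy to inspect and vary, which matters given the paper's finding that results are sensitive to prompt design.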
📝 Abstract
Heuristic evaluation is a widely used method in Human-Computer Interaction (HCI) for inspecting interfaces and identifying issues based on heuristics. Recently, Large Language Models (LLMs), such as GPT-4o, have been applied in HCI to assist in persona creation, the ideation process, and the analysis of semi-structured interviews. However, given the need to understand heuristics and the high degree of abstraction required to evaluate them, LLMs may have difficulty conducting heuristic evaluation. Yet prior research has not investigated GPT-4o's performance in heuristic evaluation on web-based systems compared to HCI experts. In this context, this study aims to compare the results of a heuristic evaluation performed by GPT-4o with those of human experts. To this end, we selected a set of screenshots from a web system and asked GPT-4o to perform a heuristic evaluation based on Nielsen's Heuristics using a literature-grounded prompt. Our results indicate that only 21.2% of the issues identified by human experts were also identified by GPT-4o, although it found 27 new issues. We also found that GPT-4o performed better on heuristics related to aesthetic and minimalist design and match between system and real world, whereas it had difficulty identifying issues related to flexibility, control, and user efficiency. Additionally, we noticed that GPT-4o generated several false positives due to hallucinations and attempts to predict issues. Finally, we highlight five takeaways for the conscious use of GPT-4o in heuristic evaluations.