🤖 AI Summary
The inherent complexity, unpredictability, and generative nature of AI systems pose significant challenges for UX evaluation in HCI. Method: This paper introduces an LLM-augmented UX evaluation planning framework tailored for HCI researchers. Its interactive tool, EvAlignUX, integrates scientific-literature knowledge graphs with LLM-based reasoning and introduces an “evaluation alignment” paradigm: a cognitively guided interface supports systematic exploration of mappings between diverse UX metrics and research objectives. Contributions/Results: (1) a reusable, structured repository of UX problems; (2) the first literature-driven UX evaluation knowledge modeling framework; and (3) empirical validation through a user study with 19 HCI scholars, which demonstrated significant improvements in the perceived clarity, specificity, feasibility, and overall quality of evaluation proposals, and fostered deeper methodological reflection.
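To make the “evaluation alignment” idea concrete, here is a minimal, hypothetical sketch of how a literature-derived metric graph might be paired with an LLM prompt that reasons about metric-to-objective fit. This is not the paper's implementation; the data structure, the `candidate_metrics` retrieval step, and the `alignment_prompt` helper are all illustrative assumptions.

```python
# Illustrative sketch (not EvAlignUX's actual implementation): pair a tiny
# literature-derived metric graph with an LLM prompt for "evaluation
# alignment" -- mapping candidate UX metrics to a research objective.
# The graph contents and function names below are hypothetical.

METRIC_GRAPH = {
    "perceived trust": {"construct": "trust", "sources": ["Hoffman et al. 2018"]},
    "task completion time": {"construct": "efficiency", "sources": ["ISO 9241-11"]},
    "NASA-TLX": {"construct": "workload", "sources": ["Hart & Staveland 1988"]},
}

def candidate_metrics(objective: str) -> list[str]:
    """Naive retrieval: keep metrics whose construct is named in the objective."""
    text = objective.lower()
    return [m for m, meta in METRIC_GRAPH.items() if meta["construct"] in text]

def alignment_prompt(objective: str) -> str:
    """Compose the prompt an LLM would receive to reason about metric fit."""
    metrics = candidate_metrics(objective) or list(METRIC_GRAPH)
    lines = [
        f"- {m} (sources: {', '.join(METRIC_GRAPH[m]['sources'])})" for m in metrics
    ]
    return (
        f"Research objective: {objective}\n"
        "Candidate UX metrics from the literature:\n" + "\n".join(lines) + "\n"
        "For each metric, explain how well it measures the objective and "
        "what evaluation outcome it would support."
    )

if __name__ == "__main__":
    # The returned prompt would then be sent to an LLM (e.g., a
    # chat-completion API) for the alignment reasoning step.
    print(alignment_prompt("Does the assistant reduce workload in data triage?"))
```

In a real system the retrieval step would query an actual knowledge graph rather than keyword-match a dictionary, but the division of labor is the point: the literature graph grounds which metrics exist and where they come from, while the LLM reasons about how they map onto the researcher's objective.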
📝 Abstract
Evaluating UX in the context of AI's complexity, unpredictability, and generative nature presents unique challenges. HCI scholars lack sufficient tool support to build knowledge around diverse evaluation metrics and to develop comprehensive UX evaluation plans. In this paper, we introduce EvAlignUX, an innovative system grounded in scientific literature and powered by large language models (LLMs), designed to help HCI scholars explore evaluation metrics and their relationship to potential research outcomes. A user study involving 19 HCI scholars revealed that EvAlignUX significantly improved the perceived clarity, specificity, feasibility, and overall quality of their evaluation proposals. The use of EvAlignUX enhanced participants' thought processes, resulting in the creation of a Question Bank that can be used to guide UX evaluation development. Additionally, the influence of researchers' backgrounds on their perceived inspiration, along with concerns about over-reliance on AI, highlights future research directions for AI's role in fostering critical thinking.