EvAlignUX: Advancing UX Research through LLM-Supported Exploration of Evaluation Metrics

📅 2024-09-23
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
The inherent complexity, unpredictability, and generativity of AI systems pose significant challenges for UX evaluation in HCI. Method: This paper introduces an LLM-augmented UX evaluation planning framework tailored for HCI researchers. Its interactive tool integrates scientific literature knowledge graphs with LLM-based reasoning and introduces an "evaluation alignment" paradigm, supported by a cognitively guided interface that enables systematic exploration of mappings between diverse UX metrics and research objectives. Contributions/Results: (1) a reusable, structured repository of UX problems; (2) the first literature-driven UX evaluation knowledge modeling framework; and (3) empirical validation through a user study with 19 HCI scholars, which demonstrated significant improvements in the clarity, specificity, feasibility, and overall quality of evaluation proposals while fostering deeper methodological reflection.

📝 Abstract
Evaluating UX in the context of AI's complexity, unpredictability, and generative nature presents unique challenges. HCI scholars lack sufficient tool support to build knowledge around diverse evaluation metrics and develop comprehensive UX evaluation plans. In this paper, we introduce EvAlignUX, an innovative system grounded in scientific literature and powered by large language models (LLMs), designed to help HCI scholars explore evaluation metrics and their relationship to potential research outcomes. A user study involving 19 HCI scholars revealed that EvAlignUX significantly improved the perceived clarity, specificity, feasibility, and overall quality of their evaluation proposals. The use of EvAlignUX enhanced participants' thought processes, resulting in the creation of a Question Bank that can be used to guide UX Evaluation Development. Additionally, the influence of researchers' backgrounds on their perceived inspiration and concerns about over-reliance on AI highlights future research directions for AI's role in fostering critical thinking.
Problem

Research questions and friction points this paper is trying to address.

Support HCI researchers in creating comprehensive UX evaluation plans
Explore evaluation metrics and their impact on research outcomes
Shift UX evaluation from method-centric to mindset-centric approach
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered system for UX metric exploration
Scientific literature-based evaluation guidance
UX Question Bank for evaluation development
👥 Authors
Qingxiao Zheng, University at Buffalo (Human-AI Interaction, AI Systems, UX, AI for Social Good)
Minrui Chen, Informatics, University of Illinois Urbana-Champaign, USA
Pranav Sharma, Department of Computer Science, University of Illinois Urbana-Champaign, USA
Yiliu Tang, Informatics, University of Illinois Urbana-Champaign, USA
Mehul Oswal, Department of Computer Science, University of Illinois Urbana-Champaign, USA
Yiren Liu, University of Illinois at Urbana-Champaign (Human Computer Interaction)
Yun Huang, School of Information Sciences, University of Illinois Urbana-Champaign, USA