🤖 AI Summary
Generative AI systems suffer from critical usability bottlenecks—including unpredictable outputs, challenging fine-tuning, low transparency, weak user control, and high cognitive load. To address these, this study introduces the first cross-domain usability evaluation framework for generative AI, systematically diagnosing issues along four dimensions: user experience, transparency, user control, and cognitive load. We propose a novel usability enhancement paradigm centered on explainability augmentation, intuitive interaction design, and closed-loop user feedback. Empirical validation integrates human-computer interaction evaluation, cognitive ergonomics analysis, and multidimensional metrics (efficiency, learnability, satisfaction). Results yield domain-specific best practices for content creation, education, and programming—demonstrating significant improvements in task completion rates and user satisfaction. This work delivers the first evidence-based usability guideline for designing trustworthy generative AI systems.
📝 Abstract
Generative AI systems are transforming content creation, but their usability remains a key challenge. This paper examines usability factors such as user experience, transparency, control, and cognitive load. Common challenges include unpredictability and difficulties in fine-tuning outputs. We review evaluation metrics like efficiency, learnability, and satisfaction, highlighting best practices from various domains. Improving interpretability, intuitive interfaces, and user feedback can enhance usability, making generative AI more accessible and effective.