🤖 AI Summary
To address the lack of effective online tools for collaborative usability evaluation involving both real and synthetic users in A/B testing of multi-page web designs, this paper proposes a usability assessment framework that integrates design thinking and linguistic decision theory, implemented as an online decision support system that supports role-playing tests. Methodologically, it combines A/B testing, the System Usability Scale (SUS), synthetic-user role-playing, and multi-criteria decision modeling based on linguistic term sets, making subjective user experience more quantifiable and interpretable. Its key contribution is the first integration of linguistic decision methods into an online A/B testing workflow, enabling dynamic, human–machine collaborative usability evaluation. Empirical validation across three Moodle virtual learning environments at the University of Guadalajara, Mexico, indicates that the system significantly improves the comprehensiveness of assessments, the interpretability of feedback, and the efficiency of cross-design comparison.
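The summary names two concrete, computable ingredients: SUS scoring and linguistic term sets. As a rough illustration, the Python sketch below computes a SUS score with the standard Brooke (1996) rule and maps it onto a five-term linguistic set; the term labels and thresholds are illustrative assumptions, not the paper's actual term set or method.

```python
# SUS scoring follows the standard rule; the linguistic mapping below
# (term labels and thresholds) is an illustrative assumption, not the
# paper's actual term set.

def sus_score(responses):
    """responses: the ten 1-5 Likert answers to the SUS items, in order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # odd items (1,3,...) vs. even
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)          # scales the 0-40 raw sum to 0-100

def to_linguistic_term(score):
    """Map a 0-100 SUS score onto a hypothetical five-term linguistic set."""
    bounds = [(25, "very poor"), (50, "poor"), (70, "fair"),
              (85, "good"), (100.1, "excellent")]
    return next(term for upper, term in bounds if score < upper)

score = sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])
print(score, to_linguistic_term(score))      # -> 85.0 excellent
```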
📝 Abstract
In recent years, attention has increasingly focused on enhancing user satisfaction with user interfaces, spanning both mobile applications and websites. One fundamental aspect of human-machine interaction is the concept of web usability. To assess web usability, the A/B testing technique enables the comparison of data between two designs. Broadening such tests to additional designs under evaluation, together with the involvement of both real and fictional users, presents a challenge for which few online tools offer support. We propose a methodology for web usability evaluation based on user-centered approaches such as design thinking and linguistic decision-making, named Linguistic Decision-Making for Web Usability Evaluation. It engages people in role-playing scenarios and conducts a number of usability tests, including the widely recognized System Usability Scale. We incorporate the methodology into a decision support system based on A/B testing, and we apply it in a case study with real users to assess three Moodle platforms at the University of Guadalajara, Mexico.
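The abstract does not specify which linguistic decision model the methodology uses. One common choice for aggregating linguistic ratings across users, and a plausible fit for the cross-design comparison described above, is the 2-tuple linguistic representation (Herrera and Martinez, 2000); the sketch below is a minimal, hypothetical illustration of that idea, with a made-up term set and ratings rather than anything from the paper.

```python
# A minimal, hypothetical sketch of 2-tuple linguistic aggregation for
# comparing two designs; the term set, ratings, and function names are
# illustrative assumptions, not taken from the paper.

TERMS = ["very poor", "poor", "fair", "good", "very good"]

def to_two_tuple(beta):
    """Translate beta in [0, len(TERMS)-1] into a 2-tuple (term, alpha)."""
    i = int(beta + 0.5)            # nearest term index, rounding half up
    return TERMS[i], beta - i      # symbolic translation alpha in [-0.5, 0.5)

def aggregate(indices):
    """Arithmetic mean of the chosen term indices, kept as a 2-tuple."""
    return to_two_tuple(sum(indices) / len(indices))

# Each value is the term index a (real or fictional) user picked when
# rating a design on some usability criterion.
design_a = [3, 4, 3, 2, 4]         # leans "good" / "very good"
design_b = [2, 2, 3, 1, 2]         # leans "fair"

for name, ratings in (("A", design_a), ("B", design_b)):
    term, alpha = aggregate(ratings)
    print(f"Design {name}: ({term}, {alpha:+.2f})")
```

Keeping the symbolic translation alpha alongside the term avoids the information loss of rounding to the nearest label, which is what makes results both comparable across designs and interpretable to evaluators.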