🤖 AI Summary
This work proposes a context-aware, interactive feedback approach to address the ambiguity and inefficiency of mobile app user feedback, which often lacks contextual information and thus increases developers' clarification costs. Leveraging multimodal large language models (MLLMs), the method generates adaptive follow-up questions in real time based on contextual cues, such as a screenshot of the screen where the issue occurred, to collaboratively construct structured, high-quality bug reports or feature requests with the user. The approach was implemented as an iOS framework and evaluated within a real-world gym app, where it enabled users to report feedback more easily and effectively. An assessment by two industry experts of the 54 resulting reports showed higher quality for both bug reports and feature requests compared to the app's traditional form-based submissions, particularly in terms of completeness.
📝 Abstract
User feedback is essential for the success of mobile apps, yet what users report and what developers need often diverge. Research shows that users frequently submit vague feedback and omit essential contextual details, leading to incomplete reports and time-consuming clarification discussions. To overcome this challenge, we propose FeedAIde, a context-aware, interactive feedback approach that supports users during the reporting process by leveraging the reasoning capabilities of Multimodal Large Language Models. FeedAIde captures contextual information, such as a screenshot of the screen where the issue emerges, and uses it to pose adaptive follow-up questions that collaboratively refine, together with the user, a rich feedback report containing the information developers need. We implemented FeedAIde as an iOS framework and evaluated it with the users of a gym's app. Compared to the app's simple feedback form, participants rated FeedAIde as easier to use and more helpful for reporting feedback. An assessment by two industry experts of the resulting 54 reports showed that FeedAIde improved the quality of both bug reports and feature requests, particularly in terms of completeness. The findings of our study demonstrate the potential of context-aware, GenAI-powered feedback reporting to enhance the experience for users and increase the information value for developers.