A Comparison of Conversational Models and Humans in Answering Technical Questions: the Firefox Case

📅 2025-10-24
🤖 AI Summary
Despite growing interest in retrieval-augmented generation (RAG) for developer support, its real-world efficacy in open-source software (OSS) contexts remains empirically unvalidated. Method: This study presents the first systematic, human-evaluated assessment of RAG for technical Q&A in a large-scale OSS project, Mozilla Firefox, comparing responses from domain experts, a standard GPT model, and a RAG-enhanced GPT model. Responses to a test set derived from authentic developer chat logs are rated by domain experts via double-blind annotation along three dimensions: helpfulness, comprehensiveness, and conciseness. Contribution/Results: RAG-GPT achieves higher comprehensiveness than human experts (62.50% vs. 54.17%) and comes close in helpfulness (75.00% vs. 79.17%), albeit with more redundant answers. This work establishes an empirical benchmark for RAG in OSS collaboration, demonstrating its viability for augmenting core maintainers and improving the scalability of technical support without compromising response quality.

📝 Abstract
The use of Large Language Models (LLMs) to support software development tasks has steadily increased in recent years, from assisting developers in coding activities to providing conversational agents that answer newcomers' questions. In collaboration with the Mozilla Foundation, this study evaluates the effectiveness of Retrieval-Augmented Generation (RAG) in assisting developers within the Mozilla Firefox project. We conducted an empirical analysis comparing responses from human developers, a standard GPT model, and a GPT model enhanced with RAG, using real queries from Mozilla's developer chat rooms. To ensure a rigorous evaluation, Mozilla experts assessed the responses for helpfulness, comprehensiveness, and conciseness. The results show that RAG-assisted responses were more comprehensive than those of human developers (62.50% vs. 54.17%) and almost as helpful (75.00% vs. 79.17%), suggesting RAG's potential to enhance developer assistance. However, the RAG responses were less concise and often verbose. These results show the potential of applying RAG-based tools in Open Source Software (OSS) projects to reduce the load on core maintainers without sacrificing answer quality. Tuning the retrieval mechanism and making responses more concise would further enhance developer assistance in large projects like Mozilla Firefox.
Problem

Research questions and friction points this paper is trying to address.

Evaluating RAG effectiveness in answering technical Firefox questions
Comparing human vs AI response quality on developer queries
Assessing RAG's potential to reduce maintainer workload in OSS
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used Retrieval-Augmented Generation for technical assistance
Enhanced GPT model with retrieval mechanisms
Applied RAG to Open Source Software projects
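The retrieve-then-generate idea behind these contributions can be sketched minimally. The snippet below is an illustrative toy, not the paper's actual pipeline: it uses a naive keyword-overlap retriever over a hypothetical document store, and stops at prompt assembly where the study would call the GPT model.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split into alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by token overlap with the query; keep the top k."""
    return sorted(
        documents,
        key=lambda d: len(tokenize(d) & tokenize(query)),
        reverse=True,
    )[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer the developer's question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

# Hypothetical document store; the study retrieved from Firefox project sources.
docs = [
    "Use ./mach build to compile Firefox from a local checkout.",
    "Phabricator hosts code review for Firefox patches.",
    "Bugzilla tracks Firefox bugs and feature requests.",
]

prompt = build_prompt("How do I build Firefox?", docs)
print(prompt)
```

Real systems would replace the overlap score with embedding similarity over an indexed corpus, but the control flow (retrieve, then condition generation on the retrieved context) is the same.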
Joao Correia — Pontifical Catholic University, Rio de Janeiro, RJ, Brazil
Daniel Coutinho — Pontifical Catholic University, Rio de Janeiro, RJ, Brazil
Marco Castelluccio — Mozilla Corporation, London, UK
Caio Barbosa — Pontifical Catholic University, Rio de Janeiro, RJ, Brazil
Rafael de Mello — Federal University of Rio de Janeiro, Rio de Janeiro, RJ, Brazil
Anita Sarma — Oregon State University (Software Engineering, Human-Computer Interaction, Distributed Software Development)
Alessandro Garcia — Associate Professor, Computer Science, Pontifical Catholic University of Rio de Janeiro (Software Engineering)
Marco Gerosa — Northern Arizona University, Flagstaff, AZ, USA
Igor Steinmacher — Northern Arizona University (Software Engineering, CSCW, Mining Software Repositories, Open Source Software)