🤖 AI Summary
This study addresses the lack of a systematic understanding of how structural, contextual, and linguistic factors in developers' explanations on Stack Overflow influence their perceived usefulness. Leveraging a dataset of 3,323 questions and 59,398 answers, the authors combine textual analysis with statistical modeling to quantify the impact of these dimensions on explanation usefulness, operationalized as normalized upvote counts. Findings reveal small to moderate positive effects of explanation length, inclusion of code snippets, timeliness, and author reputation, while sentiment polarity has a negligible influence, suggesting that clarity and substantive content matter more than emotional expression. These results provide empirical grounding for improving developer communication practices and inform requirements engineering tasks such as ambiguity resolution and rationale articulation.
📝 Abstract
Explanations are essential in software engineering (SE) and requirements communication, helping stakeholders clarify ambiguities, justify design choices, and build shared understanding. Online Q&A forums such as Stack Overflow provide large-scale settings where such explanations are produced and evaluated, offering valuable insights into what makes them effective. While prior work has explored answer acceptance and voting behavior, little is known about which specific features make explanations genuinely useful. The relative influence of structural, contextual, and linguistic factors, such as content richness, timing, and sentiment, remains unclear. We analyzed 3,323 questions and 59,398 answers from Stack Overflow, combining text analysis and statistical modeling to examine how explanation attributes relate to perceived usefulness (normalized upvotes). Structural and contextual factors, especially explanation length, code inclusion, timing, and author reputation, show small to moderate positive effects. Sentiment polarity has negligible influence, suggesting that clarity and substance outweigh tone in technical communication. This study provides an empirical account of what drives perceived usefulness in developer explanations. It contributes methodological transparency through open data and replication materials, and conceptual insight by relating observed communication patterns to principles of requirements communication. The findings offer evidence-based implications for how developers and requirements engineering (RE) practitioners can craft clearer and more effective explanations, potentially supporting fairer communication in both open and organizational contexts. From an RE perspective, these determinants can be interpreted as practical signals for ambiguity reduction and rationale articulation in day-to-day requirements communication.
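To make the operationalization more concrete, the following is a minimal sketch of the kind of analysis the abstract describes: relating structural, contextual, and linguistic answer features to normalized upvotes. The column names, the within-question normalization scheme, and the use of an ordinary least squares model are illustrative assumptions, not the paper's actual pipeline or variable definitions.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical answer-level data; column names and values are illustrative only.
answers = pd.DataFrame({
    "question_id":    [1, 1, 1, 2, 2, 2, 3, 3, 4, 4],
    "upvotes":        [12, 3, 0, 25, 5, 1, 7, 2, 10, 4],
    "body_length":    [850, 120, 60, 1400, 300, 90, 640, 150, 980, 210],  # characters
    "has_code":       [1, 0, 0, 1, 1, 0, 1, 0, 1, 0],                     # code snippet present
    "answer_delay_h": [0.5, 6, 48, 1.2, 24, 72, 2, 10, 0.8, 30],          # hours after question
    "author_rep":     [15000, 200, 50, 42000, 800, 120, 5000, 300, 9000, 600],
    "sentiment":      [0.10, 0.05, 0.0, 0.20, -0.05, 0.0, 0.15, 0.02, 0.08, -0.1],
})

# One plausible normalization (an assumption): scale upvotes within each question
# so that answers to popular and niche questions become comparable.
per_question_max = answers.groupby("question_id")["upvotes"].transform("max")
answers["norm_upvotes"] = answers["upvotes"] / per_question_max.replace(0, 1)

# Relate structural, contextual, and linguistic features to normalized upvotes
# with ordinary least squares, as a stand-in for the paper's statistical modeling.
features = ["body_length", "has_code", "answer_delay_h", "author_rep", "sentiment"]
X = sm.add_constant(answers[features])
model = sm.OLS(answers["norm_upvotes"], X).fit()
print(model.summary())
```

In this toy setup, the coefficient signs and magnitudes would play the role of the "small to moderate positive effects" reported for length, code inclusion, timeliness, and reputation, with the sentiment coefficient expected to be close to zero.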