AI Summary
This work addresses a prevalent issue on technical Q&A platforms such as Stack Overflow: programming answers frequently remain uncorrected even when user comments highlight defects, leaving the shared knowledge outdated or incomplete. To tackle this, we propose AUTOCOMBAT, an automated answer-enhancement approach that leverages large language models (e.g., DeepSeek) and combines context-aware mechanisms with comment classification to accurately interpret user feedback and generate high-quality revised answers. We introduce ReSOlve, the first benchmark dataset specifically curated for answer improvement, and demonstrate AUTOCOMBAT's effectiveness on it: the generated answers approach human-level revision quality, and a user study shows that 84.5% of developers are willing to adopt or recommend the tool, significantly outperforming existing baselines.
Abstract
Large Language Models (LLMs) are widely used to support software developers in tasks such as code generation, optimization, and documentation. However, their ability to improve existing programming answers in a human-like manner remains underexplored. On technical question-and-answer platforms such as Stack Overflow (SO), contributors often revise answers based on user comments that identify errors, inefficiencies, or missing explanations. Yet roughly one-third of this feedback is never addressed, owing to limited time, expertise, or visibility, leaving many answers incomplete or outdated. This study investigates whether LLMs can enhance programming answers by interpreting and incorporating comment-based feedback. We make four main contributions. First, we introduce ReSOlve, a benchmark of 790 SO answers with their associated comment threads, annotated as improvement-related or general feedback. Second, we evaluate four state-of-the-art LLMs on their ability to identify actionable concerns, finding that DeepSeek achieves the best balance between precision and recall. Third, we present AUTOCOMBAT, an LLM-powered tool that improves programming answers by jointly leveraging user comments and question context. Compared against human-revised references, AUTOCOMBAT produces near-human-quality improvements while preserving the original intent, and it significantly outperforms the baseline. Finally, a user study with 58 practitioners shows strong practical value, with 84.5% indicating they would adopt or recommend the tool. Overall, AUTOCOMBAT demonstrates the potential of scalable, feedback-driven answer refinement to improve the reliability and trustworthiness of technical knowledge platforms.