🤖 AI Summary
On technical Q&A platforms, developers often omit essential code snippets due to time constraints, employer restrictions, confidentiality concerns, or uncertainty about what code to share, which hinders effective problem resolution. To address this, the authors propose GENCNIPPET, a tool designed to integrate with Stack Overflow's question submission system as a browser plugin and generate relevant code examples when a question requires them. GENCNIPPET will use a state-of-the-art machine learning classifier to identify questions that genuinely need code, and will rely on open-source Llama-3 models (e.g., Llama-3-8B) fine-tuned on positively scored Python and Java questions from Stack Overflow's April 2024 data dump. The effectiveness of the generated snippets will be evaluated through automatic comparison against ground truth, user studies, and live testing in real-world scenarios.
📝 Abstract
Context: Software developers often ask questions on technical Q&A forums such as Stack Overflow (SO) to seek solutions to their programming-related problems (e.g., errors and unexpected code behavior). Problem: Many questions lack required code snippets due to the code not being readily available, time constraints, employer restrictions, confidentiality concerns, or uncertainty about what code to share. Unfortunately, missing but required code snippets prevent questions from receiving prompt and appropriate solutions. Objective: We plan to introduce GENCNIPPET, a tool designed to integrate with SO's question submission system. GENCNIPPET will generate relevant code examples (when required) to support questions and help them receive timely solutions. Methodology: We first downloaded the SO April 2024 data dump, which contains 1.94 million Python-related questions that have code snippets and 1.43 million Java-related questions. We then filter these questions to identify those that genuinely require code snippets using a state-of-the-art machine learning model. Next, we select questions with positive scores to ensure high-quality data. Our plan is to fine-tune Llama-3 models (e.g., Llama-3-8B), using 80% of the selected questions for training and 10% for validation. The primary reasons for choosing Llama models are their open-source accessibility and robust fine-tuning capabilities, which are essential for deploying a freely accessible tool. GENCNIPPET will be integrated with the SO question submission system as a browser plugin. It will communicate with the fine-tuned model to generate code snippets tailored to the target questions. The effectiveness of the generated code examples will be assessed using automatic evaluation against ground truth, user perspectives, and live (wild) testing in real-world scenarios.
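The data-preparation steps described above (keep only questions that genuinely require code, drop non-positive scores, then split 80/10/10 for training/validation/testing) could be sketched as follows. This is a minimal illustration, not the authors' implementation: `needs_code` is a hypothetical keyword stand-in for the state-of-the-art necessity classifier the paper refers to, and the question records are simplified to `body` and `score` fields.

```python
import random

def needs_code(question_body: str) -> bool:
    # Hypothetical stand-in for the ML classifier that decides whether a
    # question genuinely requires a code snippet for clarity.
    keywords = ("error", "exception", "traceback", "unexpected")
    return any(k in question_body.lower() for k in keywords)

def select_and_split(questions, seed=42):
    """Filter questions and split them 80/10/10.

    questions: list of dicts with 'body' (str) and 'score' (int) keys.
    Returns (train, valid, test) lists.
    """
    # Keep only positively scored questions flagged as needing code.
    kept = [q for q in questions if q["score"] > 0 and needs_code(q["body"])]

    # Shuffle deterministically, then slice into 80% / 10% / 10%.
    rng = random.Random(seed)
    rng.shuffle(kept)
    n = len(kept)
    train = kept[: int(0.8 * n)]
    valid = kept[int(0.8 * n) : int(0.9 * n)]
    test = kept[int(0.9 * n) :]
    return train, valid, test
```

A question with no code-related cue (or a non-positive score) is filtered out before the split, mirroring the paper's quality filter on the data dump.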