Integrating Code Metrics into Automated Documentation Generation for Computational Notebooks

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing automated documentation generation methods: they often overlook structural and quantitative code features that critically influence code readability, producing contextually irrelevant or inaccurate documentation in computational notebooks. To bridge this gap, the study introduces code metrics as auxiliary signals, constructing a high-quality dataset of (code, Markdown) pairs and integrating metric information into both CNN-RNN and GPT-3.5 architectures. Experimental results demonstrate consistent improvements in generation quality: the CNN-RNN model achieves a 6% increase in BLEU-1 and a 3% gain in ROUGE-L F1, while few-shot GPT-3.5 shows a 9% improvement in BERTScore F1. These findings support the effectiveness and generalizability of code metrics as auxiliary signals across diverse model paradigms.
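The paper does not enumerate its exact metric set in this summary, but the idea of attaching quantitative structural features to each code cell can be sketched as below. The metric choices here (non-blank LOC, function count, a cyclomatic-complexity proxy from branching nodes) are illustrative assumptions, not the paper's actual feature set.

```python
import ast

def code_metrics(source: str) -> dict:
    """Compute simple structural metrics for one notebook code cell.

    Illustrative sketch only: the metrics below are assumed examples of the
    kind of auxiliary signals the paper describes, not its published set.
    """
    tree = ast.parse(source)
    # Nodes that introduce a branch in control flow (rough cyclomatic proxy).
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return {
        # Non-blank lines of code.
        "loc": sum(1 for line in source.splitlines() if line.strip()),
        # Number of function definitions in the cell.
        "num_functions": sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree)),
        # 1 + number of branching constructs, a crude cyclomatic estimate.
        "cyclomatic_proxy": 1 + sum(isinstance(n, branch_nodes) for n in ast.walk(tree)),
    }

cell = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
print(code_metrics(cell))
```

Such a metric vector could then be concatenated with the code embedding (CNN-RNN) or serialized into the prompt (few-shot GPT-3.5), which is how the two architectures are said to consume the same signal.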

📝 Abstract
Effective code documentation is essential for collaboration, comprehension, and long-term software maintainability, yet developers often neglect it due to its repetitive nature. Automated documentation generation has evolved from heuristic and rule-based methods to neural network-based and large language model (LLM)-based approaches. However, existing methods often overlook structural and quantitative characteristics of code that influence readability and comprehension. Prior research suggests that code metrics capture information relevant to program understanding. Building on these insights, this paper investigates the role of source code metrics as auxiliary signals for automated documentation generation, focusing on computational notebooks, a popular medium among data scientists that integrates code, narrative, and results but suffers from inconsistent documentation. We propose a two-stage approach. First, the CodeSearchNet dataset construction process was refined to create a specialized dataset from over 17 million code and Markdown cells; after structural and semantic filtering, 36,734 high-quality (code, Markdown) pairs were extracted. Second, two modeling paradigms, a lightweight CNN-RNN architecture and a few-shot GPT-3.5 architecture, were evaluated with and without metric information. Results show that incorporating code metrics improves the accuracy and contextual relevance of generated documentation, yielding gains of 6% in BLEU-1 and 3% in ROUGE-L F1 for the CNN-RNN architecture, and 9% in BERTScore F1 for the LLM-based architecture. These findings demonstrate that integrating code metrics provides valuable structural context, enhancing automated documentation generation across diverse model families.
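The first stage, extracting (code, Markdown) pairs from raw notebook cells with structural filtering, can be sketched roughly as follows. The specific pairing rule (a Markdown cell immediately preceding a code cell) and the length thresholds are assumptions for illustration; the paper's actual structural and semantic filters are not given here.

```python
def extract_pairs(cells: list[dict]) -> list[tuple[str, str]]:
    """Pair each Markdown cell with the code cell that immediately follows it,
    applying simple structural filters.

    Sketch of the curation idea only: the pairing heuristic and thresholds
    below are illustrative assumptions, not the paper's published criteria.
    """
    pairs = []
    for md, code in zip(cells, cells[1:]):
        # Keep only markdown -> code adjacencies.
        if md["cell_type"] != "markdown" or code["cell_type"] != "code":
            continue
        md_text = md["source"].strip()
        code_text = code["source"].strip()
        # Structural filters: drop empty or trivially short cells.
        if not md_text or not code_text:
            continue
        if len(md_text.split()) < 3 or len(code_text.splitlines()) < 2:
            continue
        pairs.append((code_text, md_text))
    return pairs

notebook = [
    {"cell_type": "markdown", "source": "Load the raw dataset from disk"},
    {"cell_type": "code", "source": "import pandas as pd\ndf = pd.read_csv('data.csv')"},
    {"cell_type": "code", "source": "df.head()"},
]
print(extract_pairs(notebook))
```

A real pipeline would add the semantic filtering the abstract mentions (e.g. checking that the Markdown actually describes the code), which is what reduces 17 million cells to the final 36,734 pairs.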
Problem

Research questions and friction points this paper is trying to address.

automated documentation generation
code metrics
computational notebooks
program understanding
code readability
Innovation

Methods, ideas, or system contributions that make the work stand out.

code metrics
automated documentation generation
computational notebooks
LLM-based modeling
dataset curation