On the Effectiveness of Large Language Models in Domain-Specific Code Generation

📅 2023-12-04
🏛️ ACM Transactions on Software Engineering and Methodology
📈 Citations: 11
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit limited performance in domain-specific code generation—e.g., web, game, and mathematical programming—primarily due to insufficient semantic understanding of specialized APIs (e.g., React, Unity). This work presents the first systematic analysis revealing critical deficiencies in LLMs’ API-level cognition. To address this, we propose DomCoder, a domain-enhanced code generation framework that integrates three complementary API knowledge augmentation strategies: external knowledge retrieval, chain-of-thought (CoT) prompting, and CoT-aware fine-tuning. Evaluated across diverse domain-specific benchmarks, DomCoder achieves significant improvements in both functional correctness and domain-specific fidelity. Our results empirically validate that explicit API knowledge guidance effectively bridges the domain capability gap in LLMs, advancing their applicability to real-world software development tasks requiring deep platform expertise.
📝 Abstract
Large language models (LLMs) such as ChatGPT have shown remarkable capabilities in code generation. Despite significant achievements, they rely on enormous training data to acquire a broad spectrum of open-domain knowledge. Besides, their evaluation revolves around open-domain benchmarks like HumanEval, which primarily consist of programming contests. Therefore, it is hard to fully characterize the intricacies and challenges associated with particular domains (e.g., web, game, and math). In this paper, we conduct an in-depth study of the LLMs in domain-specific code generation. Our results demonstrate that LLMs exhibit sub-optimal performance in generating domain-specific code, due to their limited proficiency in utilizing domain-specific libraries. We further observe that incorporating API knowledge as prompts can empower LLMs to generate more professional code. Based on these findings, we further investigate how to effectively incorporate API knowledge into the code generation process. We experiment with three strategies for incorporating domain knowledge, namely, external knowledge inquirer, chain-of-thought prompting, and chain-of-thought fine-tuning. We refer to these strategies as a new code generation approach called DomCoder . Experimental results show that all strategies of DomCoder improve the effectiveness of domain-specific code generation under certain settings.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs in domain-specific code generation
Improve LLMs' use of domain-specific libraries
Develop DomCoder for enhanced code generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for domain-specific code generation
API knowledge as prompts enhancement
DomCoder approach improves code effectiveness
🔎 Similar Papers
No similar papers found.