🤖 AI Summary
The mechanisms by which system prompts influence instruction-tuned models in code generation remain poorly understood, particularly across varying model scales, programming languages, and prompting strategies. This study conducts a large-scale, multi-variable controlled experiment evaluating 360 configurations spanning four instruction-tuned models, five categories of system prompts, three prompting strategies, two programming languages, and two temperature settings. The findings reveal that the effectiveness of system prompts is non-monotonic and highly configuration-dependent: notably, few-shot examples can degrade performance in larger models, challenging the conventional wisdom that few-shot prompting consistently outperforms zero-shot. Additionally, Java is found to be more sensitive to prompt design than Python, suggesting the need for language-specific prompting strategies. This work provides empirical foundations and practical guidance for effective prompt engineering in code generation.
📝 Abstract
Instruction-tuned Language Models (ILMs) have become essential components of modern AI systems, demonstrating exceptional versatility across natural language and reasoning tasks. Among their most impactful applications is code generation, where ILMs -- commonly referred to as Code Language Models (CLMs) -- translate human intent into executable programs. While progress has been driven by advances in scaling and training methodologies, one critical aspect remains underexplored: the impact of system prompts on both general-purpose ILMs and specialized CLMs for code generation. We systematically evaluate how system prompts of varying instructional detail, along with model scale, prompting strategy, and programming language, affect code-generation performance. Our experimental setup spans 360 configurations across four models, five system prompts, three prompting strategies, two languages, and two temperature settings. We find that (1) increasing system-prompt constraint specificity does not monotonically improve correctness -- prompt effectiveness is configuration-dependent and can help or hinder based on alignment with task requirements and decoding context; (2) for larger code-specialized models, few-shot examples can degrade performance relative to zero-shot generation, contrary to conventional wisdom; and (3) programming language matters, with Java exhibiting significantly greater sensitivity to system prompt variations than Python, suggesting language-specific prompt engineering strategies may be necessary.