ConCodeEval: Evaluating Large Language Models for Code Constraints in Domain-Specific Languages

📅 2024-07-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to comprehend fine-grained, structure-enforcing constraints, such as those encoded in JSON, YAML, and other domain-specific languages (DSLs), under zero- and few-shot settings, undermining reliable code generation. Method: ConCodeEval is the first benchmark explicitly designed to evaluate LLMs' ability to understand DSL-imposed constraints. It spans five syntactic representations and introduces two novel constraint-aware tasks: constraint interpretation and constraint-guided generation. The evaluation framework is cross-format and dual-task, combining joint natural-language and structured-constraint prompting, multi-format parsing, and automated assessment. Contribution/Results: ConCodeEval formally defines and quantifies LLM controllability over fine-grained code constraints. Experiments reveal that state-of-the-art code LLMs, including CodeLlama and StarCoder, suffer more than 40% performance degradation on constraint tasks, exposing critical deficiencies in structured semantic control. The benchmark provides a foundation for evaluating controllable, constraint-compliant code generation.
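To make concrete what "fine-grained, structure-enforcing constraints" on JSON-style DSL output look like, here is a minimal sketch in Python. The schema, field names, and checks below are illustrative assumptions, not the benchmark's actual tasks or formats; ConCodeEval's five representations and evaluation pipeline are described in the paper itself.

```python
import json

# Illustrative constraint set of the kind the paper studies: required keys,
# value types, and an allowed-values (enum) restriction on a JSON object.
# These field names ("name", "replicas", "restart") are hypothetical.
constraints = {
    "required": ["name", "replicas"],
    "types": {"name": str, "replicas": int},
    "enum": {"restart": {"Always", "OnFailure", "Never"}},
}

def violations(obj, c):
    """Return a list of constraint violations for a decoded JSON object."""
    errs = []
    for key in c["required"]:
        if key not in obj:
            errs.append(f"missing required key: {key}")
    for key, t in c["types"].items():
        if key in obj and not isinstance(obj[key], t):
            errs.append(f"{key} must be of type {t.__name__}")
    for key, allowed in c["enum"].items():
        if key in obj and obj[key] not in allowed:
            errs.append(f"{key} must be one of {sorted(allowed)}")
    return errs

# A hypothetical model output that violates two constraints:
# "replicas" is a string, and "restart" is outside the allowed set.
output = json.loads('{"name": "svc", "replicas": "three", "restart": "Sometimes"}')
print(violations(output, constraints))
```

A constraint-guided generation task would prompt the model with such constraints and score whether its emitted JSON passes a checker like this; a constraint-interpretation task would ask the model to identify the violations directly.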

📝 Abstract
Recent work shows Large Language Models (LLMs) struggle to understand natural language constraints for various text generation tasks in zero- and few-shot settings. In the code domain, however, constraints in code format are widely used to maintain the integrity of code written in Domain-Specific Languages (DSLs) like JSON and YAML, which are common in system-level programming tasks in enterprises. Given that LLMs are increasingly used for system-level code tasks, evaluating whether they can comprehend these code constraints is crucial. However, no prior work has evaluated their controllability over code constraints. Hence, we introduce ConCodeEval, a first-of-its-kind benchmark with two novel tasks for code constraints across five representations. Our findings suggest that language models struggle with code constraints: code languages that perform excellently for normal code tasks do not perform well when the same languages represent fine-grained constraints.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' understanding of code constraints in DSLs
Assessing LLMs' controllability over various code constraint representations
Benchmarking LLM performance on fine-grained code constraint tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces ConCodeEval benchmark for code constraints
Evaluates LLMs on five constraint representations
Highlights LLM struggles with fine-grained constraints
Mehant Kammakomati
IBM Research
Sameer Pimparkhede
IIT Bombay
Srikanth G. Tamilselvam
IBM Research
Prince Kumar
IBM Research Labs
Pushpak Bhattacharyya
IIT Bombay