LogiCase: Effective Test Case Generation from Logical Description in Competitive Programming

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatic test case generation (ATCG) for competitive programming struggles to model complex logical constraints and to cover critical boundary cases. Method: This paper proposes an end-to-end framework based on Context-Free Grammars with Counters (CCFGs), which jointly encode the syntactic structure and the semantic constraints (e.g., numeric ranges, inter-field dependencies) of natural-language input specifications, enabling a precise mapping to executable tests. We fine-tune CodeT5 to translate natural-language descriptions into CCFGs, then perform grammar parsing and controllable stochastic sampling to generate high-quality test cases. Contribution/Results: Evaluated on the CodeContests dataset, the approach achieves a 32.7% improvement in detecting erroneous algorithms and a 91.4% test validity rate, significantly outperforming state-of-the-art ATCG methods.

📝 Abstract
Automated Test Case Generation (ATCG) is crucial for evaluating software reliability, particularly in competitive programming where robust algorithm assessments depend on diverse and accurate test cases. However, existing ATCG methods often fail to meet complex specifications or generate effective corner cases, limiting their utility. In this work, we introduce Context-Free Grammars with Counters (CCFGs), a formalism that captures both syntactic and semantic structures in input specifications. Using a fine-tuned CodeT5 model, we translate natural language input specifications into CCFGs, enabling the systematic generation of high-quality test cases. Experiments on the CodeContests dataset demonstrate that CCFG-based test cases outperform baseline methods in identifying incorrect algorithms, achieving significant gains in validity and effectiveness. Our approach provides a scalable and reliable grammar-driven framework for enhancing automated competitive programming evaluations.
Problem

Research questions and friction points this paper is trying to address.

Generating diverse test cases for competitive programming
Overcoming limitations in existing automated test case methods
Translating natural language specs into structured grammar for reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Context-Free Grammars with Counters (CCFGs)
Translates natural language to CCFGs via CodeT5
Generates high-quality test cases systematically
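
The core CCFG idea, a production whose expansion count is driven by a counter drawn from a declared range, can be illustrated with a minimal hand-written sketch. The helpers below (`sample_test_case`, `is_valid`) are hypothetical illustrations, not the paper's implementation; they mimic a spec such as "line 1: an integer n (1 ≤ n ≤ 10); line 2: n integers a_i (1 ≤ a_i ≤ 100)":

```python
import random

def sample_test_case(n_range=(1, 10), a_range=(1, 100), seed=None):
    """Counter-guided sampling sketch: the counter n, drawn from its
    declared range, fixes how many symbols the second production expands
    to, so every generated input satisfies the inter-field dependency."""
    rng = random.Random(seed)
    n = rng.randint(*n_range)                            # semantic constraint: 1 <= n <= 10
    values = [rng.randint(*a_range) for _ in range(n)]   # exactly n values on line 2
    return f"{n}\n{' '.join(map(str, values))}\n"

def is_valid(test_case, n_range=(1, 10), a_range=(1, 100)):
    """Validity oracle: re-check a sampled input against the same constraints."""
    lines = test_case.strip().split("\n")
    n = int(lines[0])
    values = [int(x) for x in lines[1].split()] if len(lines) > 1 else []
    return (n_range[0] <= n <= n_range[1]
            and len(values) == n
            and all(a_range[0] <= v <= a_range[1] for v in values))
```

Because the counter value is sampled once and reused wherever the grammar references it, every output is valid by construction, unlike free-form LLM sampling, which can violate the "exactly n integers" dependency.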