🤖 AI Summary
To address data scarcity and tooling limitations for code generation in low-resource Bangla, this paper proposes Coder-Reviewer, the first retrieval-augmented, dual-model collaborative framework for the task. It integrates in-context learning, LLM-assisted translation, systematic prompt engineering, and an execution-feedback-driven multi-round self-refinement mechanism. The framework jointly optimizes natural language understanding and code robustness through coder-reviewer co-modeling, with iterative corrections guided by program execution feedback. Evaluated on the newly constructed BLP-2025 benchmark, the approach achieves 84.00% Pass@1 accuracy, substantially outperforming existing baselines. This work introduces, for the first time, retrieval augmentation and execution-aware self-refinement to low-resource NL2Code tasks, establishing a scalable paradigm for code generation in resource-constrained languages.
📄 Abstract
Bangla is a low-resource language for code generation: it lacks large-scale annotated datasets and tooling for transforming natural language specifications into executable programs, which makes Bangla-to-code generation a challenging task requiring innovative solutions. To address this, we introduce BanglaForge, a novel framework for generating code from Bangla function descriptions. BanglaForge leverages a retrieval-augmented dual-model collaboration paradigm with self-refinement, combining in-context learning, LLM-based translation, systematic prompt engineering, and iterative self-refinement based on execution feedback, where a coder generates initial solutions and a reviewer enhances them for robustness. On the BLP-2025 Bangla Code Generation benchmark, BanglaForge achieves a competitive Pass@1 accuracy of 84.00%, demonstrating the effectiveness of retrieval, model collaboration, and self-refinement for low-resource Bangla code generation.
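The coder-reviewer self-refinement loop described above can be sketched as follows. This is a minimal illustration of the control flow only: the `coder` and `reviewer` functions are hypothetical stubs standing in for LLM calls (the paper's actual prompts, retrieval step, and models are not shown), and the executor simply runs candidate code against test cases and reports failures as feedback.

```python
# Sketch of execution-feedback-driven self-refinement (assumptions:
# coder/reviewer are stand-ins for the paper's LLM components).

def coder(description: str) -> str:
    # Stub: a real coder LLM would be prompted with the (translated)
    # description plus retrieved in-context examples.
    return "def add(a, b):\n    return a - b\n"  # deliberately buggy draft

def reviewer(code: str, feedback: str) -> str:
    # Stub: a real reviewer LLM would repair the code using the execution
    # feedback; here the fix is hard-coded to keep the sketch runnable.
    return code.replace("a - b", "a + b")

def run_tests(code: str, tests):
    """Execute candidate code and return (passed, feedback string)."""
    env = {}
    try:
        exec(code, env)
        for expr, expected in tests:
            got = eval(expr, env)
            if got != expected:
                return False, f"{expr} returned {got}, expected {expected}"
        return True, "all tests passed"
    except Exception as e:
        return False, repr(e)

def self_refine(description, tests, max_rounds=3):
    code = coder(description)            # initial solution from the coder
    for _ in range(max_rounds):          # multi-round refinement
        ok, feedback = run_tests(code, tests)
        if ok:
            return code
        code = reviewer(code, feedback)  # execution feedback guides the fix
    return code

solution = self_refine("Add two numbers", [("add(2, 3)", 5)])
```

In this sketch the loop terminates as soon as the candidate passes all tests, mirroring the iterative-correction mechanism the abstract describes.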