A Solver-in-the-Loop Framework for Improving LLMs on Answer Set Programming for Logic Puzzle Solving

📅 2025-12-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the weak semantic parsing capability and poor executability of large language models (LLMs) in natural language-to-Answer Set Programming (NL→ASP) code generation. We propose the first ASP-solver-in-the-loop framework, which leverages solver feedback to automatically filter, classify, and validate intermediate LLM-generated code snippets. Our method integrates solver-guided best-of-N search, supervised fine-tuning, and a model of how partial declarative encodings progressively contract the solution space. Evaluated in two prompt settings on two logic puzzle benchmarks, our approach significantly improves both the executability and semantic correctness of generated ASP programs. To the best of our knowledge, this is the first end-to-end, solver-driven NL→ASP generation paradigm. It establishes a novel pathway for declarative solving of combinatorial search problems by tightly coupling neural generation with symbolic reasoning and execution feedback.
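The solver-guided best-of-N search mentioned in the summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: `solver_feedback` is a mock stand-in for an actual ASP solver call, and the scoring rule (prefer executable programs that admit the fewest answer sets, i.e. the tightest encoding) is an assumption for the sake of the example.

```python
# Hedged sketch of solver-guided best-of-N selection. solver_feedback,
# sample_program, and the scoring rule are illustrative assumptions,
# NOT the paper's actual pipeline.
import random


def solver_feedback(program: str):
    """Mock stand-in for an ASP solver call: returns (executable, n_models)."""
    if "syntax_error" in program:
        return False, 0
    # Pretend that encodings with more statements constrain the space more.
    return True, max(1, 10 - len(program.split(".")))


def best_of_n(sample_program, n=8, seed=0):
    """Draw n candidate programs and keep the executable one admitting the
    fewest answer sets, mirroring the best-of-N search described above."""
    rng = random.Random(seed)
    best, best_score = None, None
    for _ in range(n):
        prog = sample_program(rng)
        executable, n_models = solver_feedback(prog)
        if not executable:
            continue  # solver feedback filters out non-executable samples
        if best_score is None or n_models < best_score:
            best, best_score = prog, n_models
    return best


# Deterministic toy sampler: yields three candidates in order.
pool = iter(["syntax_error :-", "a. b.", "a. b. c. d."])
pick = best_of_n(lambda rng: next(pool), n=3)
print(pick)  # the longest executable candidate wins under this mock scoring
```

Note that a real instantiation would replace `solver_feedback` with an actual call to an ASP solver such as clingo.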

📝 Abstract
The rise of large language models (LLMs) has sparked interest in coding assistants. While general-purpose programming languages are well supported, generating code for domain-specific languages remains a challenging problem for LLMs. In this paper, we focus on the LLM-based generation of code for Answer Set Programming (ASP), a particularly effective approach for finding solutions to combinatorial search problems. The effectiveness of LLMs in ASP code generation is currently hindered by the limited number of examples seen during their initial pre-training phase. In this paper, we introduce a novel ASP-solver-in-the-loop approach for solver-guided instruction-tuning of LLMs to address the highly complex semantic parsing task inherent in ASP code generation. Our method only requires problem specifications in natural language and their solutions. Specifically, we sample ASP statements as program continuations from LLMs for unriddling logic puzzles. Leveraging the special property of declarative ASP programming that partial encodings increasingly narrow down the solution space, we categorize them into chosen and rejected instances based on solver feedback. We then apply supervised fine-tuning to train LLMs on the curated data and further improve robustness using a solver-guided search that includes best-of-N sampling. Our experiments demonstrate consistent improvements in two distinct prompting settings on two datasets.
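The chosen/rejected categorization the abstract describes can be sketched on a toy puzzle. This is a minimal sketch assuming a brute-force stand-in for the ASP solver: each candidate statement is modeled as a Python predicate that prunes a finite assignment space, and a candidate is labeled "chosen" when the narrowed space is non-empty and still contains the known solution. All names (`label_candidates`, `solution_space`, `gold`) are illustrative, not from the paper.

```python
# Hypothetical sketch of chosen/rejected labeling via solver feedback.
# A toy permutation puzzle stands in for a logic puzzle; Python predicates
# stand in for ASP constraints checked by a solver.
from itertools import permutations

# Toy puzzle: assign 3 people to 3 houses (a permutation of house indices).
PEOPLE = ("alice", "bob", "carol")
solution_space = list(permutations(range(3)))  # all 6 possible assignments
gold = (2, 0, 1)  # known solution: alice->2, bob->0, carol->1


def label_candidates(candidates, space, gold_solution):
    """Label each candidate constraint 'chosen' or 'rejected'.

    A constraint is 'chosen' when the narrowed space is non-empty and still
    contains the known solution (the solver-feedback criterion the abstract
    describes); otherwise it is 'rejected'.
    """
    labels = {}
    for statement, constraint in candidates.items():
        narrowed = [s for s in space if constraint(s)]
        ok = bool(narrowed) and gold_solution in narrowed
        labels[statement] = "chosen" if ok else "rejected"
    return labels


# Two candidate ASP integrity constraints, paired with their toy semantics.
candidates = {
    ":- alice_house(0).": lambda s: s[0] != 0,  # consistent with gold
    ":- bob_house(0).":   lambda s: s[1] != 0,  # eliminates the gold solution
}
print(label_candidates(candidates, solution_space, gold))
```

This mirrors the abstract's key observation: because partial declarative encodings monotonically narrow the solution space, a solver can cheaply check whether adding a statement keeps the intended solution reachable, yielding preference labels for fine-tuning without human annotation.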
Problem

Research questions and friction points this paper is trying to address.

Improves LLMs for ASP code generation
Addresses limited examples in pre-training
Enhances semantic parsing for logic puzzles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Solver-in-the-loop framework for LLM fine-tuning
Solver-guided categorization of ASP code samples
Supervised fine-tuning with solver-guided search enhancement