ASP-Bench: From Natural Language to Logic Programs

📅 2026-02-01
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenge of automatically translating natural language into Answer Set Programming (ASP) as a critical step toward robust neuro-symbolic systems. It introduces ASP-Bench, the first multidimensional reasoning benchmark encompassing core ASP features—such as choice rules, aggregates, and optimization statements—with 128 carefully curated natural language–ASP pairs. The authors propose a feedback-driven iterative modeling paradigm grounded in the ReAct framework, which leverages solver feedback in a closed loop to progressively refine generated logic programs. Evaluated on ASP-Bench, this approach achieves full saturation, demonstrating the efficacy of the proposed paradigm and enabling fine-grained analysis of the factors that govern translation difficulty.
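The ASP features the benchmark covers can be illustrated with a small knapsack-style program (a hypothetical example, not taken from the benchmark) that combines all three constructs:

```prolog
% Choice rule: nondeterministically select any subset of the items.
{ in(X) : item(X) }.

% Aggregate: the summed weight of the selection must not exceed the capacity.
:- capacity(C), #sum { W,X : in(X), weight(X,W) } > C.

% Optimization statement: prefer answer sets with the highest total value.
#maximize { V,X : in(X), value(X,V) }.
```

Given facts such as `item(a). weight(a,3). value(a,7). capacity(10).`, a solver like clingo enumerates candidate selections, discards those violating the weight constraint, and reports optimal models.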

📝 Abstract
Automating the translation of natural-language specifications into logic programs is a challenging task that affects neurosymbolic engineering. We present ASP-Bench, a benchmark comprising 128 natural-language problem instances: 64 base problems, each with an easy and a hard variant. It evaluates systems that translate natural-language problems into Answer Set Programs (ASPs), a prominent form of logic programming, and provides systematic coverage of ASP features, including choice rules, aggregates, and optimization. Each problem includes reference validators that check whether solutions satisfy the problem specification. We characterize problems along seven largely independent reasoning aspects (optimization, temporal reasoning, default logic, resource allocation, recursion, spatial reasoning, and quantitative complexity), providing a multidimensional view of modeling difficulty. We test the benchmark using an agentic approach based on the ReAct (Reason and Act) framework, which achieves full saturation, demonstrating that feedback-driven iterative refinement with solver feedback provides a reliable and robust approach for modeling natural language in ASP. Our analysis across multiple agent runs yields insights into what determines a problem's modeling hardness.
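The feedback-driven refinement the abstract describes can be sketched as a simple propose-check loop. The helper names (`refine`, `toy_propose`, `toy_check`) and the feedback strings below are illustrative stand-ins, not the authors' implementation; in the paper the proposal step is an LLM agent and the check step is an ASP solver plus reference validator.

```python
# Hypothetical sketch of ReAct-style iterative refinement: an agent drafts an
# ASP program, a solver/validator returns feedback, and the agent revises the
# draft until the program is accepted or a retry budget is exhausted.

def refine(problem: str, propose, check, max_iters: int = 5):
    """Iteratively refine a candidate program using checker feedback."""
    feedback = None
    for _ in range(max_iters):
        program = propose(problem, feedback)  # act: draft or revise a program
        ok, feedback = check(program)         # observe: solve and validate
        if ok:
            return program                    # validator accepted the program
    return None                               # budget exhausted without success

# Toy stand-ins: this "checker" accepts only programs containing a choice rule.
def toy_propose(problem, feedback):
    return "{ pick(X) : item(X) } = 1." if feedback else "pick(a)."

def toy_check(program):
    if "{" in program:
        return True, None
    return False, "no choice rule: candidate space is not being generated"

print(refine("choose exactly one item", toy_propose, toy_check))
# → { pick(X) : item(X) } = 1.
```

The loop converges here on the second iteration: the first draft fails the check, the feedback string triggers a revision, and the revised program is accepted.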
Problem

Research questions and friction points this paper is trying to address.

natural language to logic programs
Answer Set Programming
neurosymbolic engineering
automated translation
logic programming
Innovation

Methods, ideas, or system contributions that make the work stand out.

ASP-Bench
Answer Set Programming
Natural Language to Logic Translation
ReAct Framework
Neurosymbolic Engineering