Canonical Intermediate Representation for LLM-based optimization problem formulation and code generation

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the difficulty large language models face in accurately translating natural language descriptions into optimization models, particularly when handling composite constraints and intricate business rules. To bridge this gap, the authors propose a Canonical Intermediate Representation (CIR) that decouples business-rule logic from its mathematical implementation, serving as a semantic intermediary between natural language and formal optimization models. They further introduce a multi-agent rule-to-constraint (R2C) framework that parses input text, retrieves domain knowledge, generates CIR specifications, and instantiates them into mathematical optimization models. The paper establishes the first systematic benchmark for rule-to-constraint reasoning and demonstrates strong performance: the method achieves 47.2% accuracy on a newly curated complex-rule benchmark, approaches the performance of closed-source models such as GPT-5 on existing benchmarks, and, with a reflection mechanism, sets new best-reported results on some of them.

📝 Abstract
Automatically formulating optimization models from natural language descriptions is a growing focus in operations research, yet current LLM-based approaches struggle with the composite constraints and appropriate modeling paradigms required by complex operational rules. To address this, we introduce the Canonical Intermediate Representation (CIR): a schema that LLMs explicitly generate between problem descriptions and optimization models. CIR encodes the semantics of operational rules through constraint archetypes and candidate modeling paradigms, thereby decoupling rule logic from its mathematical instantiation. Building on a newly generated CIR knowledge base, we develop the rule-to-constraint (R2C) framework, a multi-agent pipeline that parses problem texts, synthesizes CIR implementations by retrieving domain knowledge, and instantiates optimization models. To systematically evaluate rule-to-constraint reasoning, we test R2C on our newly constructed benchmark featuring rich operational rules, as well as on benchmarks from prior work. Extensive experiments show that R2C achieves state-of-the-art accuracy on the proposed benchmark (47.2% accuracy rate). On established benchmarks from the literature, R2C delivers highly competitive results, approaching the performance of proprietary models (e.g., GPT-5). Moreover, with a reflection mechanism, R2C achieves further gains and sets new best-reported results on some benchmarks.
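The pipeline described in the abstract (parse rules, retrieve CIR entries from a knowledge base, instantiate constraints) can be illustrated with a minimal sketch. All class, function, and field names below are illustrative assumptions, not the authors' actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class CIR:
    """Hypothetical Canonical Intermediate Representation entry:
    captures rule semantics (archetype) plus candidate modeling
    paradigms, decoupled from any concrete mathematical form."""
    archetype: str                                  # e.g. "capacity", "precedence"
    paradigms: list = field(default_factory=list)   # e.g. ["big-M", "indicator"]

def parse_problem(text: str) -> list:
    """Stage 1 (assumed): split a problem description into rule snippets."""
    return [r.strip() for r in text.split(";") if r.strip()]

def retrieve_knowledge(rule: str, kb: dict) -> CIR:
    """Stage 2 (assumed): match a rule against a CIR knowledge base
    by keyword; fall back to a generic archetype when nothing matches."""
    for keyword, cir in kb.items():
        if keyword in rule.lower():
            return cir
    return CIR(archetype="generic", paradigms=["linear"])

def instantiate(cirs: list) -> list:
    """Stage 3 (assumed): turn each CIR into a placeholder constraint
    tag; a real system would emit solver code here."""
    return [f"{c.archetype}:{c.paradigms[0]}" for c in cirs]

# Toy knowledge base and end-to-end run of the three stages.
kb = {"capacity": CIR("capacity", ["big-M", "indicator"]),
      "before": CIR("precedence", ["disjunctive"])}
rules = parse_problem("each truck has a capacity limit; job A before job B")
model = instantiate([retrieve_knowledge(r, kb) for r in rules])
```

In this toy run, each rule maps to one archetype and its first candidate paradigm; the paper's framework additionally uses LLM agents for parsing, retrieval, and instantiation, and adds a reflection step this sketch omits.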
Problem

Research questions and friction points this paper is trying to address.

optimization problem formulation
large language models
composite constraints
operational rules
code generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Canonical Intermediate Representation
constraint archetypes
rule-to-constraint
optimization modeling
LLM-based code generation
Zhongyuan Lyu
Lecturer, University of Sydney
mixture model, network analysis, latent class model
Shuoyu Hu
The Hong Kong Polytechnic University, Hong Kong, China
Lujie Liu
The Hong Kong Polytechnic University, Hong Kong, China
Hongxia Yang
Professor, HK Polytechnic University
Machine Learning, Generative AI, Cognitive Intelligence, Statistical Modeling
Ming LI
The Hong Kong Polytechnic University, Hong Kong, China