🤖 AI Summary
Existing evaluation methods rely solely on coarse-grained, solution-level metrics (e.g., solution quality, runtime) and cannot pinpoint structural defects (e.g., missing variables or constraints) or numerical defects (e.g., incorrect coefficients) when LLMs translate natural language into mathematical models. To address this, we propose the first component-level, fine-grained evaluation framework, introducing interpretable metrics (including variable recall, constraint precision, and constraint RMSE) and distilling three core modeling principles: constraint completeness, constraint accuracy, and output conciseness. Using this framework, we systematically benchmark GPT-5, LLaMA 3.1 Instruct, and DeepSeek Math across six prompting strategies. Results show that chain-of-thought, self-consistency, and modular prompting perform best, and that constraint recall and constraint RMSE are the key bottlenecks limiting solver performance. Among the models evaluated, GPT-5 achieves the best overall performance.
📝 Abstract
Large language models (LLMs) are increasingly used to convert natural language descriptions into mathematical optimization formulations. Current evaluations often treat a formulation as a whole, relying on coarse metrics such as solution accuracy or runtime that obscure structural and numerical errors. In this study, we present a comprehensive, component-level evaluation framework for LLM-generated formulations. Beyond the conventional optimality gap, our framework introduces metrics such as precision and recall for decision variables and constraints, constraint and objective root mean squared error (RMSE), and efficiency indicators based on token usage and latency. We evaluate GPT-5, LLaMA 3.1 Instruct, and DeepSeek Math on optimization problems of varying complexity under six prompting strategies. Results show that GPT-5 consistently outperforms the other models, with chain-of-thought, self-consistency, and modular prompting proving most effective. Analysis indicates that solver performance depends primarily on high constraint recall and low constraint RMSE, which together ensure structural correctness and solution reliability. Constraint precision and decision variable metrics play secondary roles, while concise outputs enhance computational efficiency. These findings highlight three principles for NLP-to-optimization modeling: (i) complete constraint coverage prevents violations, (ii) minimizing constraint RMSE ensures solver-level accuracy, and (iii) concise outputs improve computational efficiency. The proposed framework establishes a foundation for fine-grained, diagnostic evaluation of LLMs in optimization modeling.
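The component-level metrics named above (precision/recall over variables and constraints, coefficient-level RMSE) can be illustrated with a minimal sketch. The paper's exact matching procedure is not specified here, so this example assumes a simple setup: components are matched by exact name, and RMSE is computed over coefficients of constraints present in both the predicted and gold formulations. All function and variable names below are hypothetical illustrations, not the authors' implementation.

```python
import math

def precision_recall(predicted, gold):
    """Set-based precision/recall over named components
    (decision variables or constraints), matched by exact name."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)  # true positives: components in both sets
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    return precision, recall

def coefficient_rmse(pred_coeffs, gold_coeffs):
    """RMSE over coefficients keyed by (constraint, variable) pairs
    that appear in both formulations."""
    shared = pred_coeffs.keys() & gold_coeffs.keys()
    if not shared:
        return float("nan")
    se = sum((pred_coeffs[k] - gold_coeffs[k]) ** 2 for k in shared)
    return math.sqrt(se / len(shared))

# Toy example: gold model has variables {x, y} and two constraints.
gold_vars = {"x", "y"}
pred_vars = {"x", "y", "z"}          # spurious variable z
gold_cons = {"capacity", "demand"}
pred_cons = {"capacity"}             # missing "demand" constraint

var_p, var_r = precision_recall(pred_vars, gold_vars)  # 2/3, 1.0
con_p, con_r = precision_recall(pred_cons, gold_cons)  # 1.0, 0.5

gold_coeffs = {("capacity", "x"): 2.0, ("capacity", "y"): 3.0}
pred_coeffs = {("capacity", "x"): 2.0, ("capacity", "y"): 4.0}
rmse = coefficient_rmse(pred_coeffs, gold_coeffs)      # sqrt(1/2) ≈ 0.707
```

In this toy case the low constraint recall (0.5) corresponds to the "complete constraint coverage" principle: the missing `demand` constraint would let the solver return an infeasible-in-reality solution, while the coefficient error on `("capacity", "y")` inflates constraint RMSE and shifts the solver's optimum.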