🤖 AI Summary
Large language models (LLMs) exhibit limited capability in open-ended problem analysis, abstraction, and principled formalization, all of which are critical for real-world mathematical modeling.
Method: This paper proposes MM-Agent, the first four-stage LLM-based modeling agent framework, integrating structured prompt engineering, domain-knowledge-guided model formulation, verifiable computational chains, and natural-language report synthesis. The authors construct MM-Bench, a multi-domain benchmark comprising 111 problems from the 2000–2025 MCM/ICM competitions, and implement an end-to-end modeling pipeline using GPT-4o.
Contribution/Results: On MM-Bench, the framework outperforms human expert solutions by 11.88% in solution quality, with an average runtime of 15 minutes and a cost of $0.88 per task. It also assisted two undergraduate teams in winning the Finalist Award (top 2.0% among 27,456 teams) in MCM/ICM 2025, marking the first empirical validation of LLMs' high-level assistance capability in formal mathematical modeling competitions.
📝 Abstract
Mathematical modeling is a cornerstone of scientific discovery and engineering practice, enabling the translation of real-world problems into formal systems across domains such as physics, biology, and economics. Unlike mathematical reasoning, which assumes a predefined formulation, modeling requires open-ended problem analysis, abstraction, and principled formalization. While Large Language Models (LLMs) have shown strong reasoning capabilities, they fall short in rigorous model construction, limiting their utility in real-world problem-solving. To this end, we formalize the task of LLM-powered real-world mathematical modeling, where agents must analyze problems, construct domain-appropriate formulations, and generate complete end-to-end solutions. We introduce MM-Bench, a curated benchmark of 111 problems from the Mathematical Contest in Modeling (MCM/ICM), spanning the years 2000 to 2025 and covering ten diverse domains such as physics, biology, and economics. To tackle this task, we propose MM-Agent, an expert-inspired framework that decomposes mathematical modeling into four stages: open-ended problem analysis, structured model formulation, computational problem solving, and report generation. Experiments on MM-Bench show that MM-Agent significantly outperforms baseline agents, achieving an 11.88% improvement over human expert solutions while requiring only 15 minutes and $0.88 per task using GPT-4o. Furthermore, under official MCM/ICM protocols, MM-Agent assisted two undergraduate teams in winning the Finalist Award (**top 2.0% among 27,456 teams**) in MCM/ICM 2025, demonstrating its practical effectiveness as a modeling copilot. Our code is available at https://github.com/usail-hkust/LLM-MM-Agent.