🤖 AI Summary
Existing numerical reasoning datasets lack explicit annotation and structured support for physical formulas, hindering accurate evaluation of large language models’ (LLMs’) ability to perform formula-grounded numerical reasoning.
Method: We introduce FormulaReasoning, a bilingual (Chinese/English) dataset designed for explicitly formula-based numerical reasoning, comprising 4,751 physics questions that require applying external formulas. Each instance carries normalized fine-grained annotations (formula structure, parameter names, symbols, numerical values, units) and is linked to a consolidated, retrievable physics formula database. Methodologically, external physical formulas are treated as mandatory reasoning premises; the evaluation covers retrieval-augmented generation (RAG) over the formula database and a multi-stage supervised pipeline (formula generation → parameter extraction → numerical calculation), with annotations produced through extensive manual effort and LLM-assisted quality control.
Results: Explicit formula modeling substantially improves complex numerical reasoning across LLMs ranging from 7B to over 100B parameters; both RAG over the formula database and the staged supervised strategy yield consistent gains.
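The staged pipeline above (formula generation/retrieval → parameter extraction → numerical calculation) can be sketched in miniature. Everything here is illustrative: the toy formula database, the keyword-overlap "retrieval" standing in for a real RAG retriever, and the regex-based parameter extraction are assumptions for demonstration, not the paper's actual components.

```python
import re

# Hypothetical miniature formula database; real entries would come from the
# consolidated formula database accompanying the dataset.
FORMULA_DB = [
    {"id": "density", "expr": "m / V", "params": {"m": "mass", "V": "volume"}},
    {"id": "speed", "expr": "d / t", "params": {"d": "distance", "t": "time"}},
]

def retrieve_formula(question: str) -> dict:
    """Stage 1 (stand-in for RAG): pick the entry whose parameter names
    overlap the question text the most."""
    def score(entry):
        return sum(name in question.lower() for name in entry["params"].values())
    return max(FORMULA_DB, key=score)

def extract_parameters(question: str, formula: dict) -> dict:
    """Stage 2: map each parameter symbol to the number that follows its
    name in the question (a crude stand-in for learned extraction)."""
    values = {}
    for symbol, name in formula["params"].items():
        m = re.search(rf"{name}\s*(?:of|is|=)?\s*([0-9.]+)", question.lower())
        if m:
            values[symbol] = float(m.group(1))
    return values

def calculate(formula: dict, values: dict) -> float:
    """Stage 3: evaluate the formula expression with the extracted values."""
    return eval(formula["expr"], {"__builtins__": {}}, values)

question = "A block has a mass of 10 and a volume of 4; find its density."
f = retrieve_formula(question)
params = extract_parameters(question, f)
print(f["id"], params, calculate(f, params))
```

The point of the decomposition is that each stage can be supervised and evaluated separately against the fine-grained annotations, rather than grading only the final number.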
📝 Abstract
The application of formulas (e.g., physics formulas) is a fundamental ability of humans when solving numerical reasoning problems. Existing numerical reasoning datasets seldom explicitly indicate the formulas employed in reasoning, as their questions rely on implicit commonsense mathematical knowledge. In contrast, in this paper, we introduce FormulaReasoning, a new dataset specifically designed for formula-based numerical reasoning. Each of the 4,751 questions in our dataset requires numerical calculation with external physics formulas, making it a more challenging benchmark for evaluating large language models (LLMs). We offer normalized fine-grained annotations for the questions, available in English and Chinese, including formula structures, parameter names, symbols, numerical values, and units, derived from extensive manual effort with LLM assistance for guaranteed quality. We also provide a consolidated formula database to serve as an external knowledge base accompanying the dataset. We employ FormulaReasoning to evaluate LLMs with 7B to over 100B parameters, and explore retrieval-augmented generation with the formula database. Our evaluation also covers supervised methods that break down the reasoning process into formula generation, parameter extraction, and numerical calculation, as well as direct preference optimization methods based on derived preference data.
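To make the annotation scheme concrete, the fields the abstract lists (formula structure, parameter names, symbols, numerical values, units) might be organized as in the following instance. The field names and the example question are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical annotated instance mirroring the fields named in the abstract.
instance = {
    "question": "A force of 20 N acts on an area of 4 m^2. What is the pressure?",
    "formula": {
        "id": "pressure",
        "structure": "p = F / A",  # formula structure
        "parameters": [
            {"name": "force", "symbol": "F", "value": 20.0, "unit": "N"},
            {"name": "area", "symbol": "A", "value": 4.0, "unit": "m^2"},
        ],
    },
    "answer": {"symbol": "p", "value": 5.0, "unit": "Pa"},
}

def check_instance(inst: dict) -> bool:
    """Sanity check: recompute the answer from the annotated parameter
    values and compare it with the annotated answer value."""
    values = {p["symbol"]: p["value"] for p in inst["formula"]["parameters"]}
    _, rhs = inst["formula"]["structure"].split("=")
    computed = eval(rhs, {"__builtins__": {}}, values)
    return abs(computed - inst["answer"]["value"]) < 1e-9

print(check_instance(instance))
```

A check of this kind is one way LLM assistance can support quality control: annotations whose values, formula, and answer are not mutually consistent can be flagged automatically for human review.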