FormulaReasoning: A Dataset for Formula-Based Numerical Reasoning

📅 2024-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing numerical reasoning datasets lack explicit annotation of, and structured support for, physical formulas, hindering accurate evaluation of large language models' (LLMs') ability to perform formula-grounded numerical reasoning. Method: We introduce FormulaReasoning, a bilingual (Chinese/English) dataset of 4,751 physics questions, each requiring numerical calculation with external physics formulas. Every instance carries normalized fine-grained annotations (formula structure, parameter names, symbols, numerical values, units), produced through extensive manual effort with LLM assistance, and is linked to a consolidated formula database that serves as a retrievable external knowledge base. We evaluate retrieval-augmented generation over this database and supervised methods that decompose reasoning into formula generation → parameter extraction → numerical calculation. Results: Explicit formula modeling improves the performance of LLMs with 7B to over 100B parameters on complex numerical reasoning; both RAG integration and the staged decomposition yield consistent gains.

📝 Abstract
The application of formulas (e.g., physics formulas) is a fundamental ability of humans when solving numerical reasoning problems. Existing numerical reasoning datasets seldom explicitly indicate the formulas employed in reasoning, as their questions rely on implicit commonsense mathematical knowledge. In contrast, in this paper, we introduce FormulaReasoning, a new dataset specifically designed for formula-based numerical reasoning. Each of the 4,751 questions in our dataset requires numerical calculation with external physics formulas, making it a more challenging benchmark for evaluating large language models (LLMs). We offer normalized fine-grained annotations for the questions, available in English and Chinese, including formula structures, parameter names, symbols, numerical values, and units, derived from extensive manual effort with LLM assistance for guaranteed quality. We also provide a consolidated formula database to serve as an external knowledge base accompanying the dataset. We employ FormulaReasoning to evaluate LLMs with 7B to over 100B parameters, and explore retrieval-augmented generation with the formula database. Our evaluation also covers supervised methods that break down the reasoning process into formula generation, parameter extraction, and numerical calculation, as well as direct preference optimization methods based on derived preference data.
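The staged pipeline the abstract describes (formula generation, then parameter extraction, then numerical calculation) can be sketched roughly as follows. This is an illustrative sketch only: the stand-in functions, the example question, and the hard-coded "model" outputs are assumptions replacing actual LLM calls, not the paper's implementation.

```python
# Hypothetical sketch of the three-stage supervised pipeline:
# (1) formula generation, (2) parameter extraction, (3) numerical calculation.
# Each "stage" stubs out what an LLM would produce for one fixed example.

def generate_formula(question: str) -> str:
    # Stage 1: an LLM would select or produce the governing physics formula.
    # Stand-in: always return the heat-energy formula Q = c * m * dT.
    return "Q = c * m * dT"

def extract_parameters(question: str) -> dict:
    # Stage 2: an LLM would read parameter names, values, and units from the
    # question text. Stand-in: water (c = 4200 J/(kg*K)), 2 kg, heated by 10 K.
    return {"c": 4200.0, "m": 2.0, "dT": 10.0}

def calculate(formula: str, params: dict) -> float:
    # Stage 3: evaluate the right-hand side of the formula numerically,
    # with builtins disabled so only the extracted parameters are visible.
    rhs = formula.split("=", 1)[1]
    return eval(rhs, {"__builtins__": {}}, params)

question = "How much heat is needed to raise 2 kg of water by 10 K?"
answer = calculate(generate_formula(question), extract_parameters(question))
print(answer)  # 84000.0 (joules)
```

Splitting the task this way lets each stage be supervised and evaluated separately, which is the motivation the abstract gives for the decomposition.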
Problem

Research questions and friction points this paper is trying to address.

Develop a dataset for formula-based numerical reasoning challenges
Evaluate large language models using physics formula applications
Provide annotated questions with formula structures and parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset for formula-based numerical reasoning
Fine-grained annotations with LLM assistance
Retrieval-augmented generation with formula database
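The retrieval-augmented setup over the formula database can be illustrated with a toy retriever. The tiny database, the keyword fields, and the bag-of-words overlap scoring below are all illustrative assumptions; the paper's actual knowledge base and retriever are not specified here.

```python
# Toy sketch of retrieval over a formula database: given a question, rank
# formulas by keyword overlap. A real system would use dense embeddings
# rather than this bag-of-words overlap.

FORMULA_DB = [
    {"name": "heat energy", "formula": "Q = c * m * dT",
     "keywords": {"heat", "temperature", "specific", "mass"}},
    {"name": "ohm's law", "formula": "U = I * R",
     "keywords": {"voltage", "current", "resistance"}},
    {"name": "kinetic energy", "formula": "E = 0.5 * m * v**2",
     "keywords": {"kinetic", "speed", "velocity", "mass"}},
]

def retrieve(question: str, k: int = 1):
    # Score each entry by how many of its keywords appear in the question.
    words = set(question.lower().split())
    ranked = sorted(FORMULA_DB,
                    key=lambda e: len(e["keywords"] & words),
                    reverse=True)
    return ranked[:k]

hits = retrieve("What current flows given a voltage of 6 V and resistance 3 ohm?")
print(hits[0]["formula"])  # U = I * R
```

The retrieved formula is then placed in the LLM's context so the model grounds its reasoning in an explicit formula instead of implicit memorized knowledge.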
Xiao Li
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Sichen Liu
MS Student, Huazhong University of Science and Technology
Generative Model · Image Generation
Bolin Zhu
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Yin Zhu
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Yiwei Liu
Defence Industry Secrecy Examination and Certification Center
Information Theory · Social Network · Privacy Protection
Gong Cheng
Professor, Nanjing University
big data search · knowledge graph · LLM inference