MATATA: Weakly Supervised End-to-End MAthematical Tool-Augmented Reasoning for Tabular Applications

📅 2024-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mathematical reasoning over interleaved tables and text in commercial documents poses significant challenges for small language models, as existing tool-augmented agents rely heavily on large models, external data, or extensive prompt engineering. To address this, we propose the first weakly supervised, end-to-end training framework specifically designed for document-level mathematical reasoning—requiring no manually annotated intermediate steps. Our approach uses only the final answer as supervision, enabling backward-guided multi-step tool invocation. We introduce an adaptive task planner and a lightweight, cross-dataset-shared math tool library—including a calculator and table manipulation utilities. This framework empowers open-source small models (3.8B/8B parameters) to perform efficient multi-step reasoning, achieving state-of-the-art results among open-source small models on FinQA and TAT-QA, and approaching GPT-4’s performance on TabMWP—while substantially reducing deployment costs.

📝 Abstract
Business documents often contain substantial tabular and textual information with numerical values, requiring mathematical reasoning for effective document understanding. While Small Language Models (SLMs) still struggle with this task, tool-augmented multi-step agents perform better, at the cost of relying on closed-source or larger models, external data, or extensive prompt engineering. This work introduces MATATA, a novel weakly supervised end-to-end approach to train multi-step reasoning language agents for document tabular applications. MATATA presents an annotation-free paradigm for each agent to enhance 3.8B/8B SLMs. During its two-stage training, MATATA uses the final outcome of the multi-step reasoning chain as weak supervision. This approach avoids having to individually supervise each intermediate agent in the reasoning chain. By employing an adaptive planner and shared tools across different datasets, MATATA shows robust performance. Experiments demonstrate that MATATA achieves state-of-the-art on FinQA, and on TAT-QA among reasoning methods based on open-source SLMs. Despite being SLM-based, MATATA closely matches GPT-4-based frameworks on TabMWP. This novel weakly supervised approach enables training an end-to-end multi-step reasoning agent without intermediate supervision, supporting future developments of cost-effective, powerful agentic systems.
Problem

Research questions and friction points this paper is trying to address.

Enhance SLMs for tabular document understanding
Reduce reliance on closed-source or large models
Train multi-step reasoning without intermediate supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weakly supervised end-to-end training for tabular reasoning
Adaptive planner and shared tools enhance SLMs
Annotation-free paradigm with outcome-based weak supervision
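To make the outcome-based weak supervision concrete, here is a minimal, hypothetical sketch (not the paper's actual code): candidate multi-step tool programs are executed against a table, and only programs whose final result matches the gold answer are kept as training traces, so no intermediate step needs manual annotation. The tool names (`col_values`, `total`, `ratio`) and the `"prev"`/`"table"` argument convention are illustrative assumptions.

```python
def execute(program, table):
    """Run a list of (tool, args) steps over a table of dict rows.

    In the args, the placeholder "prev" refers to the previous step's
    result and "table" to the input table. Tool names are toy stand-ins
    for a shared math/table tool library (calculator, table utilities).
    """
    tools = {
        "col_values": lambda tbl, col: [row[col] for row in tbl],
        "total": lambda xs: sum(xs),
        "ratio": lambda a, b: a / b,
    }
    prev = None
    for tool, args in program:
        resolved = [table if a == "table" else prev if a == "prev" else a
                    for a in args]
        prev = tools[tool](*resolved)
    return prev


def weak_filter(candidates, table, gold, tol=1e-6):
    """Keep only candidate programs whose final answer matches the gold label.

    This is the weak-supervision step: the final answer alone selects which
    multi-step reasoning chains become training data.
    """
    kept = []
    for prog in candidates:
        try:
            out = execute(prog, table)
        except Exception:
            continue  # discard programs that crash (bad tool calls, etc.)
        if abs(out - gold) < tol:
            kept.append(prog)
    return kept


# Example: two sampled programs for "average revenue"; only the first
# reaches the gold answer 20, so only it survives the filter.
table = [{"revenue": 10}, {"revenue": 30}]
prog_ok = [("col_values", ["table", "revenue"]),
           ("total", ["prev"]),
           ("ratio", ["prev", 2])]
prog_bad = [("col_values", ["table", "revenue"]),
            ("total", ["prev"])]
print(weak_filter([prog_ok, prog_bad], table, gold=20))
```

In this toy setup, `weak_filter` returns only `prog_ok`, mimicking how final-answer supervision can back-select correct multi-step tool invocations without step-level labels.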
Vishnou Vinayagame
Docugami Inc, Kirkland, WA 98033, USA
Gregory Senay
Docugami Inc, Kirkland, WA 98033, USA
Luis Martí
Inria
Machine learning, neural networks, evolutionary computation, multi-objective optimization