DocFinQA: A Long-Context Financial Reasoning Dataset

📅 2024-01-12
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 34
Influential: 1
🤖 AI Summary
Existing financial QA datasets are limited to short texts and cannot support real-world, hundred-page financial report analysis. Method: We introduce DocFinQA—the first long-document financial QA benchmark—by mapping 7,437 FinQA questions to full annual reports (avg. 123K tokens) and proposing a whole-document alignment annotation strategy. We systematically evaluate dense retrieval (ColBERT), RAG, and long-context LLMs (e.g., LLaMA-2-7B-128K). Contribution/Results: State-of-the-art models achieve <30% accuracy; performance drops by >50% on documents exceeding 500 pages, exposing fundamental bottlenecks in long-range dependency modeling and fine-grained financial semantic alignment. DocFinQA establishes a reusable methodological paradigm for professional long-document reasoning tasks—including finance, law, and biomedicine—where precise, context-intensive inference is critical.

📝 Abstract
For large language models (LLMs) to be effective in the financial domain -- where each decision can have a significant impact -- it is necessary to investigate realistic tasks and data. Financial professionals often interact with documents that are hundreds of pages long, but most financial research datasets only deal with short excerpts from these documents. To address this, we introduce a long-document financial QA task. We augment 7,437 questions from the existing FinQA dataset with the full-document context, extending the average context length from under 700 words in FinQA to 123k words in DocFinQA. We conduct extensive experiments over retrieval-based QA pipelines and long-context language models. DocFinQA proves a significant challenge for even state-of-the-art systems. We also provide a case study on the longest documents in DocFinQA and find that models particularly struggle on these documents. Addressing these challenges may have a wide-reaching impact across applications where specificity and long-range contexts are critical, like gene sequences and legal document contract analysis.
Problem

Research questions and friction points this paper is trying to address.

Addressing long-document financial QA challenges for LLMs
Enhancing financial reasoning with full-document context augmentation
Evaluating model performance on lengthy financial document analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augmented short questions with full-document context
Extended average context length from under 700 to 123k words
Evaluated retrieval-based QA and long-context models
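The retrieval-based pipelines evaluated in the paper follow a retrieve-then-read pattern: split the full annual report into chunks, rank chunks against the question, and pass only the top-ranked context to a reader model. The sketch below illustrates that pattern in a heavily simplified form; the chunk size, the lexical-overlap scorer (a stand-in for a dense retriever such as ColBERT), and the `reader_fn` stub are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a retrieve-then-read QA pipeline over a long document.
# The paper uses dense retrievers (e.g. ColBERT) and LLM readers; here a
# simple word-overlap score stands in for the retriever.

def chunk_document(text, chunk_size=100):
    """Split a long document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def relevance(question, chunk):
    """Lexical-overlap score (stand-in for a learned retriever)."""
    q = set(question.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / (len(q) or 1)

def retrieve(question, chunks, top_k=3):
    """Return the top_k chunks ranked by relevance to the question."""
    return sorted(chunks,
                  key=lambda ch: relevance(question, ch),
                  reverse=True)[:top_k]

def answer(question, document, reader_fn, top_k=3):
    """Retrieve relevant context, then hand it to a reader model."""
    context = "\n".join(retrieve(question, chunk_document(document), top_k))
    return reader_fn(question, context)
```

Long-context models are the alternative baseline: they skip `retrieve` entirely and feed the whole document to the reader, which is exactly where the paper observes the sharpest degradation on the longest filings.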