🤖 AI Summary
Multi-hop financial question answering (QA) with large language models (LLMs) suffers from noise sensitivity and excessive token consumption, primarily because relevant facts must be retrieved across documents, years, and companies. Method: We introduce FinReflectKG - MultiHop, the first financial QA benchmark grounded in a domain-specific knowledge graph (KG), and propose a KG-guided multi-hop reasoning framework. It integrates the GICS industry taxonomy, temporal indexing of the KG, schema-driven prompt generation, and a controllable retrieval-evaluation pipeline; high-quality QA pairs are generated automatically by mining industry-invariant subgraph patterns, with multi-stage quality control ensuring data reliability. Contribution/Results: Experiments show that precise KG-based retrieval improves QA accuracy by approximately 24% and reduces token consumption by approximately 84.5% over conventional sliding-window text retrieval, establishing a more interpretable, efficient, and robust retrieval-reasoning paradigm for complex financial inference.
📝 Abstract
Multi-hop reasoning over financial disclosures is often a retrieval problem before it becomes a reasoning or generation problem: relevant facts are dispersed across sections, filings, companies, and years, and LLMs often expend excessive tokens navigating noisy context. Without precise Knowledge Graph (KG)-guided selection of relevant context, even strong reasoning models either fail to answer or consume excessive tokens, whereas KG-linked evidence lets models focus their reasoning on composing already-retrieved facts. We present FinReflectKG - MultiHop, a benchmark built on FinReflectKG, a temporally indexed financial KG that links audited triples to source chunks from S&P 100 filings (2022-2024). Mining frequent 2-3 hop subgraph patterns across sectors (via the GICS taxonomy), we generate financial-analyst-style questions with exact supporting evidence from the KG. A two-phase pipeline first creates QA pairs via pattern-specific prompts, then applies a multi-criteria quality-control evaluation to ensure QA validity. We evaluate three controlled retrieval scenarios: (S1) precise KG-linked paths; (S2) text-only page windows centered on relevant text spans; and (S3) relevant page windows with randomization and distractors. Across both reasoning and non-reasoning models, KG-guided precise retrieval yields substantial gains on the FinReflectKG - MultiHop QA benchmark, boosting correctness scores by approximately 24 percent while reducing token utilization by approximately 84.5 percent compared to the page-window setting, which reflects the traditional vector retrieval paradigm. Spanning intra-document, inter-year, and cross-company scopes, our work underscores the pivotal role of knowledge graphs in efficiently connecting evidence for multi-hop financial QA. We also release a curated subset of the benchmark (555 QA pairs) to catalyze further research.
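To make the contrast between the retrieval scenarios concrete, here is a minimal sketch of S1-style precise retrieval: traversing a toy KG hop by hop and passing only the source chunks linked to the traversed edges to the model, instead of whole page windows. The data structures, entity names, and chunk identifiers are illustrative assumptions, not the paper's actual schema or API.

```python
# Hypothetical sketch of KG-guided (S1-style) evidence retrieval.
# Each triple (head, relation, tail) is linked to a source-chunk id,
# mirroring how a KG can tie facts back to filing text.

from collections import defaultdict

# Toy KG: (head, relation, tail, linked source chunk) -- all illustrative
triples = [
    ("AAPL", "supplier_of", "TSMC", "chunk_aapl_2023_p12"),
    ("TSMC", "reports_revenue", "FY2023_rev", "chunk_tsmc_2023_p4"),
    ("MSFT", "competitor_of", "AAPL", "chunk_msft_2022_p7"),
]

# Index outgoing edges by head entity for hop-by-hop traversal
index = defaultdict(list)
for head, rel, tail, chunk in triples:
    index[head].append((rel, tail, chunk))

def two_hop_evidence(start):
    """Return (path, evidence-chunk list) for every 2-hop path from start."""
    results = []
    for r1, mid, c1 in index[start]:
        for r2, end, c2 in index[mid]:
            results.append(((start, r1, mid, r2, end), [c1, c2]))
    return results

paths = two_hop_evidence("AAPL")
# Only the chunks on the matched path are sent to the LLM; a page-window
# baseline (S2/S3) would instead send full surrounding pages plus noise.
```

The token savings reported in the abstract come from exactly this kind of selectivity: the model's context contains only the evidence chunks on the reasoning path, so it spends its budget composing facts rather than filtering noise.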