Expect the Unexpected: FailSafe Long Context QA for Finance

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the limited robustness and context-awareness of large language models (LLMs) in financial question-answering systems when human-computer interactions go wrong. It systematically defines six realistic interaction-failure categories across two core scenarios: "query failure" (degraded domain expertise, completeness, or linguistic accuracy) and "context failure" (degraded, irrelevant, or empty documents). To study these, the authors introduce FailSafeQA, the first benchmark for evaluating robustness in long-context financial QA, together with a fine-grained evaluation framework along three dimensions: Robustness, Context Grounding, and Compliance. Using LLM-as-a-Judge (Qwen2.5-72B-Instruct), human annotation, and perturbation generation, they evaluate 24 off-the-shelf models. The experiments reveal a fundamental trade-off between robust answering and hallucination mitigation: the most compliant model, Palmyra-Fin-128k-Instruct, failed to sustain robust predictions in 17% of test cases, while the most robust model, OpenAI o3-mini, fabricated information in 41% of tested cases, highlighting critical reliability gaps in current financial LLMs.
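The Robustness and Compliance trade-off above can be sketched as simple aggregations over per-case judge ratings. This is an illustrative reading, not the paper's exact scoring definitions: the 1-5 rating scale, the acceptance threshold of 4, and both function names are assumptions.

```python
def robustness_score(baseline_ratings, perturbed_ratings, threshold=4):
    """Fraction of cases whose rating under a perturbed query/context stays
    at or above the threshold. The 1-5 scale and threshold of 4 are
    illustrative assumptions, not the paper's exact definition."""
    assert len(baseline_ratings) == len(perturbed_ratings)
    ok = sum(1 for p in perturbed_ratings if p >= threshold)
    return ok / len(perturbed_ratings)


def compliance_score(unanswerable_ratings, threshold=4):
    """Fraction of unanswerable cases (e.g. empty or irrelevant context)
    where the judge rated the model's refusal acceptable, i.e. the model
    did not fabricate an answer."""
    ok = sum(1 for r in unanswerable_ratings if r >= threshold)
    return ok / len(unanswerable_ratings)
```

Under this reading, "robustness degradation in 17% of cases" corresponds to a robustness score of 0.83, and a 41% fabrication rate to a compliance score of 0.59.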

📝 Abstract
We propose a new long-context financial benchmark, FailSafeQA, designed to test the robustness and context-awareness of LLMs against six variations in human-interface interactions in LLM-based query-answer systems within finance. We concentrate on two case studies: Query Failure and Context Failure. In the Query Failure scenario, we perturb the original query to vary in domain expertise, completeness, and linguistic accuracy. In the Context Failure case, we simulate the uploads of degraded, irrelevant, and empty documents. We employ the LLM-as-a-Judge methodology with Qwen2.5-72B-Instruct and use fine-grained rating criteria to define and calculate Robustness, Context Grounding, and Compliance scores for 24 off-the-shelf models. The results suggest that although some models excel at mitigating input perturbations, they must balance robust answering with the ability to refrain from hallucinating. Notably, Palmyra-Fin-128k-Instruct, recognized as the most compliant model, maintained strong baseline performance but encountered challenges in sustaining robust predictions in 17% of test cases. On the other hand, the most robust model, OpenAI o3-mini, fabricated information in 41% of tested cases. The results demonstrate that even high-performing models have significant room for improvement and highlight the role of FailSafeQA as a tool for developing LLMs optimized for dependability in financial applications. The dataset is available at: https://huggingface.co/datasets/Writer/FailSafeQA
Problem

Research questions and friction points this paper is trying to address.

Test LLM robustness in finance
Evaluate context-awareness in QA systems
Mitigate hallucinations in financial LLMs
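The Query Failure perturbations described in the abstract (incomplete and linguistically degraded queries) can be mimicked with small text transforms. A minimal sketch, assuming typos are modeled as adjacent-character swaps and incompleteness as dropping trailing words; the rates and function names are illustrative, not the paper's generation method:

```python
import random


def misspell(query: str, rate: float = 0.3, seed: int = 0) -> str:
    """Swap adjacent characters in a fraction of words to mimic typos
    (a linguistic-accuracy degradation); rate and method are assumptions."""
    rng = random.Random(seed)
    out = []
    for word in query.split():
        if len(word) > 3 and rng.random() < rate:
            i = rng.randrange(len(word) - 1)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        out.append(word)
    return " ".join(out)


def truncate(query: str, keep: float = 0.6) -> str:
    """Drop trailing words to mimic an incomplete query."""
    words = query.split()
    return " ".join(words[: max(1, int(len(words) * keep))])
```

For example, `truncate("what was the reported quarterly revenue growth", keep=0.5)` keeps only the first three words, leaving a query the model must still handle gracefully.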
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-as-a-Judge methodology
FailSafeQA benchmark
Fine-grained rating criteria
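The LLM-as-a-Judge setup can be sketched as assembling a grading prompt over the paper's three dimensions and sending it to the judge model (Qwen2.5-72B-Instruct in the paper). The prompt wording and the 1-5 scale below are illustrative assumptions, not the paper's actual rubric:

```python
# The three evaluation dimensions named in the paper.
CRITERIA = ["Robustness", "Context Grounding", "Compliance"]


def build_judge_prompt(question: str, context: str, answer: str) -> str:
    """Assemble a grading prompt for a judge LLM; wording and the 1-5
    scale are illustrative, not the paper's exact rubric."""
    rubric = "\n".join(f"- {c}: rate 1 (poor) to 5 (excellent)" for c in CRITERIA)
    return (
        "You are a strict financial QA grader.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n\n"
        f"Rate the answer on each criterion:\n{rubric}\n"
        "Return one integer per criterion."
    )
```

The returned string would then be sent to the judge model, and the integer ratings parsed from its reply and aggregated per model.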
🔎 Similar Papers
2024-01-12 · Annual Meeting of the Association for Computational Linguistics · Citations: 34