Towards Explainable and Reliable AI in Finance

📅 2025-10-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In financial time series forecasting, the black-box nature of large neural networks severely undermines model trustworthiness and regulatory compliance. To address this, we propose a novel, interpretable, and high-fidelity AI forecasting paradigm comprising three synergistic components: (1) a time-series foundation model (Time-LLM), (2) an uncertainty-aware reliability filtering mechanism, and (3) a domain-knowledge-embedded symbolic reasoning module. Our multi-stage framework selectively executes decisions only on predictions exhibiting both high confidence and semantic interpretability, enabling selective output and full auditability. Experiments on stock and cryptocurrency datasets demonstrate that our approach significantly reduces false positive rates while simultaneously improving both prediction reliability and interpretability over baseline methods. The framework provides a verifiable, controllable deployment pathway for trustworthy financial AI systems.
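The selective-execution idea described above — act only on forecasts that pass both a reliability filter and a symbolic justification check — can be sketched as follows. This is a minimal illustration under assumed interfaces; the `Forecast` fields, the threshold `tau`, and the function names are invented for demonstration and are not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    direction: str      # e.g. "up" or "down" (stage 1: foundation-model output)
    confidence: float   # reliability score in [0, 1] (stage 2: reliability estimator)
    rationale: str      # symbolic justification, empty if no domain rule fired (stage 3)

def reliability_filter(f: Forecast, tau: float = 0.8) -> bool:
    """Stage 2: keep only forecasts whose estimated reliability exceeds tau."""
    return f.confidence >= tau

def rule_justified(f: Forecast) -> bool:
    """Stage 3: keep only forecasts backed by a domain-rule justification."""
    return bool(f.rationale)

def execute(f: Forecast) -> bool:
    """Act only when a forecast is both reliable and explainable; otherwise abstain."""
    return reliability_filter(f) and rule_justified(f)

print(execute(Forecast("up", 0.91, "momentum rule fired")))  # executes
print(execute(Forecast("up", 0.91, "")))                     # abstains: no justification
print(execute(Forecast("up", 0.55, "momentum rule fired")))  # abstains: low reliability
```

Abstaining on low-confidence or unjustified forecasts is what trades coverage for a lower false positive rate, as the summary describes.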

๐Ÿ“ Abstract
Financial forecasting increasingly relies on large neural network models, but their opacity raises challenges for trust and regulatory compliance. We present several approaches to explainable and reliable AI in finance. First, we describe how Time-LLM, a time series foundation model, uses a prompt to avoid a wrong directional forecast. Second, we show that combining foundation models for time series forecasting with a reliability estimator can filter out unreliable predictions. Third, we argue for symbolic reasoning that encodes domain rules for transparent justification. Together, these approaches shift the emphasis toward executing only forecasts that are both reliable and explainable. Experiments on equity and cryptocurrency data show that the architecture reduces false positives and supports selective execution. By integrating predictive performance with reliability estimation and rule-based reasoning, our framework advances transparent and auditable financial AI systems.
Problem

Research questions and friction points this paper is trying to address.

Addressing opacity challenges in financial neural network models
Enhancing forecast reliability through explainable AI techniques
Integrating domain rules for transparent financial prediction systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Time-LLM uses prompts to prevent wrong directional forecasts
Foundation models combine with reliability estimators for filtering
Symbolic reasoning encodes domain rules for transparent justification
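The third innovation, encoding domain rules symbolically for transparent justification, can be sketched as a small set of named predicate functions. The rule names, features, and thresholds below are purely illustrative assumptions; the paper's actual rule set is not specified here.

```python
# Hypothetical domain rules expressed as named predicates over forecast features.
# Thresholds and feature keys are invented for illustration.

def rule_volatility_cap(features: dict) -> bool:
    # Block execution when recent volatility is extreme.
    return features.get("volatility", 0.0) < 0.05

def rule_trend_agreement(features: dict) -> bool:
    # Require the forecast direction to agree with the moving-average trend.
    return features.get("forecast_dir") == features.get("ma_trend")

RULES = [rule_volatility_cap, rule_trend_agreement]

def justify(features: dict) -> list[str]:
    """Return the names of all rules that pass; an empty list means no justification."""
    return [rule.__name__ for rule in RULES if rule(features)]

feats = {"volatility": 0.02, "forecast_dir": "up", "ma_trend": "up"}
print(justify(feats))  # both rules pass, so both names are returned
```

Because each passing rule is reported by name, the output doubles as a human-readable audit trail, which is the transparency property the bullet above highlights.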
Albi Isufaj
National Institute of Informatics, Graduate University for Advanced Studies, Tokyo, Japan
Pablo Mollá
National Institute of Informatics, Graduate University for Advanced Studies, Tokyo, Japan
Helmut Prendinger
Professor, National Institute of Informatics
Artificial Intelligence