🤖 AI Summary
In financial time-series forecasting, the black-box nature of large neural networks undermines model trustworthiness and regulatory compliance. To address this, we propose an interpretable, high-fidelity forecasting paradigm comprising three synergistic components: (1) a time-series foundation model (Time-LLM), (2) an uncertainty-aware reliability filtering mechanism, and (3) a symbolic reasoning module that embeds domain knowledge. The multi-stage framework executes decisions only on predictions that exhibit both high confidence and semantic interpretability, enabling selective output and full auditability. Experiments on stock and cryptocurrency datasets show that our approach significantly reduces false positive rates while improving both prediction reliability and interpretability over baseline methods. The framework provides a verifiable, controllable deployment pathway for trustworthy financial AI systems.
Abstract
Financial forecasting increasingly relies on large neural network models, but their opacity raises challenges for trust and regulatory compliance. We present several approaches to explainable and reliable AI in finance. *First*, we describe how Time-LLM, a time-series foundation model, can use a prompt to avoid an incorrect directional forecast. *Second*, we show that combining time-series foundation models with a reliability estimator can filter out unreliable predictions. *Third*, we argue for symbolic reasoning that encodes domain rules to provide transparent justification. Together, these approaches shift the emphasis toward executing only forecasts that are both reliable and explainable. Experiments on equity and cryptocurrency data show that the architecture reduces false positives and supports selective execution. By integrating predictive performance with reliability estimation and rule-based reasoning, our framework advances transparent and auditable financial AI systems.
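The selective-execution idea above can be sketched in a few lines: act only on forecasts that pass both a reliability threshold and a domain-rule check. This is a minimal illustrative sketch, not the paper's implementation; the `Forecast` structure, the `rule_check` momentum rule, and the 0.8 threshold are all assumptions chosen for the example.

```python
# Hypothetical sketch of a selective-execution gate: a forecast is acted on
# only if it is both reliable (high confidence) and rule-consistent.
from dataclasses import dataclass


@dataclass
class Forecast:
    direction: str      # "up" or "down"
    confidence: float   # reliability estimate in [0, 1]


def rule_check(direction: str, momentum: float) -> bool:
    """Toy domain rule: only act when the forecast agrees with recent momentum."""
    return (direction == "up") == (momentum > 0)


def should_execute(f: Forecast, momentum: float, threshold: float = 0.8) -> bool:
    """Execute only forecasts that are both reliable and explainable by a rule."""
    return f.confidence >= threshold and rule_check(f.direction, momentum)


print(should_execute(Forecast("up", 0.92), momentum=0.01))   # True: confident and rule-consistent
print(should_execute(Forecast("up", 0.55), momentum=0.01))   # False: filtered out as unreliable
print(should_execute(Forecast("down", 0.95), momentum=0.01)) # False: contradicts the domain rule
```

Because every rejection is attributable either to the reliability filter or to a named rule, each skipped trade carries an auditable justification, which is the property the framework targets.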