Security Barriers to Trustworthy AI-Driven Cyber Threat Intelligence in Finance: Evidence from Practitioners

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study examines the trust barriers, including governance gaps, workflow integration challenges, and insufficient analyst trust, that hinder the deployment of AI-driven cyber threat intelligence (CTI) in the financial sector. Through a systematic literature review (330 publications screened, 12 included), six in-depth interviews, and 14 practitioner survey responses, the research identifies four socio-technical failure modes: shadow use of public AI tools, license-first enablement without operational integration, attacker-perception gaps, and missing security for the AI models themselves. Findings reveal that while 71.4% of practitioners expect AI to become central to CTI within five years, 57.1% report infrequent use due to inadequate explainability and assurance mechanisms, and 28.6% have directly encountered adversarial risks. To bridge the gap between research and practice in trustworthy AI adoption, the study proposes three security-oriented operational safeguards for AI-CTI systems in finance.

📝 Abstract
Financial institutions face increasing cyber risk while operating under strict regulatory oversight. To manage this risk, they rely heavily on Cyber Threat Intelligence (CTI) to inform detection, response, and strategic security decisions. Artificial intelligence (AI) is widely suggested as a means to strengthen CTI. However, evidence of trustworthy production use in finance remains limited. Adoption depends not only on predictive performance, but also on governance, integration into security workflows, and analyst trust. Thus, we examine how AI is used for CTI in practice within financial institutions and what barriers prevent trustworthy deployment. We report a mixed-methods, user-centric study combining a CTI-finance-focused systematic literature review, semi-structured interviews, and an exploratory survey. Our review screened 330 publications (2019–2025) and retained 12 finance-relevant studies for analysis; we further conducted six interviews and collected 14 survey responses from banks and consultancies. Across research and practice, we identify four recurrent socio-technical failure modes that hinder trustworthy AI-driven CTI: (i) shadow use of public AI tools outside institutional controls, (ii) license-first enablement without operational integration, (iii) attacker-perception gaps that limit adversarial threat modeling, and (iv) missing security for the AI models themselves, including limited monitoring, robustness evaluation, and audit-ready evidence. Survey results provide additional insights: 71.4% of respondents expect AI to become central within five years, 57.1% report infrequent current use due to interpretability and assurance concerns, and 28.6% report direct encounters with adversarial risks. Based on these findings, we derive three security-oriented operational safeguards for AI-enabled CTI deployments.
Problem

Research questions and friction points this paper is trying to address.

AI-driven Cyber Threat Intelligence
trustworthy AI
financial institutions
security barriers
adversarial risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

trustworthy AI
cyber threat intelligence
financial security
adversarial robustness
AI governance
Emir Karaosman
University of Liechtenstein, Vaduz, Liechtenstein
Advije Rizvani
University of Liechtenstein, Vaduz, Liechtenstein
Irdin Pekaric
Postdoctoral Researcher, University of Liechtenstein
information security, security of self-adaptive systems, attack modeling, attack generation