EXPLICATE: Enhancing Phishing Detection through Explainable AI and LLM-Powered Interpretability

📅 2025-03-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing phishing detection systems suffer from limited interpretability because their machine learning models operate as "black boxes", undermining user trust and slowing threat response. To address this, the authors propose EXPLICATE, a three-component explainable detection framework: (1) a high-accuracy classifier built on domain-specific features; (2) a dual-explanation layer integrating LIME (local interpretability) and SHAP (global interpretability); and (3) a natural language explanation module powered by DeepSeek-v3 that maps technical feature attributions to human-readable justifications. The framework achieves 98.4% detection accuracy, 94.2% explanation generation accuracy, and 96.8% explanation–prediction consistency. It has been deployed as both a GUI application and a lightweight Chrome extension, bridging the gap between AI decision transparency and end-user trust.
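The paper does not ship code, but the first two components can be illustrated with a short sketch. Below is a minimal, hypothetical Python example using scikit-learn, `lime`, and `shap` on synthetic data; the feature names and dataset are invented for illustration and are not the paper's actual feature set.

```python
# Minimal sketch of components (1) + (2): a classifier over domain features
# with SHAP (global) and LIME (local) attributions. All data and feature
# names here are synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["url_length", "num_subdomains", "has_ip_address",
                 "domain_age_days", "uses_https"]  # hypothetical features

# Synthetic stand-in data: 500 samples, label correlated with two features.
rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # 1 = phishing, 0 = legitimate
X_train, y_train, X_test = X[:400], y[:400], X[400:]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Global view: mean |SHAP value| per feature over the test set.
sv = shap.TreeExplainer(clf).shap_values(X_test)
phishing_sv = sv[1] if isinstance(sv, list) else sv[..., 1]  # phishing class
print(dict(zip(feature_names, np.abs(phishing_sv).mean(axis=0).round(3))))

# Local view: LIME explanation for a single sample.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["legitimate", "phishing"], discretize_continuous=True)
exp = lime_explainer.explain_instance(X_test[0], clf.predict_proba, num_features=3)
print(exp.as_list())  # [(feature condition, weight), ...]
```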

📝 Abstract
Sophisticated phishing attacks have emerged as a major cybersecurity threat, becoming both more common and more difficult to prevent. Although machine learning techniques have shown promise in detecting phishing attacks, they function mainly as "black boxes" that do not reveal their decision-making rationale. This lack of transparency erodes user trust and diminishes users' ability to respond effectively to threats. We present EXPLICATE, a framework that enhances phishing detection through a three-component architecture: an ML-based classifier using domain-specific features, a dual-explanation layer combining LIME and SHAP for complementary feature-level insights, and an LLM enhancement using DeepSeek-v3 to translate technical explanations into accessible natural language. Our experiments show that EXPLICATE attains 98.4% accuracy across all metrics, on par with existing deep learning techniques while offering better explainability. The framework generates high-quality explanations with 94.2% accuracy and 96.8% consistency between the LLM output and the model prediction. We implement EXPLICATE as both a fully usable GUI application and a lightweight Chrome extension, demonstrating its applicability across deployment scenarios. The research shows that high detection performance can go hand-in-hand with meaningful explainability in security applications. Most importantly, it addresses the critical divide between automated AI decisions and user trust in phishing detection systems.
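As a concrete illustration of the third component, the sketch below shows how feature attributions might be handed to DeepSeek-v3 for translation into plain language. It assumes DeepSeek's OpenAI-compatible chat endpoint; the prompt wording and the helper function are assumptions for illustration, not the paper's actual prompt.

```python
# Hypothetical sketch of component (3): translating attributions into a
# user-facing explanation via DeepSeek's OpenAI-compatible chat API.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

def explain_in_plain_language(prediction: str,
                              attributions: list[tuple[str, float]]) -> str:
    """Translate (feature, weight) attributions into a short justification."""
    feats = "\n".join(f"- {name}: weight {w:+.3f}" for name, w in attributions)
    prompt = (
        f"A phishing classifier labeled a URL as '{prediction}'.\n"
        f"The most influential features and their attribution weights were:\n"
        f"{feats}\n"
        "Explain in two or three plain-English sentences why the URL received "
        "this label, without contradicting the classifier's decision."
    )
    resp = client.chat.completions.create(
        model="deepseek-chat",  # DeepSeek-V3 chat model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,        # low temperature keeps explanations consistent
    )
    return resp.choices[0].message.content

print(explain_in_plain_language(
    "phishing",
    [("has_ip_address", 0.41), ("domain_age_days", -0.32), ("url_length", 0.18)]))
```
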
Problem

Research questions and friction points this paper is trying to address.

Enhancing phishing detection with explainable AI
Bridging AI decision-making and user trust
Combining ML and LLM for interpretable cybersecurity
Innovation

Methods, ideas, or system contributions that make the work stand out.

ML classifier with domain-specific features
Dual-explanation layer combining LIME and SHAP
LLM enhancement for natural language explanations
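The 96.8% explanation–prediction consistency reported above implies an automated check that the LLM's narrative agrees with the classifier's label. A minimal sketch of one such check follows; the keyword-based parsing rule is an assumption, since the paper's scoring code is not published.

```python
# Illustrative consistency scoring: does each LLM explanation endorse the
# same label the classifier produced? The parsing rule is an assumption.
def is_consistent(predicted_label: str, explanation: str) -> bool:
    """True if the explanation's verdict matches the classifier's label."""
    text = explanation.lower()
    says_phishing = "phishing" in text and "not phishing" not in text
    return says_phishing == (predicted_label == "phishing")

def consistency_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (label, explanation) pairs that agree."""
    return sum(is_consistent(label, expl) for label, expl in pairs) / len(pairs)

pairs = [
    ("phishing", "This URL is likely phishing: it embeds a raw IP address."),
    ("legitimate", "The domain is long-established, so this is not phishing."),
]
print(consistency_rate(pairs))  # 1.0 on this toy sample
```
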
👥 Authors
Bryan Lim
Autodesk Research
Robotics · Machine Learning · Reinforcement Learning

Roman Huerta
Department of Computer Science, University of Texas at Permian Basin, Odessa, Texas, USA

Alejandro Sotelo
Department of Computer Science, University of Texas at Permian Basin, Odessa, Texas, USA

Anthonie Quintela
Department of Computer Science, University of Texas at Permian Basin, Odessa, Texas, USA

Priyanka Kumar
University of Texas Permian Basin
Artificial Intelligence · Data Science · AI Education · Blockchain Technology