🤖 AI Summary
Existing phishing detection systems suffer from limited interpretability due to the “black-box” nature of machine learning models, undermining user trust and response efficiency. To address this, we propose a three-component explainable detection framework: (1) a high-accuracy classifier leveraging domain-based features; (2) a dual-explanation layer integrating LIME (local interpretability) and SHAP (global interpretability); and (3) a natural language explanation generation module powered by DeepSeek-v3, enabling reliable mapping from technical attributions to human-readable justifications. The framework achieves 98.4% detection accuracy while maintaining 94.2% explanation generation accuracy and 96.8% explanation–prediction consistency. It has been deployed in both a GUI application and a lightweight Chrome extension, effectively bridging the gap between AI decision transparency and end-user trust.
📝 Abstract
Sophisticated phishing attacks have emerged as a major cybersecurity threat, growing both more common and harder to prevent. Although machine learning techniques have shown promise in detecting phishing attacks, they function largely as "black boxes" that do not reveal their decision-making rationale. This lack of transparency erodes user trust and hampers effective threat response. We present EXPLICATE, a framework that enhances phishing detection through a three-component architecture: an ML-based classifier using domain-specific features, a dual-explanation layer combining LIME and SHAP for complementary feature-level insights, and an LLM enhancement using DeepSeek v3 that translates technical explanations into accessible natural language. Our experiments show that EXPLICATE attains 98.4% accuracy across all metrics, on par with existing deep learning techniques while offering better explainability. The framework generates high-quality explanations with 94.2% accuracy and 96.8% consistency between the LLM output and the model prediction. We deploy EXPLICATE as both a fully usable GUI application and a lightweight Chrome extension, demonstrating its applicability across deployment scenarios. The research shows that high detection performance can go hand in hand with meaningful explainability in security applications. Most importantly, it addresses the critical divide between automated AI decisions and user trust in phishing detection systems.
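To make the dual-explanation idea concrete, the following is a minimal, self-contained sketch of the local-surrogate technique behind the LIME half of the explanation layer. Everything here is illustrative: the domain-based feature names, the synthetic data, the perturbation scale, and the proximity kernel are all hypothetical, and a scikit-learn random forest stands in for the paper's actual classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical domain-based features of a URL/domain.
feature_names = ["url_length", "num_dots", "has_https", "domain_age_days"]

# Synthetic training set: "phishing" correlates with long URLs, many dots,
# missing HTTPS, and a young domain (signs chosen for illustration only).
X = rng.normal(size=(500, 4))
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.5 * X[:, 2] - 0.7 * X[:, 3]
     + rng.normal(scale=0.3, size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_explanation(model, x, n_samples=2000, kernel_width=1.0):
    """LIME-style local surrogate: perturb around instance x, weight each
    perturbation by proximity to x, fit a weighted linear model to the
    classifier's phishing probability, and read the coefficients as
    per-feature attributions for this one prediction."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    probs = model.predict_proba(Z)[:, 1]          # P(phishing) near x
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return dict(zip(feature_names, surrogate.coef_))

# An instance near the decision boundary, where local attributions matter most.
x0 = np.array([0.2, 0.1, 0.0, 0.0])
attributions = local_explanation(clf, x0)
```

In the full framework these signed attributions (local from LIME, global from SHAP) would be the structured input handed to the LLM module, which rephrases them as natural-language justifications; that prompting step is not shown here.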