🤖 AI Summary
Phishing email detection continues to face the challenge of balancing high classification accuracy with model interpretability. To address this, we propose a lightweight, interpretable detection framework based on DistilBERT: it improves classification performance through text preprocessing, SMOTE-based class balancing, and task-specific fine-tuning. Crucially, we integrate LIME and Transformer Interpret, two complementary interpretability methods, into the phishing detection pipeline, enabling fine-grained, token-level attribution visualization. Evaluated on a standard benchmark dataset, our model achieves high accuracy while substantially improving prediction transparency and debuggability, allowing security operations personnel to rapidly validate, audit, and intervene in detection outcomes. Our work provides both a methodological foundation and a practical implementation for trustworthy, resource-efficient phishing detection, bridging performance, explainability, and operational utility in real-world cybersecurity settings.
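The class-balancing step can be illustrated with a minimal SMOTE-style interpolation sketch in pure NumPy. This is a simplified stand-in for the summary's "SMOTE-based class balancing" (a production pipeline would more likely use a library such as imbalanced-learn; the function name and toy data below are illustrative assumptions, not the paper's code):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority-class samples by interpolating
    each seed point toward one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Euclidean distances from sample i to all other minority samples
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        nn = np.argsort(d)[:k]              # k nearest minority neighbors
        j = rng.choice(nn)
        gap = rng.random()                  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Toy minority class: 5 points in a 2-D feature space
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
X_syn = smote_oversample(X_min, n_new=10)
print(X_syn.shape)  # (10, 2)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the minority data already occupies, which is the core idea that distinguishes SMOTE from naive duplication.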
📝 Abstract
Phishing is a serious cyber threat in which attackers send deceptive emails intended to steal confidential information or cause financial harm. Attackers, often posing as trustworthy entities, exploit technological advancements and increasing sophistication to make phishing harder to detect and prevent. Despite extensive academic research, phishing detection remains a formidable challenge in the cybersecurity landscape. Large Language Models (LLMs) and Masked Language Models (MLMs) hold immense potential to offer innovative solutions to these long-standing challenges. In this paper, we present an optimized, fine-tuned transformer-based DistilBERT model for detecting phishing emails. In the detection process, we work with a phishing email dataset and apply preprocessing techniques to clean the text and address class imbalance. Our experiments show that the model achieves high accuracy. Finally, we explain how our fine-tuned model makes predictions in the context of phishing email text classification, using Explainable-AI (XAI) techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and Transformer Interpret.
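The core mechanism behind the LIME explanations mentioned above can be sketched in a few lines: randomly mask tokens of the input, query the classifier on each perturbed text, then fit a linear surrogate whose per-token coefficients serve as local importance scores. The sketch below substitutes a toy keyword-based scorer for the fine-tuned DistilBERT model (the scorer, sample text, and function names are illustrative assumptions; the actual paper uses the `lime` and `transformers-interpret` libraries):

```python
import numpy as np

def toy_phish_score(text):
    """Stand-in for the fine-tuned classifier: the 'phishing' probability
    rises with the number of suspicious keywords present (illustrative only)."""
    suspicious = {"urgent", "verify", "password", "click"}
    hits = sum(w in suspicious for w in text.lower().split())
    return 1 - 0.5 ** hits  # more suspicious words -> score closer to 1

def lime_token_weights(text, predict, n_samples=500, seed=0):
    """LIME-style attribution: randomly mask tokens, query the model,
    then fit a least-squares linear surrogate over keep/mask indicators."""
    rng = np.random.default_rng(seed)
    tokens = text.split()
    Z = rng.integers(0, 2, size=(n_samples, len(tokens)))  # 1 = token kept
    y = np.array([
        predict(" ".join(t for t, keep in zip(tokens, z) if keep))
        for z in Z
    ])
    # Linear surrogate: one coefficient per token plus an intercept column
    A = np.hstack([Z, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return dict(zip(tokens, coef[:-1]))

weights = lime_token_weights("urgent please verify your password now",
                             toy_phish_score)
top = max(weights, key=weights.get)  # a suspicious keyword dominates
```

In the real pipeline the surrogate is fit with proximity weighting and the predictions come from the DistilBERT model, but the resulting per-token weights are what drive the token-level attribution visualizations described in this work.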