🤖 AI Summary
Existing phishing email detection methods suffer from poor interpretability, insufficient robustness against novel attacks, and limited adaptability to resource-constrained environments. This paper proposes an interpretable and robust detection framework tailored for character-level deep learning models. First, we adapt Grad-CAM—originally designed for image-based models—to character-level inputs, enabling fine-grained, token-wise visualization of model decisions. Second, we systematically evaluate the efficacy of adversarial training in enhancing the robustness of three representative character-level architectures: CharCNN, CharGRU, and CharBiLSTM. Third, experiments on a multi-source fused email dataset demonstrate that CharGRU achieves the best overall performance, and adversarial training significantly improves resilience against perturbation-based attacks. All code and datasets are publicly released, and the framework supports lightweight deployment scenarios, including browser extensions.
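The Grad-CAM adaptation described above can be illustrated with a minimal sketch: for a character-level CNN, channel weights are obtained from gradients of the class score with respect to the convolutional feature maps, and the weighted maps yield a per-character importance score. The model shape, vocabulary size, and function names below are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of Grad-CAM adapted to a character-level CNN
# (architecture and sizes are illustrative, not the paper's exact setup).
import torch
import torch.nn as nn

VOCAB = 128               # ASCII-sized character vocabulary (assumption)
EMB, CH, CLASSES = 16, 32, 2

class CharCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.conv = nn.Conv1d(EMB, CH, kernel_size=3, padding=1)
        self.fc = nn.Linear(CH, CLASSES)

    def forward(self, x):
        # x: (batch, seq_len) tensor of character ids
        h = self.emb(x).transpose(1, 2)         # (batch, EMB, seq_len)
        self.fmap = torch.relu(self.conv(h))    # keep feature maps for Grad-CAM
        self.fmap.retain_grad()                 # non-leaf tensor: keep its grad
        pooled = self.fmap.mean(dim=2)          # global average pooling
        return self.fc(pooled)

def char_grad_cam(model, x, target_class):
    """Per-character importance scores for `target_class` via Grad-CAM."""
    logits = model(x)
    model.zero_grad()
    logits[0, target_class].backward()
    # Channel weights = gradients averaged over the sequence axis
    weights = model.fmap.grad.mean(dim=2, keepdim=True)   # (1, CH, 1)
    cam = torch.relu((weights * model.fmap).sum(dim=1))   # (1, seq_len)
    return (cam / (cam.max() + 1e-8)).squeeze(0)          # normalize to [0, 1]

text = "verify your account at http://phish.example"
ids = torch.tensor([[min(ord(c), VOCAB - 1) for c in text]])
model = CharCNN()
cam = char_grad_cam(model, ids, target_class=1)
# cam[i] scores how much character i contributed to the target class.
```

Because the heat map lives at character granularity, it can be rendered directly over the email text, which is what enables the fine-grained, token-wise visualizations the summary refers to.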
📝 Abstract
Phishing attacks targeting both organizations and individuals are becoming an increasingly significant threat as technology advances. Current automatic detection methods often lack explainability and robustness in detecting new phishing attacks. In this work, we investigate the effectiveness of character-level deep learning models for phishing detection, which can provide both robustness and interpretability. We evaluate three neural architectures adapted to operate at the character level, namely CharCNN, CharGRU, and CharBiLSTM, on a custom-built email dataset, which combines data from multiple sources. Their performance is analyzed under three scenarios: (i) standard training and testing, (ii) standard training and testing under adversarial attacks, and (iii) training and testing with adversarial examples. Aiming to develop a tool that operates as a browser extension, we test all models under limited computational resources. In this constrained setup, CharGRU proves to be the best-performing model across all scenarios. All models show vulnerability to adversarial attacks, but adversarial training substantially improves their robustness. In addition, by adapting the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to character-level inputs, we are able to visualize which parts of each email influence the decision of each model. Our open-source code and data are released at https://github.com/chipermaria/every-character-counts.
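The perturbation-based attacks and adversarial training mentioned in the abstract can be sketched at the data level: character-level models are attacked by typo-style edits (dropped, substituted, or duplicated characters), and adversarial training augments the training set with such perturbed copies. The function below is a hedged illustration of this idea; the paper's actual attack and augmentation procedure may differ.

```python
# Illustrative character-level perturbation of the kind used in
# perturbation-based attacks and adversarial training (assumption:
# the paper's exact attack may use different edit operations).
import random

def perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Randomly drop, substitute, or duplicate characters (typo-style noise)."""
    rng = random.Random(seed)   # seeded for reproducible perturbations
    out = []
    for c in text:
        if rng.random() < rate:
            op = rng.choice(["drop", "sub", "dup"])
            if op == "drop":
                continue                 # delete the character
            if op == "sub":
                out.append(rng.choice("abcdefghijklmnopqrstuvwxyz"))
                continue                 # replace with a random letter
            out.extend([c, c])           # duplicate the character
        else:
            out.append(c)
    return "".join(out)

clean = "Please verify your PayPal account immediately"
adversarial = perturb(clean, rate=0.15, seed=42)
# Adversarial training would then mix (clean, label) and
# (adversarial, label) pairs into the training set.
```

At `rate=0.0` the function is the identity, and with a fixed seed it is deterministic, which keeps adversarial evaluation reproducible across the three training/testing scenarios.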