An LLM Agent-based Framework for Whaling Countermeasures

📅 2026-01-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the growing threat of high-precision, generative-AI-powered whaling attacks targeting senior university personnel, a threat for which existing defenses lack personalization and contextual awareness. The work proposes the first defense framework based on large language model (LLM) agents: it constructs individualized vulnerability profiles by mining publicly available information, identifies high-risk scenarios, and generates contextually coherent, interpretable, and personalized defensive strategies. In preliminary experiments, the approach, which tailors whaling protection to academic staff, produced realistic risk assessments and strategy explanations aligned with the targets’ actual professional contexts. These findings validate the framework’s feasibility while highlighting key challenges for real-world deployment.

📝 Abstract
With the spread of generative AI in recent years, attacks known as Whaling have become a serious threat. Whaling is a form of social engineering that targets important high-authority individuals within organizations and uses sophisticated fraudulent emails. In the context of Japanese universities, faculty members frequently hold positions that combine research leadership with authority within institutional workflows. This structural characteristic leads to the wide public disclosure of high-value information such as publications, grants, and detailed researcher profiles. Such extensive information exposure enables the construction of highly precise target profiles using generative AI. This raises concerns that Whaling attacks based on high-precision profiling by generative AI will become prevalent. In this study, we propose a Whaling countermeasure framework for university faculty members that constructs personalized defense profiles and uses large language model (LLM)-based agents. We design agents that (i) build vulnerability profiles for each target from publicly available information on faculty members, (ii) identify potential risk scenarios relevant to Whaling defense based on those profiles, (iii) construct defense profiles corresponding to the vulnerabilities and anticipated risks, and (iv) analyze Whaling emails using the defense profiles. Furthermore, we conduct a preliminary risk-assessment experiment. The results indicate that the proposed method can produce judgments accompanied by explanations of response policies that are consistent with the work context of faculty members who are Whaling targets. The findings also highlight practical challenges and considerations for future operational deployment and systematic evaluation.
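The four agent roles (i)–(iv) described in the abstract can be sketched as a simple pipeline. The following is a minimal illustration, not the paper's implementation: the `llm` function is a hypothetical stand-in for a real model API call, and all names (`FacultyProfile`, `analyze_email`, etc.) are assumptions introduced here for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for an LLM API call; the paper's agents would query a real model.
def llm(prompt: str) -> str:
    return f"[LLM output for: {prompt[:50]}]"

@dataclass
class FacultyProfile:
    name: str
    public_info: list[str]                               # mined open-source data (publications, grants, roles)
    vulnerabilities: list[str] = field(default_factory=list)
    risk_scenarios: list[str] = field(default_factory=list)
    defense_profile: list[str] = field(default_factory=list)

def build_vulnerability_profile(p: FacultyProfile) -> None:
    # (i) infer exploitable traits from publicly available information
    for item in p.public_info:
        p.vulnerabilities.append(llm(f"Vulnerabilities implied by: {item}"))

def identify_risk_scenarios(p: FacultyProfile) -> None:
    # (ii) derive whaling-relevant risk scenarios from those vulnerabilities
    for v in p.vulnerabilities:
        p.risk_scenarios.append(llm(f"Whaling scenario exploiting: {v}"))

def construct_defense_profile(p: FacultyProfile) -> None:
    # (iii) map each anticipated risk to a defensive rule
    for s in p.risk_scenarios:
        p.defense_profile.append(llm(f"Defense policy against: {s}"))

def analyze_email(p: FacultyProfile, email: str) -> dict:
    # (iv) judge an incoming email against the defense profile, with an explanation
    verdict = llm(f"Assess '{email}' given rules: {p.defense_profile}")
    return {"target": p.name, "email": email, "judgment": verdict}

profile = FacultyProfile("Prof. A", ["grant award announcement", "lab web page"])
build_vulnerability_profile(profile)
identify_risk_scenarios(profile)
construct_defense_profile(profile)
report = analyze_email(profile, "Urgent: wire transfer for grant processing")
print(report["judgment"])
```

Each stage consumes the previous stage's output, so the final email judgment is conditioned on the target's individual exposure rather than on generic phishing heuristics.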
Problem

Research questions and friction points this paper is trying to address.

Whaling
social engineering
generative AI
LLM agents
cybersecurity
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agent
Whaling defense
personalized profiling
social engineering countermeasure
explainable response policy
🔎 Similar Papers

Daisuke Miyamoto
National Graduate Institute for Policy Studies, Tokyo, Japan
cyber security

Takuji Iimura
National Graduate Institute for Policy Studies, Tokyo, Japan

Narushige Michishita
National Graduate Institute for Policy Studies, Tokyo, Japan