Promoting Online Safety by Simulating Unsafe Conversations with LLMs

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Public online security awareness remains low, and users struggle to recognize deceptive conversational patterns in social-engineering scams.
Method: The study proposes an interactive, large language model (LLM)-driven security education framework. A dual-LLM adversarial simulation architecture autonomously generates high-fidelity, diverse scam dialogues, and principles from the learning sciences, including just-in-time feedback and guided reflection, help users identify linguistic cues, assess risks, and practice defensive responses.
Contribution/Results: Empirical evaluation demonstrates significant improvements in both scam detection accuracy and willingness to enact protective behaviors. The work constitutes the first systematic validation of LLM-powered, scenario-based simulation for cybersecurity literacy education, establishing its efficacy, scalability, and pedagogical viability, and introduces an AI-augmented paradigm for security education grounded in authentic, adaptive interaction.

📝 Abstract
Generative AI, including large language models (LLMs), has the potential to increase the speed, scale, and types of unsafe conversations online, and is already being used to do so. LLMs lower the barrier to entry for bad actors to create unsafe conversations, in particular because of their ability to generate persuasive and human-like text. In our current work, we explore ways to promote online safety by teaching people about unsafe conversations that can occur online with and without LLMs. We build on prior work showing that LLMs can successfully simulate scam conversations. We also leverage research in the learning sciences showing that providing feedback on one's hypothetical actions can promote learning. In particular, we focus on simulating scam conversations using LLMs. Our work incorporates two LLMs that converse with each other, a scammer LLM and a target LLM, to simulate realistic unsafe conversations that people may encounter online; users of our system are asked to provide feedback to the target LLM.
Problem

Research questions and friction points this paper is trying to address.

Simulating unsafe online conversations using LLMs
Teaching people about online scam interactions
Providing feedback to enhance safety awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulating scam conversations using dual LLMs
Providing feedback on hypothetical unsafe interactions
Teaching online safety through realistic LLM dialogues
Owen Hoffman
Department of Computer Science, Swarthmore College, USA
Kangze Peng
Department of Computer Science, Swarthmore College, USA
Zehua You
Department of Computer Science, Swarthmore College, USA
Sajid Kamal
Department of Computer Science, Swarthmore College, USA
Sukrit Venkatagiri
Assistant Professor, Swarthmore College
Human-Computer Interaction · Crowdsourcing · Misinformation · Content Moderation · OSINT