🤖 AI Summary
To address the public health threat posed by large language models (LLMs) generating vaccine-related misinformation, this paper introduces VaxGuard, a benchmark for detecting LLM-generated vaccine misinformation. VaxGuard covers multiple generative models (GPT, Phi-3, Mistral), diverse misinformation types (e.g., fear-mongering, anti-scientific narratives), and heterogeneous speaker roles (e.g., parents, influencers, healthcare professionals). The paper proposes a unified modeling framework spanning three dimensions (generator, content type, and speaker role) together with a role-aware evaluation methodology. The analysis reveals systematic performance degradation on emotionally charged narratives and long-form text, with accuracy dropping by 9.3% per additional 100 tokens. In experiments, GPT-4o achieves an F1-score of 86.2% on fear-based misinformation, significantly outperforming Phi-3 (62.1%). The dataset and evaluation framework are publicly released as a resource for vaccine misinformation detection.
📝 Abstract
Recent advancements in Large Language Models (LLMs) have significantly improved text generation capabilities. However, they also present challenges, particularly in generating vaccine-related misinformation, which poses risks to public health. Despite research on human-authored misinformation, a notable gap remains in understanding how LLMs contribute to vaccine misinformation and how best to detect it. Existing benchmarks often overlook vaccine-specific misinformation and the diverse roles of misinformation spreaders. This paper introduces VaxGuard, a novel dataset designed to address these challenges. VaxGuard includes vaccine-related misinformation generated by multiple LLMs and provides a comprehensive framework for detecting misinformation across various roles. Our findings show that GPT-3.5 and GPT-4o consistently outperform other LLMs in detecting misinformation, especially when dealing with subtle or emotionally charged narratives. In contrast, Phi-3 and Mistral perform worse, struggling with precision and recall in fear-driven contexts. Additionally, detection performance tends to decline as input text length increases, indicating the need for improved methods to handle longer content. These results highlight the importance of role-specific detection strategies and suggest that VaxGuard can serve as a key resource for improving the detection of LLM-generated vaccine misinformation.