QVAD: A Question-Centric Agentic Framework for Efficient and Training-Free Video Anomaly Detection

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of semantic ambiguity in static prompts and high computational overhead in video anomaly detection, stemming from the open-ended nature of anomalies. To overcome these issues, the authors propose a training-free agentic framework that leverages iterative multi-turn dialogues between a Vision-Language Model (VLM) and a Large Language Model (LLM) to dynamically refine query prompts. This context-aware prompt-updating mechanism effectively unlocks the reasoning capabilities of lightweight models. The method achieves state-of-the-art performance on the UCF-Crime, XD-Violence, and UBNormal benchmarks, demonstrates strong generalization on ComplexVAD, and significantly reduces computational cost, making it suitable for edge deployment.
📝 Abstract
Video Anomaly Detection (VAD) is a fundamental challenge in computer vision, particularly due to the open-set nature of anomalies. While recent training-free approaches utilizing Vision-Language Models (VLMs) have shown promise, they typically rely on massive, resource-intensive foundation models to compensate for the ambiguity of static prompts. We argue that the bottleneck in VAD is not necessarily model capacity, but rather the static nature of inquiry. We propose QVAD, a question-centric agentic framework that treats VLM-LLM interaction as a dynamic dialogue. By iteratively refining queries based on visual context, our LLM agent guides smaller VLMs to produce high-fidelity captions and precise semantic reasoning without parameter updates. This "prompt-updating" mechanism effectively unlocks the latent capabilities of lightweight models, enabling state-of-the-art performance on UCF-Crime, XD-Violence, and UBNormal using a fraction of the parameters required by competing methods. We further demonstrate exceptional generalizability on the single-scene ComplexVAD dataset. Crucially, QVAD achieves high inference speeds with minimal memory footprints, making advanced VAD capabilities deployable on resource-constrained edge devices.
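The iterative dialogue the abstract describes can be sketched as a simple loop: the LLM agent refines the question posed to the VLM each turn, then scores the accumulated dialogue. This is an illustrative sketch, not the authors' code; all function names and the toy stand-in models below are hypothetical.

```python
# Hypothetical sketch of a question-centric agentic loop: an LLM agent
# iteratively refines the prompt sent to a VLM, then scores the dialogue.
from typing import Callable, List

def agentic_anomaly_query(
    vlm_fn: Callable[[str], str],                  # VLM: prompt -> frame caption
    refine_fn: Callable[[str, List[str]], str],    # LLM: refine next query
    score_fn: Callable[[List[str]], float],        # LLM: dialogue -> anomaly score
    init_prompt: str = "Describe any unusual activity in this frame.",
    max_turns: int = 3,
) -> float:
    """Run a multi-turn VLM-LLM dialogue and return an anomaly score in [0, 1]."""
    prompt = init_prompt
    dialogue: List[str] = []
    for _ in range(max_turns):
        caption = vlm_fn(prompt)            # VLM answers the current question
        dialogue.append(caption)
        prompt = refine_fn(prompt, dialogue)  # LLM sharpens the next question
    return score_fn(dialogue)

# Toy stand-ins so the sketch runs end to end (no real models involved).
def toy_vlm(prompt: str) -> str:
    return "A person is running and another person falls down."

def toy_refiner(prompt: str, dialogue: List[str]) -> str:
    return prompt + " Focus on the fallen person: is anyone helping or fleeing?"

def toy_scorer(dialogue: List[str]) -> float:
    return 1.0 if any("falls" in c for c in dialogue) else 0.0

print(agentic_anomaly_query(toy_vlm, toy_refiner, toy_scorer))  # prints 1.0
```

The key design point from the abstract is that all adaptation happens in the prompt, not in model weights, which is why lightweight frozen VLMs suffice.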
Problem

Research questions and friction points this paper is trying to address.

Video Anomaly Detection
Training-Free
Vision-Language Models
Open-Set Anomalies
Static Prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Question-Centric Agentic Framework
Training-Free Video Anomaly Detection
Dynamic Prompt Updating
Vision-Language Models
Efficient Inference
Lokman Bekit
Department of Electrical Engineering, University of South Florida, Tampa, Florida, USA
Hamza Karim
Department of Electrical Engineering, University of South Florida, Tampa, Florida, USA
Nghia T Nguyen
Department of Electrical Engineering, University of South Florida, Tampa, Florida, USA
Yasin Yilmaz
University of South Florida
Machine Learning, Computer Vision, Anomaly Detection, Cybersecurity, AI Security