Towards Adaptive Human-centric Video Anomaly Detection: A Comprehensive Framework and A New Benchmark

📅 2024-08-26
📈 Citations: 2
Influential: 1
🤖 AI Summary
Human-centric video anomaly detection (VAD) faces challenges including high behavioral diversity, extreme scarcity of abnormal samples, and stringent privacy and ethics constraints, all of which hinder scalable dataset construction and continual learning. To address these, we propose HuVAD, the first privacy-enhanced VAD benchmark, covering seven real-world scenarios and offering more than a 5x increase in pose-annotated frames over previous datasets. We further introduce UCAL, an unsupervised continual anomaly learning framework integrating (i) human-pose-driven feature modeling, (ii) de-identified annotations for privacy, and (iii) a multi-granularity anomaly scoring mechanism enabling label-free incremental adaptation. Evaluated on both standard and continual-learning benchmarks, UCAL achieves state-of-the-art performance on 82.14% of metrics, demonstrating substantial improvements in long-tail anomaly recognition and cross-scenario generalization.

📝 Abstract
Human-centric Video Anomaly Detection (VAD) aims to identify human behaviors that deviate from normal patterns. At its core, human-centric VAD faces substantial challenges, such as the complexity of diverse human behaviors, the rarity of anomalies, and ethical constraints. These challenges limit access to high-quality datasets and highlight the need for a dataset and framework that support continual learning. Moving towards adaptive human-centric VAD, we introduce the HuVAD (Human-centric privacy-enhanced Video Anomaly Detection) dataset and a novel Unsupervised Continual Anomaly Learning (UCAL) framework. UCAL enables incremental learning, allowing models to adapt over time and bridging traditional training and real-world deployment. HuVAD prioritizes privacy by providing de-identified annotations and includes seven indoor/outdoor scenes, offering over 5x more pose-annotated frames than previous datasets. Our standard and continual benchmarks utilize a comprehensive set of metrics, demonstrating that UCAL-enhanced models achieve superior performance in 82.14% of cases, setting a new state-of-the-art (SOTA). The dataset can be accessed at https://github.com/TeCSAR-UNCC/HuVAD.
Problem

Research questions and friction points this paper is trying to address.

Identify abnormal human behaviors in videos
Address challenges like diverse behaviors and rare anomalies
Develop adaptive learning for real-world anomaly detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces HuVAD dataset with privacy-enhanced annotations
Proposes UCAL framework for unsupervised continual anomaly learning
Achieves superior performance in 82.14% of cases
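To make the idea of label-free, pose-driven anomaly scoring concrete, here is a minimal sketch: score each test pose-feature vector by its distance to the nearest normal training vectors, so unfamiliar poses get higher scores. This is a generic k-nearest-neighbor illustration under assumed feature shapes (17 keypoints flattened to 34 values), not the paper's UCAL scoring mechanism; all names are hypothetical.

```python
import numpy as np

def anomaly_scores(normal_feats, test_feats, k=3):
    """Score each test vector by its mean distance to the k nearest
    normal (training-time) vectors: larger distance = more anomalous.
    Generic k-NN sketch, not the UCAL multi-granularity mechanism."""
    # Pairwise Euclidean distances: (n_test, n_normal)
    d = np.linalg.norm(test_feats[:, None, :] - normal_feats[None, :, :], axis=-1)
    # Mean distance to the k closest normal vectors per test sample
    knn = np.sort(d, axis=1)[:, :k]
    return knn.mean(axis=1)

# Toy example: normal pose features cluster near the origin.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(200, 34))  # 17 keypoints x (x, y)
usual = rng.normal(0.0, 0.1, size=(5, 34))     # behaves like training data
odd = rng.normal(2.0, 0.1, size=(5, 34))       # displaced pose features
scores = anomaly_scores(normal, np.vstack([usual, odd]))
```

In a continual-learning setting, the reference set of normal features could be refreshed incrementally as new unlabeled footage arrives, which is the kind of adaptation UCAL targets; here that step is omitted for brevity.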