Advancing Video Anomaly Detection: A Bi-Directional Hybrid Framework for Enhanced Single- and Multi-Task Approaches

📅 2024-12-11
🏛️ IEEE Transactions on Image Processing
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the suboptimal single-task proxy frameworks commonly used in video anomaly detection, which also limit the gains achievable by multi-task learning, this paper proposes a bidirectional hybrid prediction framework built around middle-frame prediction as the core proxy task. It couples a convolutional temporal Transformer with a layer-interactive ConvLSTM bridge, enabling long-range spatiotemporal modeling alongside fine-grained feature sharing across layers and time steps. A forward/backward prediction discrepancy mechanism further sharpens sensitivity to anomalies. On multiple public benchmarks, the method improves both single-task and multi-task detection performance and yields more stable, precise anomaly localization. The source code is publicly available.
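As a rough illustration of the forward/backward discrepancy idea, the sketch below scores each frame by the worse of its two prediction errors using a PSNR-style measure. The min-combination rule, the normalization, and all shapes are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted and an actual frame."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def anomaly_scores(fwd_preds, bwd_preds, targets):
    """Score each target frame by the worse (lower-PSNR) of its forward and
    backward predictions, then min-max normalize so higher = more anomalous."""
    raw = np.array([min(psnr(f, t), psnr(b, t))
                    for f, b, t in zip(fwd_preds, bwd_preds, targets)])
    norm = (raw - raw.min()) / (raw.max() - raw.min() + 1e-8)
    return 1.0 - norm  # invert: poorly predicted frames score near 1

# toy example: 4 "frames", one predicted badly in both directions
rng = np.random.default_rng(0)
targets = [rng.random((8, 8)) for _ in range(4)]
fwd = [t + rng.normal(0, 0.01, t.shape) for t in targets]
bwd = [t + rng.normal(0, 0.01, t.shape) for t in targets]
fwd[2] = rng.random((8, 8))   # simulate a flawed prediction (anomaly)
bwd[2] = rng.random((8, 8))
scores = anomaly_scores(fwd, bwd, targets)
```

Here `scores[2]` dominates because frame 2 is poorly reconstructed from both temporal directions, which is the intuition behind using prediction discrepancy as the anomaly signal.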

📝 Abstract
Despite the prevailing transition from single-task to multi-task approaches in video anomaly detection, we observe that many adopt sub-optimal frameworks for individual proxy tasks. Motivated by this, we contend that optimizing single-task frameworks can advance both single- and multi-task approaches. Accordingly, we leverage middle-frame prediction as the primary proxy task, and introduce an effective hybrid framework designed to generate accurate predictions for normal frames and flawed predictions for abnormal frames. This hybrid framework is built upon a bi-directional structure that seamlessly integrates both vision transformers and ConvLSTMs. Specifically, we utilize this bi-directional structure to fully analyze the temporal dimension by predicting frames in both forward and backward directions, significantly boosting the detection stability. Given the transformer’s capacity to model long-range contextual dependencies, we develop a convolutional temporal transformer that efficiently associates feature maps from all context frames to generate attention-based predictions for target frames. Furthermore, we devise a layer-interactive ConvLSTM bridge that facilitates the smooth flow of low-level features across layers and time-steps, thereby strengthening predictions with fine details. Anomalies are eventually identified by scrutinizing the discrepancies between target frames and their corresponding predictions. Several experiments conducted on public benchmarks affirm the efficacy of our hybrid framework, whether used as a standalone single-task approach or integrated as a branch in a multi-task approach. These experiments also underscore the advantages of merging vision transformers and ConvLSTMs for video anomaly detection. The implementation of our hybrid framework is available at https://github.com/SHENGUODONG19951126/ConvTTrans-ConvLSTM.
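For readers unfamiliar with ConvLSTMs, the building block the bridge relies on can be sketched as a single-channel numpy cell in which every gate is a convolution of the input frame and the previous hidden state, so spatial structure is preserved while memory flows across time steps. The single-channel simplification, kernel shapes, and random weights are illustrative, not the paper's configuration:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2-D convolution (cross-correlation form)
    with zero padding, output the same size as the input."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, kernels):
    """One ConvLSTM step: each gate convolves the input and hidden state."""
    kxi, khi, kxf, khf, kxo, kho, kxg, khg = kernels
    i = sigmoid(conv2d_same(x, kxi) + conv2d_same(h, khi))  # input gate
    f = sigmoid(conv2d_same(x, kxf) + conv2d_same(h, khf))  # forget gate
    o = sigmoid(conv2d_same(x, kxo) + conv2d_same(h, kho))  # output gate
    g = np.tanh(conv2d_same(x, kxg) + conv2d_same(h, khg))  # candidate memory
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(2)
kernels = [rng.normal(0, 0.1, (3, 3)) for _ in range(8)]
h = c = np.zeros((8, 8))
for t in range(4):                      # unroll over 4 context frames
    frame = rng.random((8, 8))
    h, c = convlstm_step(frame, h, c, kernels)
```

The hidden state `h` keeps the frame's spatial layout, which is what lets a layer-interactive bridge pass low-level spatial detail between layers and time steps rather than a flattened vector.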
Problem

Research questions and friction points this paper is trying to address.

Sub-optimal proxy-task frameworks hold back both single- and multi-task video anomaly detection
How to model long-range temporal context while preserving fine spatial detail in frame prediction
Unstable detection when frames are predicted in only one temporal direction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bi-directional hybrid framework with transformers and ConvLSTMs
Convolutional temporal transformer for long-range dependencies
Layer-interactive ConvLSTM bridge for fine detail enhancement
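The attention mechanism underlying the convolutional temporal transformer can be sketched, in heavily simplified form, as softmax attention over per-frame feature vectors; here the convolutional Q/K/V projections are dropped and the one-hot features are purely illustrative:

```python
import numpy as np

def temporal_attention(context_feats, query_feat):
    """Softmax-attend over T context-frame features (T, D) with a single
    query (D,); return the weights and the attention-based prediction."""
    d = query_feat.shape[0]
    logits = context_feats @ query_feat / np.sqrt(d)   # (T,) similarities
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                           # softmax over time
    return weights, weights @ context_feats            # (T,), (D,)

# toy setup: context frame i carries a distinctive feature at dimension i
context = np.eye(6, 16)          # 6 context frames, 16-dim features each
query = np.zeros(16)
query[3] = 5.0                   # query resembles context frame 3
weights, pred = temporal_attention(context, query)
```

Because the query matches frame 3's feature, the attention weights concentrate there and the prediction inherits that frame's content; the paper's version applies this association over full feature maps rather than vectors.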