Towards an Effective Action-Region Tracking Framework for Fine-grained Video Action Recognition

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Fine-grained video action recognition (FGAR) faces challenges in modeling subtle temporal variations within localized regions. To address this, we propose the Action-Region Tracking (ART) framework, which dynamically localizes and tracks discriminative action regions via a query-response mechanism, constructing semantically enriched action trajectories. ART introduces two key innovations: a region semantic activation module and a text-constrained query mechanism; it further incorporates multi-level trajectory contrastive loss and spatiotemporal consistency modeling, while employing task-adaptive textual fine-tuning to optimize vision-language alignment. Evaluated on multiple mainstream benchmarks, ART significantly outperforms state-of-the-art methods, achieving consistent improvements in fine-grained discriminability, robustness to perturbations, and cross-domain generalization. By enabling interpretable, traceable, and region-aware modeling, ART establishes a novel paradigm for FGAR that bridges low-level motion dynamics with high-level semantic reasoning.

📝 Abstract
Fine-grained action recognition (FGAR) aims to identify subtle and distinctive differences among fine-grained action categories. However, current recognition methods often capture coarse-grained motion patterns but struggle to identify subtle details in local regions evolving over time. In this work, we introduce the Action-Region Tracking (ART) framework, a novel solution leveraging a query-response mechanism to discover and track the dynamics of distinctive local details, enabling effective distinction of similar actions. Specifically, we propose a region-specific semantic activation module that employs discriminative, text-constrained semantics as queries to capture the most action-related region responses in each video frame, facilitating interaction across the spatial and temporal dimensions of the corresponding video features. The captured region responses are organized into action tracklets, which characterize region-based action dynamics by linking related responses across video frames in a coherent sequence. The text-constrained queries encode nuanced semantic representations derived from textual descriptions of action labels, extracted by the language branches of Vision-Language Models (VLMs). To optimize the action tracklets, we design a multi-level tracklet contrastive constraint among region responses at the spatial and temporal levels, enabling effective discrimination within each frame and correlation between adjacent frames. Additionally, a task-specific fine-tuning mechanism refines the textual semantics such that the semantic representations encoded by VLMs are preserved while being adapted to task preferences. Comprehensive experiments on widely used action recognition benchmarks demonstrate the superiority of ART over previous state-of-the-art baselines.
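The query-response mechanism described in the abstract can be pictured as cross-attention: each text-constrained query attends over a frame's patch features, and the per-frame responses for a given query are linked across time into a tracklet. The sketch below is a minimal numpy illustration of that idea under assumed shapes (`P` patches, `D` feature dim, `Q` queries, `T` frames); all function names and dimensions are illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def region_responses(frame_feats, text_queries):
    """Cross-attention of text-constrained queries over one frame's patches.

    frame_feats:  (P, D) patch features for a single frame.
    text_queries: (Q, D) semantic queries (e.g. from a VLM text branch).
    Returns (Q, D): one region response per query.
    """
    attn = softmax(text_queries @ frame_feats.T / np.sqrt(frame_feats.shape[1]))
    return attn @ frame_feats

def build_tracklets(video_feats, text_queries):
    """Link each query's per-frame responses into an action tracklet.

    video_feats: (T, P, D) patch features for T frames.
    Returns (Q, T, D): one response sequence (tracklet) per query.
    """
    responses = np.stack([region_responses(f, text_queries) for f in video_feats])
    return responses.transpose(1, 0, 2)
```

A tracklet here is simply the time-ordered sequence of one query's region responses; the paper's actual module presumably uses learned projections and richer spatio-temporal interaction on top of this basic attention pattern.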
Problem

Research questions and friction points this paper is trying to address.

Identifying subtle differences between similar fine-grained action categories
Tracking evolving local region dynamics across video frames
Capturing discriminative spatial-temporal patterns for action recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Query-response mechanism tracks distinctive local action regions
Multi-level contrastive constraint optimizes spatial-temporal action tracklets
Text-constrained semantics from VLMs enhance region-specific activation
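The multi-level contrastive constraint in the second bullet can be sketched as an InfoNCE-style loss over tracklet responses: the same query's responses in adjacent frames act as positives (temporal correlation), while the other queries' responses in the same frame act as negatives (spatial discrimination). The code below is a simplified numpy sketch under those assumptions; the actual loss in the paper may weight or structure the levels differently.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss over cosine similarities for one anchor."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

def tracklet_contrastive_loss(tracklets, tau=0.1):
    """Multi-level contrastive constraint over tracklets of shape (Q, T, D).

    Temporal level: pull the same query's responses in adjacent frames together.
    Spatial level:  push different queries' responses within a frame apart
                    (they serve as the negatives for each anchor).
    """
    Q, T, _ = tracklets.shape
    loss, count = 0.0, 0
    for q in range(Q):
        for t in range(T - 1):
            anchor = tracklets[q, t]
            positive = tracklets[q, t + 1]                              # temporal
            negatives = [tracklets[p, t] for p in range(Q) if p != q]   # spatial
            loss += info_nce(anchor, positive, negatives, tau)
            count += 1
    return loss / count
```

Minimizing this loss makes each tracklet temporally coherent while keeping different region queries mutually discriminative within every frame.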
Baoli Sun
Dalian University of Technology
Fine-grained video action recognition
Yihan Wang
DUT-RU International School of Information Science & Engineering, Dalian University of Technology, China
Xinzhu Ma
Associate Professor, Beihang University
deep learning, computer vision, 3D scene understanding, AI4Science
Zhihui Wang
DUT-RU International School of Information Science & Engineering, Dalian University of Technology, China
Kun Lu
University of Alabama
Applied natural language processing, Large language models, Text mining
Zhiyong Wang
The University of Sydney, Australia