Permutation-Aware Activity Segmentation via Unsupervised Frame-to-Segment Alignment

📅 2023-05-31
🏛️ IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
📈 Citations: 19
Influential: 4
🤖 AI Summary
In unsupervised temporal action segmentation, prior methods rely solely on frame-level features and neglect segment-level semantic modeling. To address this, the paper proposes a joint frame-segment modeling framework: a Transformer encoder-decoder jointly predicts frame-wise action labels and generates video action transcripts; a permutation-aware frame-to-segment alignment module matches the two levels; and temporal optimal transport (OT) constructs pseudo-labels for end-to-end unsupervised training. Core contributions: (1) a segment-level, semantics-guided paradigm for unsupervised action segmentation; and (2) an OT-based frame-segment alignment and pseudo-label generation mechanism. Experiments on four benchmarks (50 Salads, YouTube Instructions, Breakfast, and Desktop Assembly) show performance comparable to or better than prior unsupervised approaches.
📝 Abstract
This paper presents an unsupervised transformer-based framework for temporal activity segmentation which leverages not only frame-level cues but also segment-level cues. This is in contrast with previous methods which often rely on frame-level information only. Our approach begins with a frame-level prediction module which estimates frame-wise action classes via a transformer encoder. The frame-level prediction module is trained in an unsupervised manner via temporal optimal transport. To exploit segment-level information, we utilize a segment-level prediction module and a frame-to-segment alignment module. The former includes a transformer decoder for estimating video transcripts, while the latter matches frame-level features with segment-level features, yielding permutation-aware segmentation results. Moreover, inspired by temporal optimal transport, we introduce simple-yet-effective pseudo labels for unsupervised training of the above modules. Our experiments on four public datasets, i.e., 50 Salads, YouTube Instructions, Breakfast, and Desktop Assembly show that our approach achieves comparable or better performance than previous methods in unsupervised activity segmentation.
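The optimal-transport step in the abstract can be illustrated with a minimal Sinkhorn sketch. This is not the paper's exact formulation (the authors' cost additionally encodes a temporal prior, and the prototypes are learned); here the cost matrix, uniform marginals, and hyperparameters are assumptions for illustration only.

```python
import numpy as np

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Entropy-regularized OT between uniform frame and action marginals.

    cost: (T, K) dissimilarity between T frame embeddings and K action
    prototypes (hypothetical; the paper's cost also includes temporal terms).
    Returns a (T, K) soft transport plan usable as frame-level pseudo-labels.
    """
    T, K = cost.shape
    P = np.exp(-cost / eps)          # Gibbs kernel
    r = np.full(T, 1.0 / T)          # uniform marginal over frames
    c = np.full(K, 1.0 / K)          # uniform marginal over actions
    for _ in range(n_iters):
        P *= (r / P.sum(axis=1))[:, None]   # match row (frame) marginal
        P *= (c / P.sum(axis=0))[None, :]   # match column (action) marginal
    return P

# Toy example: 6 frames, 3 actions
rng = np.random.default_rng(0)
cost = rng.random((6, 3))
plan = sinkhorn(cost)
pseudo = plan.argmax(axis=1)         # hard frame-level pseudo-labels
```

The balanced column marginal encourages each action to cover roughly an equal share of frames, which is what lets the plan serve as a self-supervised training target without any labels.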
Problem

Research questions and friction points this paper is trying to address.

Unsupervised temporal activity segmentation using frame and segment cues
Aligns frame-level features with segment-level features for permutation-aware results
Introduces pseudo labels via temporal optimal transport for unsupervised training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised transformer framework for activity segmentation
Leverages both frame-level and segment-level cues
Uses temporal optimal transport for pseudo-label training
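The frame-to-segment alignment idea above can be sketched as a nearest-segment assignment: each frame adopts the action of its most similar decoded segment, so the output order follows the predicted transcript rather than a fixed action order. The function name, cosine-similarity choice, and toy inputs are assumptions; the paper's module is learned jointly with the rest of the network.

```python
import numpy as np

def align_frames_to_segments(frame_feats, seg_feats, transcript):
    """Hypothetical sketch of permutation-aware frame-to-segment alignment.

    frame_feats: (T, D) frame embeddings from the encoder.
    seg_feats:   (S, D) segment embeddings from the decoder, in transcript
                 order; transcript is the list of S predicted action ids.
    """
    # cosine similarity between every frame and every segment
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    s = seg_feats / np.linalg.norm(seg_feats, axis=1, keepdims=True)
    sim = f @ s.T                            # (T, S)
    seg_idx = sim.argmax(axis=1)             # best-matching segment per frame
    return np.asarray(transcript)[seg_idx]   # map segments to action labels

# Toy example: 4 frames, 3 segments with transcript [5, 7, 9]
frame_feats = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [0.9, 0.1, 0]])
seg_feats = np.eye(3)
labels = align_frames_to_segments(frame_feats, seg_feats, [5, 7, 9])
```

Because the assignment is driven by the transcript, the same action set can appear in a different order in different videos, which is the permutation-aware property the paper emphasizes.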
Quoc-Huy Tran
Retrocausal, Inc.
Video Understanding · Action Recognition · 3D Perception · Autonomous Driving

A. Mehmood
Retrocausal, Inc., Redmond, WA

Muhammad Ahmed
Retrocausal, Inc., Redmond, WA

Muhammad Naufil
Retrocausal, Inc., Redmond, WA

Ana Zafar
Retrocausal, Inc., Redmond, WA

Andrey Konin
Chief Architect, Retrocausal, Inc.
Computer Vision · Machine Learning

M. Zia
Retrocausal, Inc., Redmond, WA