Explicit Motion Handling and Interactive Prompting for Video Camouflaged Object Detection

📅 2024-03-04
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing video camouflaged object detection (VCOD) methods exploit dynamic cues poorly, relying on noisy motion estimates or implicit motion modeling, which degrades performance in complex scenes. To address this, the paper proposes EMIP, an Explicit Motion handling and Interactive Prompting framework that integrates a frozen pre-trained optical flow foundation model into a two-stream architecture. Two learnable modules, the camouflaged feeder and the motion collector, enable bidirectional visual–motion prompting between the segmentation and flow streams. The prompt fed to the motion stream is learned with self-supervised optical flow supervision, and long-term historical information can be injected as an additional prompt for stronger temporal consistency. Extensive experiments show that EMIP achieves significant improvements over state-of-the-art methods on mainstream VCOD benchmarks. The source code is publicly available.

📝 Abstract
Camouflage poses challenges in distinguishing a static target, whereas any movement of the target can break this disguise. Existing video camouflaged object detection (VCOD) approaches take noisy motion estimation as input or model motion implicitly, restricting detection performance in complex dynamic scenes. In this paper, we propose a novel Explicit Motion handling and Interactive Prompting framework for VCOD, dubbed EMIP, which handles motion cues explicitly using a frozen pre-trained optical flow foundation model. EMIP is characterized by a two-stream architecture that simultaneously conducts camouflaged segmentation and optical flow estimation. Interactions across the dual streams are realized via interactive prompting, inspired by emerging visual prompt learning. Two learnable modules, i.e., the camouflaged feeder and the motion collector, are designed to incorporate segmentation-to-motion and motion-to-segmentation prompts, respectively, and enhance the outputs of both streams. The prompt fed to the motion stream is learned by supervising optical flow in a self-supervised manner. Furthermore, we show that long-term historical information can also be incorporated into EMIP as a prompt, yielding more robust results with better temporal consistency. Experimental results demonstrate that our EMIP achieves new state-of-the-art records on popular VCOD benchmarks. Our code is made publicly available at https://github.com/zhangxin06/EMIP.
Problem

Research questions and friction points this paper is trying to address.

Can explicit motion handling improve video camouflaged object detection?
Does interactive prompting enhance both segmentation and optical flow estimation?
Can long-term historical information improve temporal consistency in detection?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicit motion handling with optical flow
Two-stream architecture for segmentation and flow
Interactive prompting between segmentation and motion
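The bidirectional prompting between the two streams can be sketched roughly as follows. Module names (camouflaged feeder, motion collector) come from the abstract, but everything else — the linear projections, feature dimensions, and additive fusion — is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def prompt_module(x, w):
    """Tiny projection standing in for a learnable prompt module (assumed form)."""
    return np.tanh(x @ w)

# Hypothetical feature dimensions; EMIP's real shapes are not specified here.
d_seg, d_flow, n_tokens = 8, 8, 4
seg_feat = rng.normal(size=(n_tokens, d_seg))    # segmentation-stream features
flow_feat = rng.normal(size=(n_tokens, d_flow))  # motion-stream features

# "Camouflaged feeder": segmentation-to-motion prompt
w_feeder = rng.normal(size=(d_seg, d_flow)) * 0.1
motion_prompt = prompt_module(seg_feat, w_feeder)

# "Motion collector": motion-to-segmentation prompt
w_collector = rng.normal(size=(d_flow, d_seg)) * 0.1
seg_prompt = prompt_module(flow_feat, w_collector)

# Each stream is enhanced by the prompt collected from the other stream
flow_feat_enhanced = flow_feat + motion_prompt
seg_feat_enhanced = seg_feat + seg_prompt

print(seg_feat_enhanced.shape, flow_feat_enhanced.shape)
```

In the paper's framing, only such prompt modules are learned while the pre-trained optical flow model stays frozen; the sketch above only illustrates the cross-stream information flow.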
Xin Zhang
National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, China.
Tao Xiao
Kyushu University
Software Engineering
Gepeng Ji
Research School of Engineering, Australian National University, Australia.
Xuan Wu
College of Computer Science, Sichuan University, China.
Keren Fu
Sichuan University, College of Computer Science
computer vision, image processing, machine learning
Qijun Zhao
Professor of Computer Science, Sichuan University
Biometrics, 3D Vision, Object Detection and Recognition, Face Recognition, Fingerprint Recognition