Hybrid Vision Servoing with Deep Alignment and GRU-Based Occlusion Recovery

📅 2025-10-29
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address tracking failure in visual servoing under partial or complete target occlusion, this paper proposes a hybrid tracking framework integrating deep feature alignment with sequential prediction. Methodologically, it jointly leverages early-layer VGG deep features, an enhanced deep Lucas–Kanade optical flow estimator, and a lightweight residual regressor for high-accuracy pose estimation. When tracking confidence drops, a GRU-based predictor seamlessly takes over, forecasting translational, rotational, and scale transformations from historical motion sequences to ensure continuous control signal generation. Evaluated on handheld videos with up to 90% occlusion, the method achieves sub-2-pixel tracking error and sustains real-time closed-loop control at 30 Hz. It significantly enhances robustness, accuracy, and responsiveness under severe occlusion, establishing a novel paradigm for robotic visual servoing in complex, dynamic environments.
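The confidence-gated handover described above can be sketched as a minimal control loop. This is an illustrative sketch only: the gating threshold (0.6) is assumed, not taken from the paper, and a constant-velocity extrapolator stands in for the paper's GRU predictor so the block stays self-contained.

```python
import numpy as np

CONF_THRESHOLD = 0.6  # assumed gating value; the paper does not state its threshold


def constant_velocity_predict(history):
    """Fallback pose prediction from recent motion history.

    Stand-in for the paper's GRU predictor: it extrapolates the last
    observed pose delta (a constant-velocity assumption).
    """
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])


def servo_step(history, visual_pose, confidence):
    """One cycle of the hybrid loop: trust the visual estimate when
    confidence is high, otherwise hand over to the sequential predictor
    so the control signal never stops."""
    if confidence >= CONF_THRESHOLD:
        pose = visual_pose  # output of the deep-LK + residual-regressor stack
    else:
        pose = constant_velocity_predict(history)  # occlusion recovery path
    history.append(pose)
    return pose
```

Run at the servo rate (30 Hz in the paper), this keeps emitting pose updates even when the target is fully occluded for several frames.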

πŸ“ Abstract
Vision-based control systems, such as image-based visual servoing (IBVS), have been extensively explored for precise robot manipulation. A persistent challenge, however, is maintaining robust target tracking under partial or full occlusions. Classical methods like Lucas-Kanade (LK) offer lightweight tracking but are fragile to occlusion and drift, while deep learning-based approaches often require continuous visibility and intensive computation. To address these gaps, we propose a hybrid visual tracking framework that bridges advanced perception with real-time servo control. First, a fast global template matcher constrains the pose search region; next, a deep-feature Lucas-Kanade module operating on early VGG layers refines alignment to sub-pixel accuracy (<2 px); then, a lightweight residual regressor corrects local misalignments caused by texture degradation or partial occlusion. When visual confidence falls below a threshold, a GRU-based predictor seamlessly extrapolates pose updates from recent motion history. Crucially, the pipeline's final outputs (translation, rotation, and scale deltas) are packaged as direct control signals for 30 Hz image-based servo loops. Evaluated on handheld video sequences with up to 90% occlusion, our system sustains under 2 px tracking error, demonstrating the robustness and low-latency precision essential for reliable real-world robot vision applications.
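Of the pipeline's stages, the first (global template matching) can be sketched without any learned components. The following is a minimal brute-force sum-of-squared-differences matcher over grayscale numpy arrays; the deep-feature LK refinement and residual regressor that follow it in the paper require a pretrained VGG and are not reproduced here.

```python
import numpy as np


def coarse_match(image, template):
    """Stage 1 of the pipeline: exhaustive SSD template matching that
    localizes the target coarsely, constraining the search region handed
    to the sub-pixel (deep-LK) refinement stage.

    Returns the (row, col) of the best-matching window's top-left corner.
    """
    th, tw = template.shape
    best_cost, best_yx = np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            cost = float(np.sum((image[y:y + th, x:x + tw] - template) ** 2))
            if cost < best_cost:
                best_cost, best_yx = cost, (y, x)
    return best_yx
```

In practice this exhaustive scan would be replaced by an FFT-based or pyramid matcher for speed; the point is only that a cheap global stage bounds the region the expensive alignment stage must search.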
Problem

Research questions and friction points this paper is trying to address.

Maintaining robust visual tracking during partial or full occlusions
Bridging advanced perception with real-time servo control systems
Achieving sub-pixel tracking accuracy under texture degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid tracking with global matcher and deep LK
Residual regressor corrects occlusion misalignments
GRU predictor extrapolates pose during visual loss
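The GRU-based pose predictor named above can be sketched as a single GRU cell with a linear readout over a history of pose deltas. The weights below are random and untrained, and the 4-dimensional delta layout [dx, dy, dtheta, dscale] and hidden size 16 are assumptions for illustration; the block shows the data flow that keeps control signals coming during occlusion, not the paper's learned model.

```python
import numpy as np


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


class GRUPosePredictor:
    """Minimal GRU cell plus linear readout mapping a history of pose
    deltas [dx, dy, dtheta, dscale] to the next predicted delta."""

    def __init__(self, in_dim=4, hid=16, seed=0):
        rng = np.random.default_rng(seed)
        w = lambda *shape: 0.1 * rng.standard_normal(shape)
        # Update gate, reset gate, and candidate-state parameters.
        self.Wz, self.Uz, self.bz = w(hid, in_dim), w(hid, hid), np.zeros(hid)
        self.Wr, self.Ur, self.br = w(hid, in_dim), w(hid, hid), np.zeros(hid)
        self.Wh, self.Uh, self.bh = w(hid, in_dim), w(hid, hid), np.zeros(hid)
        # Linear readout from hidden state to the next pose delta.
        self.Wo, self.bo = w(in_dim, hid), np.zeros(in_dim)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)        # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)        # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
        return (1.0 - z) * h + z * h_cand

    def predict(self, deltas):
        """Consume the motion history and emit the next pose delta."""
        h = np.zeros(self.bz.shape[0])
        for x in deltas:
            h = self.step(np.asarray(x, dtype=float), h)
        return self.Wo @ h + self.bo
```

Trained on real motion sequences, a model of this shape can keep emitting translation, rotation, and scale updates while the visual pipeline reports low confidence, which is the role the paper assigns to its GRU predictor.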
🔎 Similar Papers
No similar papers found.