🤖 AI Summary
This work addresses the limitations of existing online video stabilization methods, which often rely on paired annotated data, are difficult to deploy on resource-constrained devices, and generalize poorly in challenging scenarios such as non-visible-spectrum imaging or drone-based night vision. To overcome these issues, the authors propose an unsupervised online video stabilization framework built on the classical three-stage pipeline and enhanced with a multi-threaded buffering mechanism. The approach achieves efficient real-time stabilization without requiring lookahead frames or labeled data. By integrating classical motion priors with a lightweight architecture, it substantially improves both controllability and computational efficiency. The authors further introduce UAV-Test, a multimodal drone video dataset, to evaluate generalization performance. Experiments demonstrate that the method outperforms state-of-the-art online approaches in both quantitative metrics and visual quality, approaching the performance of offline methods.
📝 Abstract
We propose a new unsupervised framework for online video stabilization. Unlike deep-learning methods that require paired stable/unstable training data, our approach instantiates the classical three-stage stabilization pipeline and incorporates a multi-threaded buffering mechanism. This design addresses three long-standing challenges of end-to-end learning: limited data, poor controllability, and inefficiency on resource-constrained hardware. Existing benchmarks focus mainly on forward-facing handheld videos captured in visible light, which limits the applicability of stabilization in domains such as nighttime UAV remote sensing. To fill this gap, we introduce UAV-Test, a new multimodal UAV aerial video dataset. Experiments show that our method consistently outperforms state-of-the-art online stabilizers in both quantitative metrics and visual quality, while achieving performance comparable to offline methods.
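The three-stage pipeline behind a multi-threaded buffer can be pictured as a producer/consumer loop: a capture thread fills a bounded frame buffer while a stabilization thread drains it, processing each frame with only past information. Below is a minimal sketch under stated assumptions: the abstract specifies only the three-stage structure and the buffering, so the exponential path smoother, the random-jitter "motion estimate", the integer-shift "warp", and all names and parameters here are illustrative placeholders, not the paper's actual method.

```python
import queue
import threading

import numpy as np

BUFFER_SIZE = 8   # bounded buffer decoupling capture from stabilization
SENTINEL = None   # marks end of the stream

frame_buffer = queue.Queue(maxsize=BUFFER_SIZE)

def capture(num_frames=100, h=64, w=64, seed=0):
    """Producer thread: pushes incoming frames into the shared buffer."""
    rng = np.random.default_rng(seed)
    for _ in range(num_frames):
        frame_buffer.put(rng.random((h, w)))  # stand-in for camera frames
    frame_buffer.put(SENTINEL)

def stabilize(alpha=0.9, seed=1):
    """Consumer thread: causal three-stage pipeline (no lookahead frames)."""
    rng = np.random.default_rng(seed)
    raw_path = np.zeros(2)     # accumulated camera trajectory (dy, dx)
    smooth_path = np.zeros(2)  # online-smoothed trajectory
    while True:
        frame = frame_buffer.get()
        if frame is SENTINEL:
            break
        # Stage 1: motion estimation. Random jitter stands in for a real
        # feature- or flow-based inter-frame motion estimate.
        motion = rng.normal(0.0, 2.0, size=2)
        raw_path += motion
        # Stage 2: path smoothing, using only past information so the
        # pipeline stays online.
        smooth_path = alpha * smooth_path + (1.0 - alpha) * raw_path
        # Stage 3: warping. An integer shift stands in for a real warp
        # (e.g., a homography or mesh-based transform).
        dy, dx = np.round(smooth_path - raw_path).astype(int)
        stabilized = np.roll(frame, shift=(dy, dx), axis=(0, 1))
        # ... hand `stabilized` to the display/encoder in real time ...

producer = threading.Thread(target=capture)
consumer = threading.Thread(target=stabilize)
producer.start(); consumer.start()
producer.join(); consumer.join()
```

The bounded queue is the key design choice in this sketch: it lets capture and stabilization run at independent rates, and if the stabilizer falls behind, `put()` blocks the producer instead of growing memory unboundedly, which is what makes the scheme viable on resource-constrained devices.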