🤖 AI Summary
Traditional optical flow methods rely on restrictive assumptions, such as brightness constancy and small motion, while deep learning approaches demand large-scale annotated datasets and incur high computational costs. The standard HSV-based flow visualization introduces nonlinear distortions during HSV-to-RGB conversion and is sensitive to noise, degrading downstream task performance. To address these limitations, we propose ReynoldsFlow, a training-free optical flow estimation framework grounded in the Reynolds transport theorem, sidestepping the classical brightness-constancy and small-motion assumptions. We further introduce ReynoldsFlow+, an alternative representation that avoids HSV-induced distortions. Evaluated on the UAVDB, Anti-UAV, and GolfDB benchmarks, networks trained with ReynoldsFlow+ achieve state-of-the-art performance, with improved robustness and efficiency on small-object detection, infrared target detection, and pose estimation, without requiring labeled flow data or iterative training.
📝 Abstract
Optical flow is a fundamental technique for motion estimation, widely applied in video stabilization, interpolation, and object tracking. Recent advances in artificial intelligence (AI) have enabled deep learning models to leverage optical flow as an important feature for motion analysis. However, traditional optical flow methods rely on restrictive assumptions, such as brightness constancy and slow-motion constraints, limiting their effectiveness in complex scenes. Deep learning-based approaches require extensive training on large domain-specific datasets, making them computationally demanding. Furthermore, optical flow is typically visualized in the HSV color space, which introduces nonlinear distortions when converted to RGB and is highly sensitive to noise, degrading the accuracy of the motion representation. These limitations inherently constrain the performance of downstream models and can hinder object tracking and motion analysis tasks. To address these challenges, we propose Reynolds flow, a novel training-free flow estimation method inspired by the Reynolds transport theorem, offering a principled approach to modeling complex motion dynamics. Beyond the conventional HSV-based visualization, denoted ReynoldsFlow, we introduce an alternative representation, ReynoldsFlow+, designed to improve flow visualization. We evaluate ReynoldsFlow and ReynoldsFlow+ on three video-based benchmarks: tiny object detection on UAVDB, infrared object detection on Anti-UAV, and pose estimation on GolfDB. Experimental results demonstrate that networks trained with ReynoldsFlow+ achieve state-of-the-art (SOTA) performance, exhibiting improved robustness and efficiency across all tasks.
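To make the visualization issue concrete, the following is a minimal sketch (generic illustrative code, not the paper's actual ReynoldsFlow+ mapping) contrasting the conventional HSV flow coloring, whose hue wheel and piecewise HSV-to-RGB conversion are nonlinear in the flow components, with a simple affine mapping of the flow field directly into RGB channels. All function names and the specific affine mapping here are assumptions for illustration only.

```python
import numpy as np

def hsv_flow_to_rgb(flow):
    """Conventional HSV flow visualization: direction -> hue, magnitude -> value.

    The HSV-to-RGB conversion below is piecewise over six hue sectors,
    so the resulting RGB image is a nonlinear function of (u, v).
    flow: array of shape (H, W, 2) holding (u, v) components.
    """
    u, v = flow[..., 0], flow[..., 1]
    mag = np.hypot(u, v)
    hue = (np.arctan2(v, u) + np.pi) / (2 * np.pi)  # angle mapped to [0, 1)
    val = mag / (mag.max() + 1e-8)                  # normalized magnitude
    # Minimal HSV->RGB conversion with saturation fixed at 1.
    h6 = hue * 6.0
    sector = np.floor(h6).astype(int) % 6
    f = h6 - np.floor(h6)
    p = np.zeros_like(val)
    q = val * (1.0 - f)
    t = val * f
    rgb = np.zeros(flow.shape[:2] + (3,))
    lut = [(val, t, p), (q, val, p), (p, val, t),
           (p, q, val), (t, p, val), (val, p, q)]
    for k, (r, g, b) in enumerate(lut):
        m = sector == k
        rgb[m] = np.stack([r[m], g[m], b[m]], axis=-1)
    return rgb

def linear_flow_to_rgb(flow):
    """Hypothetical linear alternative: each RGB channel is an affine function
    of the (normalized) flow components and magnitude, avoiding the hue wheel."""
    u, v = flow[..., 0], flow[..., 1]
    mag = np.hypot(u, v)
    s = mag.max() + 1e-8  # single global scale for normalization
    rgb = np.stack([0.5 + 0.5 * u / s,   # R encodes horizontal motion
                    0.5 + 0.5 * v / s,   # G encodes vertical motion
                    mag / s],            # B encodes speed
                   axis=-1)
    return np.clip(rgb, 0.0, 1.0)
```

In the HSV version, two nearby flow vectors straddling a hue-sector boundary map to RGB through different linear pieces, which is one source of the distortion the abstract refers to; the affine version keeps the RGB image a (clipped) linear function of the flow field at a fixed normalization scale.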