MAMo: Leveraging Memory and Attention for Monocular Video Depth Estimation

📅 2023-07-26
🏛️ IEEE International Conference on Computer Vision
📈 Citations: 11
Influential: 1
🤖 AI Summary
To address insufficient temporal modeling in monocular video depth estimation, this paper proposes a memory-augmented video depth estimation framework. The method introduces (1) learned visual and displacement memory tokens, continuously updated as the model streams through the video, and (2) a two-stage attention mechanism: intra-memory self-attention to model spatiotemporal dependencies within the memory, and cross-attention to fuse current-frame features with the memory representations. Crucially, the framework implicitly leverages video continuity without requiring explicit optical flow or motion priors. Evaluated on the KITTI, NYU-Depth V2, and DDAD benchmarks, it achieves state-of-the-art accuracy with significantly lower inference latency than existing cost-volume-based video depth methods, demonstrating a favorable trade-off between performance and efficiency.
📝 Abstract
We propose MAMo, a novel memory and attention framework for monocular video depth estimation. MAMo can augment any single-image depth estimation network into a video depth estimation model, enabling it to take advantage of temporal information to predict more accurate depth. In MAMo, we augment the model with a memory that aids depth prediction as the model streams through the video. Specifically, the memory stores learned visual and displacement tokens from previous time instances. This allows the depth network to cross-reference relevant features from the past when predicting depth on the current frame. We introduce a novel scheme to continuously update the memory, optimizing it to keep tokens that correspond with both past and present visual information. We adopt an attention-based approach to process memory features: we first learn the spatiotemporal relations among the visual and displacement memory tokens using a self-attention module. The output features of self-attention are then aggregated with the current visual features through cross-attention. The cross-attended features are finally given to a decoder to predict depth on the current frame. Through extensive experiments on several benchmarks, including KITTI, NYU-Depth V2, and DDAD, we show that MAMo consistently improves monocular depth estimation networks and sets new state-of-the-art (SOTA) accuracy. Notably, MAMo video depth estimation provides higher accuracy with lower latency compared to SOTA cost-volume-based video depth models.
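The two-stage attention pipeline the abstract describes (self-attention within the memory, then cross-attention between current-frame features and the attended memory) can be sketched in miniature. Everything below is a simplifying assumption for illustration: the toy token values, the tiny sizes, and the omission of the learned query/key/value projection matrices that a real transformer module (and presumably MAMo) would use.

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

def softmax(row):
    m = max(row)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    scores = matmul(Q, transpose(K))
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, V)

# Hypothetical toy setup: 6 memory tokens (standing in for the stored
# visual + displacement tokens), 3 current-frame tokens, feature dim 4.
d = 4
memory = [[0.1 * i + 0.01 * j for j in range(d)] for i in range(6)]
current = [[0.05 * i + 0.02 * j for j in range(d)] for i in range(3)]

# Stage 1: intra-memory self-attention over the stored tokens.
mem_sa = attention(memory, memory, memory)

# Stage 2: cross-attention — current-frame features act as queries against
# the self-attended memory; the fused features would feed the depth decoder.
fused = attention(current, mem_sa, mem_sa)

print(len(fused), len(fused[0]))  # → 3 4
```

Because each attention output row is a convex combination of the value rows, the fused features stay within the range of the memory features, which is the sense in which the current frame "cross-references" the past rather than overwriting it.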
Problem

Research questions and friction points this paper is trying to address.

Video Depth Estimation
Model Accuracy
Temporal Continuity
Innovation

Methods, ideas, or system contributions that make the work stand out.

MAMo
Memory-Attention Mechanism
Depth Prediction