SED-MVS: Segmentation-Driven and Edge-Aligned Deformation Multi-View Stereo with Depth Restoration and Occlusion Constraint

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address geometric instability in multi-view stereo (MVS) reconstruction—particularly deformation artifacts and edge discontinuities arising from textureless regions and occlusions—this paper proposes SED-MVS, a segmentation-driven, edge-aligned deformable patch modeling framework. Its key contributions are: (1) the first integration of SAM2-guided panoptic segmentation with multi-trajectory diffusion alignment to enhance cross-view geometric consistency; (2) robust depth initialization via joint exploitation of LoFTR sparse feature correspondences and DepthAnything V2 monocular depth priors; and (3) an instance-level occlusion-aware dual-edge constraint mechanism that explicitly models object boundaries and occlusion structures. Evaluated on four major benchmarks—ETH3D, Tanks & Temples, BlendedMVS, and Strecha—SED-MVS achieves state-of-the-art performance, significantly improving reconstruction accuracy, robustness, and generalization in textureless and occluded regions.

📝 Abstract
Recently, patch-deformation methods have proven highly effective in multi-view stereo, owing to deformable and expandable patches that help reconstruct textureless areas. However, such methods primarily emphasize broadening the receptive field in textureless areas while neglecting the deformation instability caused by easily overlooked edge-skipping, potentially leading to matching distortions. To address this, we propose SED-MVS, which adopts panoptic segmentation and a multi-trajectory diffusion strategy for segmentation-driven and edge-aligned patch deformation. Specifically, to prevent unanticipated edge-skipping, we first employ SAM2 for panoptic segmentation as depth-edge guidance for patch deformation, followed by a multi-trajectory diffusion strategy that ensures patches are comprehensively aligned with depth edges. Moreover, to avoid the potential inaccuracy of random initialization, we combine sparse points from LoFTR with the monocular depth map from DepthAnything V2 to restore a reliable and realistic depth map for initialization and supervised guidance. Finally, we integrate the segmentation image with the monocular depth map to exploit inter-instance occlusion relationships, which we then treat as an occlusion map to implement two distinct edge constraints, thereby facilitating occlusion-aware patch deformation. Extensive results on the ETH3D, Tanks & Temples, BlendedMVS, and Strecha datasets validate the state-of-the-art performance and robust generalization capability of our proposed method.
Problem

Research questions and friction points this paper is trying to address.

Addresses deformation instability in multi-view stereo reconstruction.
Prevents edge-skipping using panoptic segmentation and multi-trajectory diffusion.
Restores depth maps with sparse points and monocular depth for initialization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Panoptic segmentation guides patch deformation.
Multi-trajectory diffusion aligns patches with edges.
Combines sparse points and monocular depth for initialization.
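The initialization idea above (anchoring a monocular depth prior to sparse matched points) is commonly realized as a least-squares scale-and-shift fit; the sketch below illustrates that generic approach with NumPy, and is an assumption for illustration, not the paper's exact formulation (function and variable names are hypothetical).

```python
import numpy as np

def align_monocular_depth(mono_depth, sparse_uv, sparse_depth):
    """Fit scale s and shift t so that s * mono_depth + t best matches
    sparse metric depths at the given pixel locations (least squares).

    mono_depth  : (H, W) relative depth map, e.g. from a monocular network
    sparse_uv   : (N, 2) integer pixel coordinates (x, y) of sparse matches
    sparse_depth: (N,) metric depths at those pixels, e.g. triangulated
    """
    # Sample the monocular depth at the sparse point locations.
    z = mono_depth[sparse_uv[:, 1], sparse_uv[:, 0]]
    # Solve min_{s,t} || s*z + t - sparse_depth ||^2 in closed form.
    A = np.stack([z, np.ones_like(z)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, sparse_depth, rcond=None)
    # Apply the recovered scale/shift to the whole map for initialization.
    return s * mono_depth + t
```

With clean sparse depths this recovers the global scale and offset exactly; in practice a robust variant (e.g. RANSAC over the sparse matches) guards against outlier correspondences.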
👥 Authors
Zhenlong Yuan
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Zhidong Yang
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Yujun Cai
Lecturer (Assistant Professor), The University of Queensland
Kuangxin Wu
Information Technology Department, Hunan Police Academy, Changsha 410100, China
Mufan Liu
Cooperative MediaNet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China
Dapeng Zhang
DSLAB, School of Information Science & Engineering, Lanzhou University, Lanzhou 730000, China
Hao Jiang
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Zhaoxin Li
Georgia Institute of Technology
Zhaoqi Wang
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China