Instruct2See: Learning to Remove Any Obstructions Across Distributions

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world images are frequently degraded by arbitrary, physically realistic occlusions—such as raindrops or fences—posing significant challenges for existing methods, which are typically constrained to specific occlusion types and heavily reliant on task-specific training data, thus exhibiting poor generalization. To address this, we propose a zero-shot, cross-distribution occlusion removal framework that reformulates occlusion removal as a joint soft- and hard-mask recovery problem. Our method introduces a cross-modal cross-attention mechanism and a tunable mask adapter, enabling dynamic occlusion understanding and real-time mask refinement guided by multimodal prompts (text + vision). Coupled with prompt-driven generative reconstruction, it achieves high-fidelity restoration without requiring any training samples containing the target occlusion type. Extensive experiments demonstrate strong generalization both in-distribution and out-of-distribution, significantly enhancing robustness and practicality for visual restoration in complex real-world scenarios.

📝 Abstract
Images are often obstructed by various obstacles due to capture limitations, hindering the observation of objects of interest. Most existing methods address occlusions from specific elements like fences or raindrops, but are constrained by the wide range of real-world obstructions, making comprehensive data collection impractical. To overcome these challenges, we propose Instruct2See, a novel zero-shot framework capable of handling both seen and unseen obstacles. The core idea of our approach is to unify obstruction removal by treating it as a soft-hard mask restoration problem, where any obstruction can be represented using multi-modal prompts, such as visual semantics and textual instructions, processed through a cross-attention unit to enhance contextual understanding and improve mode control. Additionally, a tunable mask adapter allows for dynamic soft masking, enabling real-time adjustment of inaccurate masks. Extensive experiments on both in-distribution and out-of-distribution obstacles show that Instruct2See consistently achieves strong performance and generalization in obstruction removal, regardless of whether the obstacles were present during the training phase. Code and dataset are available at https://jhscut.github.io/Instruct2See.
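The abstract's core mechanism is a cross-attention unit in which image features attend to a unified multi-modal prompt (visual semantics plus textual instructions). The paper's actual architecture is not reproduced here; the following is a minimal numpy sketch of that idea, with all dimensions, token counts, and names purely illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(image_feats, prompt_feats):
    """Image tokens (queries) attend to multi-modal prompt embeddings
    (keys/values), injecting obstruction semantics into every spatial
    location. Shapes: image_feats (N, d), prompt_feats (P, d)."""
    d = image_feats.shape[-1]
    scores = image_feats @ prompt_feats.T / np.sqrt(d)   # (N, P)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ prompt_feats                        # (N, d)

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 32))         # 64 spatial tokens, dim 32 (illustrative)
text = rng.normal(size=(8, 32))         # text-instruction embeddings (hypothetical)
vis = rng.normal(size=(4, 32))          # visual-semantic embeddings (hypothetical)
prompts = np.concatenate([text, vis])   # unified multi-modal prompt
fused = cross_attention(img, prompts)
print(fused.shape)  # (64, 32)
```

Because the attention weights are a convex combination over prompt tokens, each fused feature stays within the span of the prompt embeddings; in the paper this fusion is what lets one network represent arbitrary obstruction types through the prompt alone.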
Problem

Research questions and friction points this paper is trying to address.

Removing diverse obstructions from images across distributions
Handling unseen obstacles via zero-shot multi-modal prompts
Dynamic soft-hard mask restoration for accurate obstruction removal
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot framework for obstruction removal
Soft-hard mask restoration with multi-modal prompts
Tunable mask adapter for dynamic adjustments
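The tunable mask adapter turns a possibly inaccurate binary occlusion mask into a soft mask that can be adjusted in real time. The paper's adapter is learned; as a rough intuition only, here is a hand-rolled numpy sketch where a `steps` knob widens the soft transition band and a `bias` knob shifts coverage to compensate for an under- or over-segmented initial mask (both parameters are my own illustrative stand-ins).

```python
import numpy as np

def blur(m):
    """One step of 4-neighbour averaging (a cheap box blur)."""
    p = np.pad(m, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0

def tunable_soft_mask(hard_mask, steps=2, bias=0.0):
    """Soften a binary occlusion mask: `steps` controls how far the
    soft boundary spreads, `bias` raises/lowers overall coverage."""
    m = hard_mask.astype(float)
    for _ in range(steps):
        m = blur(m)
    return np.clip(m + bias, 0.0, 1.0)

hard = np.zeros((8, 8))
hard[3:5, 3:5] = 1.0                      # toy hard mask over the obstruction
soft = tunable_soft_mask(hard, steps=2, bias=0.05)
print(soft.shape)
```

The soft values near the boundary let the downstream generative reconstruction blend restored and original content instead of committing to a hard cut, which is the practical payoff of soft-hard mask restoration.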
Junhang Li, School of Computer Science and Engineering, South China University of Technology
Yu Guo, School of Computing and Information Systems, Singapore Management University
Chuhua Xian, South China University of Technology (Computer Graphics)
Shengfeng He, Singapore Management University (Visual Computing, Generative Models, Computer Vision, Computational Photography, Computer Graphics)