Efficient RGB-D scene understanding via multi-task adaptive learning and cross-dimensional feature guidance

📅 2025-07-01
🏛️ Knowledge-Based Systems
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenges of occlusion, ambiguous boundaries, and task-sample discrepancies that hinder adaptive attention in RGB-D scene understanding. To overcome these issues, the authors propose a unified multi-task model that jointly performs semantic segmentation, instance segmentation, orientation estimation, panoptic segmentation, and scene classification. Key innovations include an enhanced RGB-D fusion encoder, a normalized focal channel layer, a context-aware feature interaction mechanism, a non-bottleneck 1D convolutional structure, and a dynamic multi-task adaptive loss function. These components collectively enable effective cross-modal feature fusion and task-aware dynamic learning. Extensive experiments on NYUv2, SUN RGB-D, and Cityscapes demonstrate that the proposed model surpasses existing methods in both segmentation accuracy and inference speed.

📝 Abstract
Scene understanding plays a critical role in enabling intelligence and autonomy in robotic systems. Traditional approaches often face challenges, including occlusions, ambiguous boundaries, and the inability to adapt attention based on task-specific requirements and sample variations. To address these limitations, this paper presents an efficient RGB-D scene understanding model that performs a range of tasks, including semantic segmentation, instance segmentation, orientation estimation, panoptic segmentation, and scene classification. The proposed model incorporates an enhanced fusion encoder, which effectively leverages redundant information from both RGB and depth inputs. For semantic segmentation, we introduce normalized focus channel layers and a context feature interaction layer, designed to mitigate issues such as shallow feature misguidance and insufficient local-global feature representation. The instance segmentation task benefits from a non-bottleneck 1D structure, which achieves superior contour representation with fewer parameters. Additionally, we propose a multi-task adaptive loss function that dynamically adjusts the learning strategy for different tasks based on scene variations. Extensive experiments on the NYUv2, SUN RGB-D, and Cityscapes datasets demonstrate that our approach outperforms existing methods in both segmentation accuracy and processing speed.
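The "non-bottleneck 1D structure" named in the abstract follows the well-known factorization of a k×k convolution into a k×1 convolution followed by a 1×k one (popularized by ERFNet); the paper's exact block design is not reproduced here. A minimal sketch of where the parameter saving comes from, assuming C input and C output channels and per-output-channel biases:

```python
def conv_params(c_in, c_out, kh, kw, bias=True):
    """Parameter count of a single 2D convolution layer."""
    return c_out * (c_in * kh * kw + (1 if bias else 0))

def standard_3x3(c):
    # One full 3x3 convolution, C -> C channels.
    return conv_params(c, c, 3, 3)

def non_bottleneck_1d(c):
    # Factorized pair: 3x1 followed by 1x3, C -> C channels.
    return conv_params(c, c, 3, 1) + conv_params(c, c, 1, 3)

for c in (64, 128, 256):
    full, fact = standard_3x3(c), non_bottleneck_1d(c)
    print(f"C={c}: 3x3={full}, 3x1+1x3={fact}, ratio={fact / full:.3f}")
```

As the channel count grows, the ratio approaches 6/9 ≈ 0.67, i.e. roughly a one-third parameter reduction per block while keeping the same receptive field.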
Problem

Research questions and friction points this paper is trying to address.

RGB-D scene understanding
occlusions
ambiguous boundaries
task-specific attention
multi-task learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-task adaptive learning
cross-dimensional feature guidance
enhanced fusion encoder
non-bottleneck 1D structure
adaptive loss function
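The "adaptive loss function" contribution above dynamically re-weights per-task losses; the paper's exact scheme is not given on this page. A common formulation it may resemble is homoscedastic-uncertainty weighting (Kendall et al., 2018), where each task i carries a learnable log-variance s_i and the combined loss is Σ(exp(−s_i)·L_i + s_i). A hedged pure-Python sketch (task names and loss values are illustrative, not from the paper):

```python
import math

def adaptive_multitask_loss(task_losses, log_vars):
    """Uncertainty-weighted sum: exp(-s_i) * L_i + s_i per task.

    task_losses: per-task scalar losses
    log_vars:    per-task log-variances s_i (learnable in practice;
                 fixed numbers here for illustration)
    """
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))

# Illustrative values for the five tasks (semantic, instance,
# orientation, panoptic, scene classification) -- not real numbers.
losses = [1.2, 0.8, 0.5, 1.0, 0.3]
log_vars = [0.0] * 5  # s_i = 0 reduces to a plain unweighted sum
print(adaptive_multitask_loss(losses, log_vars))
```

A larger s_i down-weights its task's loss (exp(−s_i) shrinks) while the +s_i term penalizes ignoring the task entirely, which is one way to realize the task-aware dynamic learning described in the summary.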
Guodong Sun
INRIA
wireless communication, resource management, probability theory
Junjie Liu
School of Mechanical Engineering, Hubei University of Technology, Wuhan, 430068, China; Vehicle Measurement, Control and Safety Key Laboratory of Sichuan Province, Xihua University, Chengdu, 610039, China; Hubei Key Laboratory of Modern Manufacturing Quality Engineering, Hubei University of Technology, Wuhan, 430068, China
Gaoyang Zhang
School of Mechanical Engineering, Hubei University of Technology, Wuhan, 430068, China; Hubei Key Laboratory of Modern Manufacturing Quality Engineering, Hubei University of Technology, Wuhan, 430068, China
Bo Wu
Senior Researcher, Tencent AI Lab, Shenzhen, China
speech enhancement, acoustic modeling, microphone array signal processing
Yang Zhang
HBUT << CUHK << Tencent << NJU
computer vision, machine vision, pattern recognition, multi-task learning