Integrating Deep RL and Bayesian Inference for ObjectNav in Mobile Robotics

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of partial observability, perceptual uncertainty, and the exploration–navigation trade-off in indoor target search for mobile robots. The authors propose an approach that integrates online Bayesian belief updating with end-to-end deep reinforcement learning. By maintaining a spatial belief map that continuously infers the probability distribution over potential target locations, the method guides a policy network toward efficient navigation decisions. The authors present this framework as the first tight coupling of interpretable Bayesian inference with deep reinforcement learning in this context. Experimental results on the Habitat 3.0 simulation platform show improved task success rates and reduced search costs across two indoor environments, supporting the effectiveness of the proposed hybrid architecture in partially observable settings.
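The belief-update step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the cell representation, the detector rates `P_DETECT`/`P_FALSE`, and the function name are all assumptions; the paper's calibrated sensor model and map resolution are not specified here.

```python
# Hypothetical sketch: per-cell Bayesian update of a spatial belief map
# from a noisy object detector. All names and rates are illustrative.

P_DETECT = 0.9   # assumed true-positive rate of the calibrated detector
P_FALSE = 0.05   # assumed false-positive rate

def update_belief(belief, observed_cells, detections):
    """Bayes-update P(target in cell) for every cell of the map.

    belief: dict mapping cell -> prior probability the target is there
    observed_cells: set of cells inside the current field of view
    detections: subset of observed_cells where the detector fired
    """
    posterior = {}
    for cell, prior in belief.items():
        if cell in observed_cells:
            # Likelihood of this observation given the target is / is not here.
            if cell in detections:
                like_t, like_f = P_DETECT, P_FALSE
            else:
                like_t, like_f = 1.0 - P_DETECT, 1.0 - P_FALSE
            num = like_t * prior
            posterior[cell] = num / (num + like_f * (1.0 - prior))
        else:
            posterior[cell] = prior  # unobserved cells keep their prior
    # Renormalize so the map remains a distribution over cells.
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items()}
```

A detection in a viewed cell concentrates probability mass there, while repeated negative views drain it, which is the behavior the summary attributes to the belief map.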

📝 Abstract
Autonomous object search is challenging for mobile robots operating in indoor environments due to partial observability, perceptual uncertainty, and the need to trade off exploration and navigation efficiency. Classical probabilistic approaches explicitly represent uncertainty but typically rely on handcrafted action-selection heuristics, while deep reinforcement learning enables adaptive policies but often suffers from slow convergence and limited interpretability. This paper proposes a hybrid object-search framework that integrates Bayesian inference with deep reinforcement learning. The method maintains a spatial belief map over target locations, updated online through Bayesian inference from calibrated object detections, and trains a reinforcement learning policy to select navigation actions directly from this probabilistic representation. The approach is evaluated in realistic indoor simulation using Habitat 3.0 and compared against baseline strategies developed for this task. Across two indoor environments, the proposed method improves success rate while reducing search effort. Overall, the results support the value of combining Bayesian belief estimation with learned action selection to achieve more efficient and reliable object-search behavior under partial observability.
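To make the "select navigation actions directly from this probabilistic representation" step concrete, here is a greedy stand-in for the learned policy: it scores each discrete move by the belief mass at the resulting cell. This is an assumed illustration only; the paper trains a deep RL policy on the belief map rather than using this heuristic, and the action set and cell grid are hypothetical.

```python
# Hypothetical greedy stand-in for the learned policy: pick the move
# whose destination cell carries the most belief mass. The paper's
# actual action selection is a trained deep RL policy, not this rule.

def greedy_action(belief, position, actions):
    """Choose the action leading toward the highest-belief neighbor.

    belief: dict mapping (x, y) cell -> probability
    position: current (x, y) cell of the robot
    actions: dict mapping action name -> (dx, dy) displacement
    """
    def score(move):
        dx, dy = move
        destination = (position[0] + dx, position[1] + dy)
        return belief.get(destination, 0.0)  # off-map cells score zero
    return max(actions, key=lambda name: score(actions[name]))
```

A learned policy can improve on this greedy rule by trading off immediate belief mass against longer exploration routes, which is the exploration–navigation trade-off the abstract highlights.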
Problem

Research questions and friction points this paper is trying to address.

ObjectNav
partial observability
perceptual uncertainty
mobile robotics
autonomous object search
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian inference
deep reinforcement learning
spatial belief map
object navigation
partial observability