🤖 AI Summary
Non-grasping interactive navigation, such as object pushing, for mobile robots in unstructured environments lacks reproducible, comparable benchmarks. Method: This work introduces the first comprehensive simulation benchmark covering four distinct tasks: maze traversal with movable obstacles, ship navigation in icy waters, box delivery, and area clearing. We propose a unified evaluation framework with multi-dimensional quantitative metrics balancing task completion, interaction efficiency, and operational cost. Built on a modular Python architecture, the benchmark integrates PyBullet and Gazebo physics engines, supports ROS interfaces and plug-and-play reinforcement learning policies, and provides pre-trained models and standardized APIs. Contribution/Results: We conduct a systematic cross-method evaluation across all four tasks, demonstrating the framework's effectiveness in distinguishing algorithmic performance in interaction rationality, robustness, and generalization. The codebase, documentation, and trained models are open-sourced.
📝 Abstract
Mobile robots are increasingly deployed in unstructured environments where obstacles and objects are movable. Navigation in such environments is known as interactive navigation, where task completion requires not only avoiding obstacles but also strategic interactions with movable objects. Non-prehensile interactive navigation focuses on non-grasping interaction strategies, such as pushing, rather than relying on prehensile manipulation. Despite a growing body of research in this field, most solutions are evaluated using case-specific setups, limiting reproducibility and cross-comparison. In this paper, we present Bench-NPIN, the first comprehensive benchmark for non-prehensile interactive navigation. Bench-NPIN includes multiple components: 1) a comprehensive range of simulated environments for non-prehensile interactive navigation tasks, including navigating a maze with movable obstacles, autonomous ship navigation in icy waters, box delivery, and area clearing, each with varying levels of complexity; 2) a set of evaluation metrics that capture unique aspects of interactive navigation, such as efficiency, interaction effort, and partial task completion; and 3) demonstrations using Bench-NPIN to evaluate example implementations of established baselines across environments. Bench-NPIN is an open-source Python library with a modular design. The code, documentation, and trained models can be found at https://github.com/IvanIZ/BenchNPIN.
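To make the evaluation metrics concrete, here is a minimal sketch of the three metric families the abstract names: efficiency, interaction effort, and partial task completion. The function names, normalization, and weighting below are illustrative assumptions for exposition, not Bench-NPIN's actual definitions or API.

```python
# Illustrative sketch of interactive-navigation metrics.
# NOTE: these formulas are assumptions for exposition; Bench-NPIN's
# actual metric definitions may differ.

def efficiency(optimal_path_len: float, actual_path_len: float) -> float:
    """Ratio of shortest feasible path length to the path actually driven.

    Capped at 1.0 so a perfect run scores 1.
    """
    return min(1.0, optimal_path_len / actual_path_len)


def interaction_effort(total_push_impulse: float, num_contacts: int,
                       impulse_scale: float = 100.0) -> float:
    """Normalized cost of physically interacting with movable objects.

    Higher values mean more pushing effort; impulse_scale is an
    assumed normalization constant.
    """
    return total_push_impulse / (impulse_scale * max(num_contacts, 1))


def partial_completion(objects_done: int, objects_total: int) -> float:
    """Fraction of task objects handled (e.g., boxes delivered or ice
    cleared), so an incomplete episode still earns partial credit."""
    return objects_done / objects_total


# Hypothetical composite score: reward completion and efficiency,
# penalize interaction effort (weights are arbitrary for illustration).
score = (0.5 * partial_completion(3, 4)
         + 0.3 * efficiency(10.0, 12.5)
         - 0.2 * interaction_effort(150.0, 3))
```

A benchmark can report these dimensions separately rather than as one scalar, since a policy that completes the task by shoving every obstacle aside should be distinguishable from one that interacts sparingly.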