Static or Temporal? Semantic Scene Simplification to Aid Wayfinding in Immersive Simulations of Bionic Vision

📅 2025-07-14
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Bionic vision systems face challenges in scene understanding under extremely low resolution and bandwidth constraints, often leading to information overload. Method: This study proposes a semantic scene simplification framework comprising SemanticEdges (semantic edge extraction) and SemanticRaster (temporal semantic rasterization) to dynamically compress navigation-relevant information in urban environments. Unlike full-information rendering, the method leverages a biologically inspired visual model implemented in a VR-based immersive simulation platform, where controlled experiments compare static versus temporally interleaved semantic presentation paradigms. Contribution/Results: SemanticEdges significantly improves path navigation success rate, while SemanticRaster substantially reduces collision frequency; both outperform baseline approaches. The results empirically validate that semantic preprocessing enhances both task performance and user experience. This work establishes an adaptive, lightweight design paradigm for bandwidth-constrained bionic vision interfaces.

๐Ÿ“ Abstract
Visual neuroprostheses (bionic eye) aim to restore a rudimentary form of vision by translating camera input into patterns of electrical stimulation. To improve scene understanding under extreme resolution and bandwidth constraints, prior work has explored computer vision techniques such as semantic segmentation and depth estimation. However, presenting all task-relevant information simultaneously can overwhelm users in cluttered environments. We compare two complementary approaches to semantic preprocessing in immersive virtual reality: SemanticEdges, which highlights all relevant objects at once, and SemanticRaster, which staggers object categories over time to reduce visual clutter. Using a biologically grounded simulation of prosthetic vision, 18 sighted participants performed a wayfinding task in a dynamic urban environment across three conditions: edge-based baseline (Control), SemanticEdges, and SemanticRaster. Both semantic strategies improved performance and user experience relative to the baseline, with each offering distinct trade-offs: SemanticEdges increased the odds of success, while SemanticRaster boosted the likelihood of collision-free completions. These findings underscore the value of adaptive semantic preprocessing for prosthetic vision and, more broadly, may inform the design of low-bandwidth visual interfaces in XR that must balance information density, task relevance, and perceptual clarity.
Problem

Research questions and friction points this paper is trying to address.

Improving scene understanding in bionic vision under extreme constraints
Reducing visual clutter in prosthetic vision for better wayfinding
Balancing information density and clarity in low-bandwidth visual interfaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

SemanticEdges highlights relevant objects simultaneously
SemanticRaster staggers object categories over time
Adaptive semantic preprocessing improves prosthetic vision
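As a rough illustration of the two strategies above (a minimal sketch, not the paper's implementation), the code below assumes the scene is available as a per-pixel semantic label map and uses hypothetical class IDs for navigation-relevant categories. A SemanticEdges-style pass outlines every relevant class in a single frame, while a SemanticRaster-style pass shows one class per frame, cycling through categories over time to reduce clutter.

```python
import numpy as np

# Hypothetical class IDs for navigation-relevant categories
# (e.g. sidewalk, pedestrian, obstacle); the paper's actual labels may differ.
RELEVANT = {1, 2, 3}

def semantic_edges(labels: np.ndarray) -> np.ndarray:
    """SemanticEdges-style: boundary mask of ALL relevant classes at once."""
    mask = np.isin(labels, list(RELEVANT))
    edges = np.zeros_like(mask)
    # A pixel is an edge if its relevance differs from its right or bottom neighbor.
    edges[:, :-1] |= mask[:, :-1] != mask[:, 1:]
    edges[:-1, :] |= mask[:-1, :] != mask[1:, :]
    return edges

def semantic_raster(labels: np.ndarray, frame_idx: int) -> np.ndarray:
    """SemanticRaster-style: show ONE relevant class per frame, cycling in time."""
    cats = sorted(RELEVANT)
    active = cats[frame_idx % len(cats)]
    return labels == active
```

Feeding `semantic_raster` consecutive frame indices interleaves the categories temporally, so at any instant the simulated implant renders only one object class; `semantic_edges` instead spends its limited bandwidth on the outlines of everything relevant simultaneously.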