SymbolSight: Minimizing Inter-Symbol Interference for Reading with Prosthetic Vision

📅 2026-01-24
🤖 AI Summary
This study addresses the challenges of reading with retinal prostheses, where low resolution and visual persistence cause significant letter confusion due to overlapping afterimages. To mitigate systematic misreading, the authors propose a computational framework that integrates linguistic bigram statistics with a neural surrogate observer model. This approach combines symbol-to-letter mapping optimization with language priors to design non-uniform symbol sets tailored to the perceptual characteristics of prosthetic vision. Through simulations of prosthetic vision, confusion estimation, and an optimization algorithm informed by large-scale corpora, the method demonstrates substantial improvements across Arabic, Bulgarian, and English: the optimized symbol sets reduce letter confusion by a median factor of 22 compared to native alphabets and significantly enhance predicted readability, thereby overcoming the limitations of conventional fonts under low-bandwidth, serial visual processing.

📝 Abstract
Retinal prostheses restore limited visual perception, but low spatial resolution and temporal persistence make reading difficult. In sequential letter presentation, the afterimage of one symbol can interfere with perception of the next, leading to systematic recognition errors. Rather than relying on future hardware improvements, we investigate whether optimizing the visual symbols themselves can mitigate this temporal interference. We present SymbolSight, a computational framework that selects symbol-to-letter mappings to minimize confusion among frequently adjacent letters. Using simulated prosthetic vision (SPV) and a neural proxy observer, we estimate pairwise symbol confusability and optimize assignments using language-specific bigram statistics. Across simulations in Arabic, Bulgarian, and English, the resulting heterogeneous symbol sets reduced predicted confusion by a median factor of 22 relative to native alphabets. These results suggest that standard typography is poorly matched to serial, low-bandwidth prosthetic vision and demonstrate how computational modeling can efficiently narrow the design space of visual encodings to generate high-potential candidates for future psychophysical and clinical evaluation.
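The core optimization the abstract describes, assigning symbols to letters so that frequently adjacent letters get symbols that are hard to confuse, can be illustrated as a quadratic-assignment-style search. The sketch below is not the authors' implementation: it assumes equal-sized symbol and letter sets, a precomputed bigram frequency matrix, a precomputed pairwise confusability matrix (which the paper derives from a neural proxy observer under simulated prosthetic vision), and a simple greedy pairwise-swap search in place of whatever optimizer the paper uses.

```python
import random

def expected_confusion(assign, bigram, confus):
    """Bigram-weighted confusion of a symbol-to-letter mapping.

    assign[i]     = index of the symbol rendered for letter i
    bigram[i][j]  = corpus frequency of letter j following letter i
    confus[a][b]  = estimated confusability of symbols a and b
    (all matrices here are hypothetical stand-ins for the paper's data)
    """
    n = len(assign)
    return sum(
        bigram[i][j] * confus[assign[i]][assign[j]]
        for i in range(n) for j in range(n) if i != j
    )

def optimize_assignment(bigram, confus, iters=2000, seed=0):
    """Greedy pairwise-swap search over symbol-to-letter mappings."""
    rng = random.Random(seed)
    n = len(bigram)
    assign = list(range(n))  # start from the identity mapping
    best = expected_confusion(assign, bigram, confus)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        assign[i], assign[j] = assign[j], assign[i]  # try a swap
        cost = expected_confusion(assign, bigram, confus)
        if cost < best:
            best = cost  # keep the improving swap
        else:
            assign[i], assign[j] = assign[j], assign[i]  # undo it
    return assign, best
```

The objective is a quadratic assignment problem, so this local search only finds a local optimum; it is meant to show the shape of the objective (confusability weighted by adjacency frequency), not to reproduce the paper's reported factor-of-22 reduction.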
Problem

Research questions and friction points this paper is trying to address.

prosthetic vision
inter-symbol interference
temporal persistence
reading
symbol recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

prosthetic vision
symbol optimization
inter-symbol interference
computational modeling
visual encoding
Jasmine Lesner
Department of Computer Science, University of California, Santa Barbara, CA 93106, USA
Michael Beyeler
University of California, Santa Barbara
Bionic Vision · Blindness · Low Vision · Computational Neuroscience · Neuroengineering