Perceptual Reality Transformer: Neural Architectures for Simulating Neurological Perception Conditions

📅 2025-08-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Neurological disorders—such as simultanagnosia, prosopagnosia, and ADHD-related attentional deficits—induce profound perceptual distortions that create a critical experiential gap between clinicians and patients. To bridge this gap, we propose the first systematic neuroperceptual simulation framework, covering eight clinically representative disorders. Leveraging Vision Transformers (ViTs), CNNs, and generative models, we design disorder-specific perturbation functions grounded in clinical literature to map natural images into high-fidelity pathological perceptual states. We further introduce the first neuroperceptual simulation benchmark, equipped with quantitative fidelity metrics. Experiments demonstrate that ViT-based simulations significantly outperform CNN- and generation-based alternatives in perceptual fidelity. The framework has been deployed in medical education, clinician empathy training, and assistive technology development. By providing interpretable, empirically validated perceptual simulations, our work establishes a rigorous technical foundation for enhancing clinical understanding and intersubjective empathy.

Technology Category

Application Category

📝 Abstract
Neurological conditions affecting visual perception create profound experiential divides between affected individuals and their caregivers, families, and medical professionals. We present the Perceptual Reality Transformer, a comprehensive framework employing six distinct neural architectures to simulate eight neurological perception conditions with scientifically-grounded visual transformations. Our system learns mappings from natural images to condition-specific perceptual states, enabling others to experience approximations of simultanagnosia, prosopagnosia, ADHD attention deficits, visual agnosia, depression-related changes, anxiety tunnel vision, and Alzheimer's memory effects. Through systematic evaluation across ImageNet and CIFAR-10 datasets, we demonstrate that Vision Transformer architectures achieve optimal performance, outperforming traditional CNN and generative approaches. Our work establishes the first systematic benchmark for neurological perception simulation, contributes novel condition-specific perturbation functions grounded in clinical literature, and provides quantitative metrics for evaluating simulation fidelity. The framework has immediate applications in medical education, empathy training, and assistive technology development, while advancing our fundamental understanding of how neural networks can model atypical human perception.
Problem

Research questions and friction points this paper is trying to address.

Simulating neurological visual perception conditions using neural networks
Mapping natural images to condition-specific perceptual states
Establishing benchmarks for neurological perception simulation fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses six neural architectures for perception simulation
Learns mappings from images to perceptual states
Vision Transformer outperforms CNN and generative methods
🔎 Similar Papers
No similar papers found.