AVM: Towards Structure-Preserving Neural Response Modeling in the Visual Cortex Across Stimuli and Individuals

📅 2025-12-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Deep learning models effectively fit neural responses but struggle to disentangle stable visual encoding from condition-dependent neural adaptation, limiting generalization across stimuli and subjects. To address this, we propose a structure-preserving visual cortical response model featuring a novel modular conditional modulation pathway. Crucially, it explicitly decouples stimulus-content representation from subject-identity modeling while keeping the ViT backbone frozen. Our method integrates plug-and-play modulation subnetworks, condition-aware feature adaptation, and a multi-scenario transfer evaluation framework. Evaluated on two large-scale mouse V1 datasets, our model achieves a 2% higher prediction correlation than state-of-the-art methods and improves cross-dataset FEVE by 9.1%. These results demonstrate substantial gains in generalization, robustness, and neuroscientific interpretability.

๐Ÿ“ Abstract
While deep learning models have shown strong performance in simulating neural responses, they often fail to clearly separate stable visual encoding from condition-specific adaptation, which limits their ability to generalize across stimuli and individuals. We introduce the Adaptive Visual Model (AVM), a structure-preserving framework that enables condition-aware adaptation through modular subnetworks, without modifying the core representation. AVM keeps a Vision Transformer-based encoder frozen to capture consistent visual features, while independently trained modulation paths account for neural response variations driven by stimulus content and subject identity. We evaluate AVM in three experimental settings, including stimulus-level variation, cross-subject generalization, and cross-dataset adaptation, all of which involve structured changes in inputs and individuals. Across two large-scale mouse V1 datasets, AVM outperforms the state-of-the-art V1T model by approximately 2% in predictive correlation, demonstrating robust generalization, interpretable condition-wise modulation, and high architectural efficiency. Specifically, AVM achieves a 9.1% improvement in explained variance (FEVE) under the cross-dataset adaptation setting. These results suggest that AVM provides a unified framework for adaptive neural modeling across biological and experimental conditions, offering a scalable solution under structural constraints. Its design may inform future approaches to cortical modeling in both neuroscience and biologically inspired AI systems.
Problem

Research questions and friction points this paper is trying to address.

Separating stable visual encoding from condition-specific neural adaptation in response modeling.
Generalizing visual cortex models across different stimuli and individual subjects.
Enabling condition-aware adaptation without altering the core visual representation structure.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses frozen Vision Transformer encoder for stable visual features
Employs modular subnetworks for condition-aware adaptation
Achieves robust generalization across stimuli and individuals
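The frozen-backbone-plus-modulation design described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a fixed random linear map stands in for the frozen ViT encoder, a FiLM-style gain/bias pair stands in for a modulation subnetwork, and all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper)
D_IN, D_FEAT, N_NEURONS = 64, 32, 10

# Frozen backbone: a fixed projection standing in for the frozen ViT encoder
W_frozen = rng.standard_normal((D_IN, D_FEAT))

def encode(x):
    """Stable stimulus features from the frozen backbone (never updated)."""
    return np.tanh(x @ W_frozen)

class ModulationPath:
    """Plug-and-play subnetwork for one subject/condition.

    Only the gain, bias, and readout would be trained; the shared
    backbone weights stay fixed, mirroring the structure-preserving idea.
    """
    def __init__(self, n_neurons, d_feat, rng):
        self.gain = np.ones(d_feat)
        self.bias = np.zeros(d_feat)
        self.readout = rng.standard_normal((d_feat, n_neurons)) * 0.1

    def predict(self, x):
        h = encode(x)                  # shared, frozen representation
        h = self.gain * h + self.bias  # condition-aware modulation
        return h @ self.readout        # per-neuron response prediction

subject_a = ModulationPath(N_NEURONS, D_FEAT, rng)
x = rng.standard_normal((5, D_IN))    # 5 stimuli
resp = subject_a.predict(x)
print(resp.shape)  # (5, 10)
```

Adapting to a new subject then means instantiating a fresh `ModulationPath` while reusing `encode` unchanged, which is the sense in which the representation is "structure-preserving".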
Qi Xu
School of Computer Science and Technology, Dalian University of Technology
Shuai Gong
School of Computer Science and Technology, Dalian University of Technology
Xuming Ran
National University of Singapore
Generative model, Visual cortex computation, Memory modelling, Continual learning, AI for Science
Haihua Luo
School of Computer Science and Technology, Dalian University of Technology; Faculty of Information Technology, University of Jyväskylä
Yangfan Hu
School of Information Technology and Artificial Intelligence, Zhejiang University of Finance and Economics