A Recurrent Spiking Network with Hierarchical Intrinsic Excitability Modulation for Schema Learning

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing schema learning research predominantly focuses on a single behavioral paradigm, and prevailing neural architectures lack biological plausibility and interpretability. Method: We propose a generalized behavioral paradigm framework with three novel cognitive tasks, and introduce a Hierarchical Modulatory Recurrent Spiking Neural Network (HM-RSNN) featuring layered intrinsic excitability modulation. Contribution/Results: HM-RSNN introduces two key innovations: (1) a hierarchical dynamic regulation mechanism, in which high-level units adapt to task categories while low-level units optimize intra-task dynamics; and (2) biologically inspired lesion experiments that reveal task-specific spatial distributions of intrinsic excitability within schemas. Experiments demonstrate that HM-RSNN significantly outperforms standard RSNN baselines across all cognitive tasks and exceeds conventional RNNs on the three novel tasks. Furthermore, visualization analyses and hierarchical plasticity modeling trace the evolution of intrinsic excitability and cross-task differences in neural coordination, yielding biologically interpretable insights into the neural dynamics of schema learning.

📝 Abstract
Schema, a form of structured knowledge that promotes transfer learning, is attracting growing attention in both neuroscience and artificial intelligence (AI). Current schema research in neural computation is largely constrained to a single behavioral paradigm and relies heavily on recurrent neural networks (RNNs), which lack neural plausibility and biological interpretability. To address these limitations, this work first constructs a generalized behavioral paradigm framework for schema learning and introduces three novel cognitive tasks, thus supporting comprehensive schema exploration. Second, we propose a new model using recurrent spiking neural networks with hierarchical intrinsic excitability modulation (HM-RSNNs). The top level of the model selects excitability properties for task-specific demands, while the bottom level fine-tunes these properties for intra-task problems. Finally, extensive visualization analyses of HM-RSNNs are conducted to showcase their computational advantages, track the evolution of intrinsic excitability during schema learning, and examine neural coordination differences across tasks. Biologically inspired lesion studies further uncover task-specific distributions of intrinsic excitability within schemas. Experimental results show that HM-RSNNs significantly outperform RSNN baselines across all tasks and exceed RNNs on the three novel cognitive tasks. Additionally, HM-RSNNs offer deeper insights into the neural dynamics underlying schema learning.
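The two-level modulation described in the abstract can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: the per-task threshold/time-constant profiles, the `delta_threshold` correction, and all parameter values are assumptions chosen only to show the idea of task-level selection plus intra-task fine-tuning of intrinsic excitability in a recurrent LIF network.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64        # recurrent LIF neurons
N_TASKS = 3   # task categories (high-level context)

# High-level modulation (hypothetical): one intrinsic-excitability
# profile per task category, here a firing threshold and a membrane
# time constant for each neuron.
task_thresholds = rng.uniform(0.8, 1.2, size=(N_TASKS, N))
task_tau = rng.uniform(10.0, 30.0, size=(N_TASKS, N))  # ms

# Low-level modulation (hypothetical): a small per-trial threshold
# correction that would be learned from intra-task error signals;
# initialized to zero here.
delta_threshold = np.zeros(N)

# Fixed random recurrent weights for the sketch.
W = rng.normal(0.0, 0.3 / np.sqrt(N), size=(N, N))

def run_trial(task_id, inputs, dt=1.0):
    """Simulate one trial of a discrete-time LIF network whose
    excitability is set by the task-level profile plus the
    trial-level correction. `inputs` has shape (T, N)."""
    theta = task_thresholds[task_id] + delta_threshold  # effective threshold
    decay = np.exp(-dt / task_tau[task_id])             # membrane decay factor
    v = np.zeros(N)
    spikes = np.zeros(N)
    rates = np.zeros(N)
    for x in inputs:
        v = decay * v + W @ spikes + x                  # leaky integration
        spikes = (v >= theta).astype(float)             # threshold crossing
        v = np.where(spikes > 0, 0.0, v)                # reset on spike
        rates += spikes
    return rates / len(inputs)                          # mean firing rate

inputs = rng.uniform(0.0, 0.5, size=(100, N))
r0 = run_trial(0, inputs)   # same stimulus, task context 0
r1 = run_trial(1, inputs)   # same stimulus, task context 1
```

Because the task context swaps in a different excitability profile, the same input stream can drive different population dynamics per task, which is the role the paper assigns to the top-level modulation; gradient-based fine-tuning of `delta_threshold` would play the bottom-level role.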
Problem

Research questions and friction points this paper is trying to address.

Multimodal Learning
Brain-inspired Computing
Neural Network Model
Innovation

Methods, ideas, or system contributions that make the work stand out.

HM-RSNNs
Dynamic Neuronal Activation
Pattern Learning
Yingchao Yu
College of Information Science and Technology, Donghua University, No. 2999 Renmin North Road, Songjiang District, Shanghai, 201620, Shanghai, China
Yaochu Jin
School of Engineering, Westlake University, 600 Dunyu Road, Xihu District, Hangzhou, 310030, Zhejiang, China
Yuping Yan
School of Engineering, Westlake University, 600 Dunyu Road, Xihu District, Hangzhou, 310030, Zhejiang, China