🤖 AI Summary
Current neuromorphic chips suffer from rigid network topologies and insufficient programmability of neurons, limiting task adaptability and scalability. To address these challenges, this work proposes a hierarchical topology encoding mechanism enabling flexible mapping of arbitrary sparse spiking neural networks; designs a multi-granularity programmable instruction set supporting dynamic configuration of spiking neurons, synapses, and on-chip learning; and constructs an event-driven multi-core architecture with brain-inspired hierarchical communication, co-optimized with a hardware-aware compiler stack for efficient resource scheduling. Evaluated on speech recognition, ECG classification, and cross-day brain–machine interface decoding tasks, the system achieves over 200× higher energy efficiency than an NVIDIA RTX 3090 GPU while maintaining comparable accuracy. This work establishes a scalable, software–hardware co-design paradigm for highly adaptive and large-scale neuromorphic computing.
📝 Abstract
Brain-inspired computing has emerged as a promising paradigm to overcome the energy-efficiency limitations of conventional intelligent systems by emulating the brain's partitioned architecture and event-driven sparse computation. However, existing brain-inspired chips often suffer from rigid network topology constraints and limited neuronal programmability, hindering their adaptability. To address these challenges, we present TaiBai, an event-driven, programmable many-core brain-inspired processor that exploits temporal and spatial spike sparsity to minimize bandwidth and computational overhead. The TaiBai chip has three key features: first, a brain-inspired hierarchical topology encoding scheme flexibly supports arbitrary network architectures while sharply reducing storage overhead for large-scale networks; second, a multi-granularity instruction set makes brain-like spiking neurons and synapses programmable with various dynamics and on-chip learning rules; third, a co-designed compiler stack optimizes task mapping and resource allocation. Evaluated across tasks such as speech recognition, ECG classification, and cross-day brain-computer interface decoding, spiking neural networks deployed on the TaiBai chip achieve more than 200 times higher energy efficiency than a standard NVIDIA RTX 3090 GPU at comparable accuracy. These results demonstrate TaiBai's potential as a scalable, programmable, and ultra-efficient platform for both multi-scale brain simulation and brain-inspired computation.
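The event-driven sparse computation that the abstract credits for TaiBai's efficiency can be illustrated with a minimal software sketch: work is done only when a spike event arrives, so compute scales with spike count rather than network size. All class names, parameters, and the synapse-table layout below are hypothetical illustrations, not the chip's actual instruction set or data structures.

```python
# Minimal sketch of event-driven spiking computation (illustrative only):
# neurons are updated lazily, only when a spike event reaches them.

class LIFNeuron:
    """Leaky integrate-and-fire neuron with lazy, event-driven updates."""

    def __init__(self, tau=0.9, threshold=1.0):
        self.tau = tau            # per-timestep membrane decay factor
        self.threshold = threshold
        self.v = 0.0              # membrane potential
        self.last_t = 0           # time of the last update

    def receive(self, weight, t):
        # Apply decay for the elapsed interval, then integrate the input.
        self.v *= self.tau ** (t - self.last_t)
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return True           # this neuron emits a spike
        return False


def run(events, synapses, neurons):
    """Route (time, source) spike events through a sparse synapse table.

    Only the fan-out targets of each spike are touched (spatial sparsity),
    and each target is updated only at event times (temporal sparsity).
    """
    out = []
    for t, src in sorted(events):
        for dst, w in synapses.get(src, []):
            if neurons[dst].receive(w, t):
                out.append((t, dst))
    return out
```

In this toy model, two closely spaced input spikes through a 0.6-weight synapse push a neuron with threshold 1.0 over its firing threshold, while a single spike decays away silently; a dense simulator would instead update every neuron at every timestep regardless of activity.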