🤖 AI Summary
This work addresses the high computational cost and energy consumption of large language models, driven primarily by the quadratic complexity of attention and by dense feed-forward networks. To mitigate these costs, the authors propose Module-aware Architecture Refinement (MAR), a two-stage framework that first replaces attention with State Space Models (SSMs) for linear-complexity sequence modeling and then reduces feed-forward computation via activation sparsification. They further introduce the Adaptive Ternary Multi-step Neuron (ATMN) and the Spike-aware Bidirectional Distillation Strategy (SBDS) to alleviate the low information density and temporal misalignment that arise when combining SSMs with spiking neural networks. Experiments show that MAR substantially reduces inference energy consumption while recovering the performance of its dense counterpart, outperforming efficient architectures of comparable and even larger scale.
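To make the spiking component concrete: a ternary multi-step neuron accumulates a membrane potential over several timesteps and emits spikes in {-1, 0, +1}, carrying more information per spike than a binary neuron. The sketch below is a generic illustration of that idea; the specific dynamics (leak factor, symmetric thresholds, reset-by-subtraction) are assumptions for illustration, not the paper's ATMN definition.

```python
import numpy as np

def ternary_multistep_neuron(x, steps=4, theta=1.0, leak=0.9):
    """Toy ternary spiking neuron: integrates the input current over
    several timesteps and emits spikes in {-1, 0, +1}.  The leak,
    threshold, and soft-reset choices here are illustrative only."""
    v = np.zeros_like(x, dtype=float)       # membrane potential
    spikes = []
    for _ in range(steps):
        v = leak * v + x                    # leaky integration of the input
        s = np.where(v >= theta, 1.0,       # positive spike
            np.where(v <= -theta, -1.0,     # negative spike
                     0.0))                  # silent
        v -= s * theta                      # soft reset: subtract threshold
        spikes.append(s)
    return np.stack(spikes)                 # shape: (steps, *x.shape)

out = ternary_multistep_neuron(np.array([1.2, -0.3, -2.0]))
```

Because the spike train is ternary and mostly zero, downstream matrix products reduce to signed additions, which is where the energy savings of spiking computation come from.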
📝 Abstract
Large Language Models (LLMs) excel across diverse domains but suffer from high energy costs due to quadratic attention and dense Feed-Forward Network (FFN) operations. To address these issues, we propose Module-aware Architecture Refinement (MAR), a two-stage framework that integrates State Space Models (SSMs) for linear-time sequence modeling and applies activation sparsification to reduce FFN costs. In addition, to mitigate the low information density and temporal mismatch that arise when integrating Spiking Neural Networks (SNNs) with SSMs, we design the Adaptive Ternary Multi-step Neuron (ATMN) and the Spike-aware Bidirectional Distillation Strategy (SBDS). Extensive experiments demonstrate that MAR effectively restores the performance of its dense counterpart under constrained resources while substantially reducing inference energy consumption. Furthermore, it outperforms efficient models of comparable or even larger scale, underscoring its potential for building efficient and practical LLMs.
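The FFN-side saving can be illustrated with a simple form of activation sparsification: keep only the k largest-magnitude hidden activations per token, so the down-projection only needs to touch a fraction of the hidden units. This is a minimal sketch under that top-k assumption (the function name, ReLU nonlinearity, and selection rule are illustrative; the paper's actual sparsification criterion may differ).

```python
import numpy as np

def topk_sparse_ffn(x, W_in, W_out, k):
    """Toy FFN with top-k activation sparsification: per token, only
    the k largest-magnitude hidden activations are kept, so the
    down-projection sees a sparse hidden vector.  Illustrative only."""
    h = np.maximum(x @ W_in, 0.0)                   # ReLU hidden activations
    idx = np.argsort(np.abs(h), axis=-1)[..., -k:]  # indices of top-k units
    mask = np.zeros_like(h)
    np.put_along_axis(mask, idx, 1.0, axis=-1)      # keep only top-k units
    return (h * mask) @ W_out                       # sparse hidden -> output

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))         # 2 tokens, model dim 8
W_in = rng.normal(size=(8, 32))     # hidden dim 32
W_out = rng.normal(size=(32, 8))
y = topk_sparse_ffn(x, W_in, W_out, k=4)   # only 4/32 hidden units active
```

In a dense-matmul implementation the mask only zeroes values, but on hardware (or with gather-based kernels) the down-projection can skip the masked columns entirely, which is what reduces FFN compute.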