🤖 AI Summary
This work addresses the high energy consumption of conventional deep neural networks in aspect term extraction by introducing spiking neural networks (SNNs) to this task for the first time. The authors propose a sequence labeling model based on ternary spiking neurons and an event-driven mechanism, which captures inter-word temporal dependencies through sparse activation. By integrating direct spike-based training with pseudo-gradient optimization, the model achieves performance comparable to state-of-the-art deep learning approaches on four SemEval benchmark datasets while significantly reducing energy consumption. These results demonstrate the efficiency and feasibility of SNNs for natural language processing tasks.
📝 Abstract
Aspect Term Extraction (ATE) identifies aspect terms in review sentences and is a key subtask of sentiment analysis. While most existing approaches treat ATE as sequence labeling with energy-intensive deep neural networks (DNNs), this paper proposes a more energy-efficient alternative using Spiking Neural Networks (SNNs). Through sparse activations and event-driven inference, SNNs capture temporal dependencies between words, making them well suited to ATE. The proposed architecture, SpikeATE, employs ternary spiking neurons and direct spike training fine-tuned with pseudo-gradients. Evaluated on four benchmark SemEval datasets, SpikeATE achieves performance comparable to state-of-the-art DNNs with significantly lower energy consumption. These results highlight SNNs as a practical and sustainable choice for ATE.
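To make the ternary-spiking idea concrete, here is a minimal sketch of a ternary leaky integrate-and-fire (LIF) neuron step with a rectangular pseudo-gradient. The dynamics, thresholds, decay constant, and surrogate width are illustrative assumptions, not the paper's actual SpikeATE formulation.

```python
import numpy as np

def ternary_lif_step(v, x, decay=0.5, v_th=1.0):
    """One time step of a ternary LIF neuron (illustrative).
    Emits +1 when the membrane potential crosses +v_th, -1 when it
    crosses -v_th, and 0 otherwise -- so most outputs are zero (sparse)."""
    v = decay * v + x                      # leaky integration of input current
    spike = np.where(v >= v_th, 1.0, np.where(v <= -v_th, -1.0, 0.0))
    v = v - spike * v_th                   # soft reset after a spike
    return v, spike

def pseudo_grad(v, v_th=1.0, width=0.5):
    """Rectangular surrogate for the non-differentiable spike function:
    the gradient is 1/(2*width) near either threshold, 0 elsewhere.
    This stands in for the paper's pseudo-gradient during training."""
    near = (np.abs(np.abs(v) - v_th) < width).astype(float)
    return near / (2.0 * width)

# Feed a short input sequence (e.g. per-word features over time) through
# one neuron and record its ternary spike train.
v = 0.0
spikes = []
for x in [0.6, 0.6, -2.5, 0.1]:
    v, s = ternary_lif_step(v, x)
    spikes.append(float(s))
print(spikes)  # → [0.0, 0.0, -1.0, 0.0]
```

Because the spike function has zero gradient almost everywhere, training replaces its derivative with `pseudo_grad` during backpropagation, which is what makes direct spike-based training of such networks feasible.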