TensorSLM: Energy-efficient Embedding Compression of Sub-billion Parameter Language Models on Low-end Devices

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high redundancy of embedding layers, excessive energy consumption, and the difficulty of balancing accuracy and efficiency when deploying small language models (SLMs) on resource-constrained edge devices (e.g., a Raspberry Pi), this paper proposes a training-free method for compressing pre-trained word embedding layers. Specifically, it applies tensor-train decomposition (TTD) to convert each pre-trained token embedding vector into a matrix product state (MPS) representation, achieving a low-rank reconstruction of the embedding layer without any fine-tuning and significantly reducing both computational and memory overhead. Experiments on a Raspberry Pi demonstrate roughly 2.0× compression of the embedding layer with no accuracy loss on downstream language tasks, while per-query energy consumption drops by about 50%. The method thus achieves a favourable trade-off among latency, energy efficiency, and model accuracy, enabling efficient SLM deployment on low-power edge hardware.

📝 Abstract
Small Language Models (SLMs, or on-device LMs) have significantly fewer parameters than Large Language Models (LLMs). They are typically deployed on low-end devices, like mobile phones and single-board computers. Unlike LLMs, which rely on increasing model size for better generalisation, SLMs designed for edge applications are expected to have adaptivity to the deployment environments and energy efficiency given the device battery life constraints, which are not addressed in datacenter-deployed LLMs. This paper addresses these two requirements by proposing a training-free token embedding compression approach using Tensor-Train Decomposition (TTD). Each pre-trained token embedding vector is converted into a lower-dimensional Matrix Product State (MPS). We comprehensively evaluate the extracted low-rank structures across compression ratio, language task performance, latency, and energy consumption on a typical low-end device, i.e. Raspberry Pi. Taking the sub-billion parameter versions of GPT-2/Cerebras-GPT and OPT models as examples, our approach achieves a comparable language task performance to the original model with around $2.0\times$ embedding layer compression, while the energy consumption of a single query drops by half.
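The core operation the abstract describes, converting a token embedding vector into an MPS via TTD, can be sketched with the standard TT-SVD scheme of sequential truncated SVDs. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the factorisation dimensions `dims` and the rank cap `max_rank` are illustrative choices rather than the paper's settings.

```python
import numpy as np

def tt_decompose(vec, dims, max_rank):
    """Factorise a flat embedding vector into Tensor-Train (MPS) cores
    via sequential truncated SVDs (the standard TT-SVD scheme)."""
    assert np.prod(dims) == vec.size
    cores, r = [], 1
    rest = vec
    for d in dims[:-1]:
        rest = rest.reshape(r * d, -1)          # unfold the remaining tensor
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        rk = min(max_rank, S.size)              # truncate to the TT rank
        cores.append(U[:, :rk].reshape(r, d, rk))
        rest = S[:rk, None] * Vt[:rk]           # carry the residual forward
        r = rk
    cores.append(rest.reshape(r, dims[-1], 1))  # last core closes the chain
    return cores

def tt_reconstruct(cores):
    """Contract the MPS cores back into a flat embedding vector."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(-1)
```

When the TT ranks are capped below the full ranks of the unfoldings, the cores hold fewer parameters than the original vector, trading an exact reconstruction for a low-rank approximation; this is the compression-versus-accuracy trade-off the paper evaluates.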
Problem

Research questions and friction points this paper is trying to address.

Compress token embeddings for energy-efficient SLMs on low-end devices
Maintain language task performance despite embedding layer compression
Reduce energy consumption per query in sub-billion parameter models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free token embedding compression requiring no fine-tuning
Tensor-Train Decomposition for compact low-rank embedding layers
Matrix Product State representation that halves per-query energy
Mingxue Xu, Imperial College London
Yao Lei Xu, Imperial College London, London, United Kingdom
Danilo P. Mandic, Imperial College London, London, United Kingdom