ReLATE: Learning Efficient Sparse Encoding for High-Performance Tensor Decomposition

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-dimensional sparse tensor decomposition suffers from parallel performance bottlenecks due to irregular computation and memory-access patterns. Method: This paper proposes the first reinforcement learning (RL)-based adaptive sparse tensor encoding framework, which requires no human-labeled samples. It combines model-free and model-based RL algorithms, incorporates rule-driven action masking, and employs dynamics-informed action filtering to jointly learn optimal encoding strategies across real and simulated environments, automatically adapting to diverse tensor shapes and data distributions. Contribution/Results: Compared to expert-designed sparse formats, the framework achieves up to 2× speedup over the best sparse format, with a geometric-mean speedup of 1.4–1.46× across multiple benchmark datasets. It improves both tensor decomposition (TD) computational efficiency and generalization capability.

📝 Abstract
Tensor decomposition (TD) is essential for analyzing high-dimensional sparse data, yet its irregular computations and memory-access patterns pose major performance challenges on modern parallel processors. Prior works rely on expert-designed sparse tensor formats that fail to adapt to irregular tensor shapes and/or highly variable data distributions. We present the reinforcement-learned adaptive tensor encoding (ReLATE) framework, a novel learning-augmented method that automatically constructs efficient sparse tensor representations without labeled training samples. ReLATE employs an autonomous agent that discovers optimized tensor encodings through direct interaction with the TD environment, leveraging a hybrid model-free and model-based algorithm to learn from both real and imagined actions. Moreover, ReLATE introduces rule-driven action masking and dynamics-informed action filtering mechanisms that ensure functionally correct tensor encoding with bounded execution time, even during early learning stages. By automatically adapting to both irregular tensor shapes and data distributions, ReLATE generates sparse tensor representations that consistently outperform expert-designed formats across diverse sparse tensor data sets, achieving up to 2X speedup compared to the best sparse format, with a geometric-mean speedup of 1.4-1.46X.
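ReLATE's concrete masking rules are not given in this summary, but the "rule-driven action masking" idea the abstract describes can be illustrated generically: before the agent samples an encoding action, rules zero out candidates that would produce an invalid encoding, so even an untrained policy only emits functionally correct choices. A minimal sketch, assuming a softmax policy over candidate encoding actions and a boolean mask supplied by hypothetical correctness rules (all names here are illustrative, not from the paper):

```python
import math
import random

def masked_sample(logits, mask, rng=random.Random(0)):
    """Sample an action from softmax(logits), restricted to the
    actions where mask[i] is True (i.e. allowed by the rules)."""
    allowed = [i for i, ok in enumerate(mask) if ok]
    m = max(logits[i] for i in allowed)              # numerical stability
    weights = [math.exp(logits[i] - m) for i in allowed]
    return rng.choices(allowed, weights=weights, k=1)[0]

# Toy example: 4 candidate encoding actions; actions 1 and 3 are
# ruled out by the (hypothetical) correctness rules, so the agent
# can never sample them, even with an untrained policy.
logits = [0.2, 1.5, 0.1, 2.0]
mask = [True, False, True, False]
action = masked_sample(logits, mask)
assert mask[action]
```

Masking at sampling time (rather than penalizing invalid actions with negative reward) is what gives the bounded-correctness guarantee during early learning: invalid encodings are unreachable by construction.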
Problem

Research questions and friction points this paper is trying to address.

Optimizing sparse tensor encoding for efficient decomposition
Adapting to irregular tensor shapes and data distributions
Improving performance on modern parallel processors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for adaptive tensor encoding
Hybrid model-free and model-based algorithm
Rule-driven action masking for correct encoding
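The paper's hybrid algorithm is not specified in this summary, but the idea of learning "from both real and imagined actions" is in the spirit of Dyna-style RL: every real transition both updates the value function directly (model-free) and is stored in a learned dynamics model, which is then replayed to generate extra imagined updates (model-based planning). A self-contained Dyna-Q sketch on a toy chain environment, standing in for the (unspecified) TD encoding environment:

```python
import random
from collections import defaultdict

# Toy deterministic chain: states 0..4, actions +1/-1, reward 1 on
# reaching state 4 (a stand-in for the real TD encoding environment).
def env_step(s, a):
    s2 = max(0, min(4, s + a))
    return (1.0 if s2 == 4 else 0.0), s2

def dyna_q(episodes=30, n_imagined=20, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    actions = (1, -1)
    Q = defaultdict(float)
    model = {}  # learned dynamics: (s, a) -> (r, s')

    def update(s, a, r, s2):
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy action selection
            if rng.random() < 0.2:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            r, s2 = env_step(s, a)       # real interaction (model-free)
            update(s, a, r, s2)
            model[(s, a)] = (r, s2)      # remember transition
            for _ in range(n_imagined):  # imagined actions (model-based)
                ps, pa = rng.choice(list(model))
                pr, ps2 = model[(ps, pa)]
                update(ps, pa, pr, ps2)
            s = s2
    return Q

Q = dyna_q()
# After training, the greedy policy moves right from every state.
assert all(Q[(s, 1)] > Q[(s, -1)] for s in range(4))
```

The imagined replays let value estimates propagate far faster than real interactions alone, which matters when each real step (here, encoding and timing a tensor kernel) is expensive.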