MonoSparse-CAM: Efficient Tree Model Processing via Monotonicity and Sparsity in CAMs

📅 2024-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tree models deployed on content-addressable memory (CAM) hardware suffer from low energy efficiency because their structural properties are not exploited at the circuit level. Method: This paper proposes a structure-aware hardware-software co-optimization framework that jointly exploits the monotonicity and sparsity inherent in tree models. It enables decision-path pruning, monotonic feature grouping, and sparse rule encoding within CAM, establishing both compression and skip mechanisms. Crucially, the design explicitly maps tree-model structure onto circuit-level CAM architecture to maximize hardware resource utilization. Results: Experiments demonstrate a 28.56× energy reduction over baseline CPU execution and an 18.51× improvement over a state-of-the-art CAM accelerator, alongside ≥1.68× higher throughput. This work establishes a new paradigm for energy-efficient hardware deployment of interpretable machine learning models.
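To make the summary's mechanisms concrete, here is a minimal illustrative sketch of how root-to-leaf decision paths can be flattened into CAM-style range rows, with untested features left as "don't care" (wildcard) cells; the wildcard fraction is the sparsity a CAM mapping could exploit. All names (`encode_paths`, `cam_match`, `sparsity`) and the dict-based tree format are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: not the paper's actual MonoSparse-CAM mapping.
import math

WILDCARD = (-math.inf, math.inf)  # "don't care" cell: matches any input value

def encode_paths(tree, n_features):
    """Flatten a small dict-based binary tree into CAM rows: one (lo, hi)
    range per feature per root-to-leaf path; untested features stay wildcards."""
    rows = []
    def walk(node, bounds):
        if "leaf" in node:
            rows.append((tuple(bounds), node["leaf"]))
            return
        f, t = node["feature"], node["threshold"]
        lo, hi = bounds[f]
        left = bounds.copy();  left[f]  = (lo, min(hi, t))   # x[f] <= t branch
        right = bounds.copy(); right[f] = (max(lo, t), hi)   # x[f] >  t branch
        walk(node["left"], left)
        walk(node["right"], right)
    walk(tree, [WILDCARD] * n_features)
    return rows

def cam_match(rows, x):
    """Emulate the CAM's parallel range match: return the label of the first
    row whose every (lo, hi) cell contains the corresponding input value."""
    for bounds, label in rows:
        if all(lo <= v <= hi for (lo, hi), v in zip(bounds, x)):
            return label
    return None

def sparsity(rows):
    """Fraction of cells left as wildcards -- cells a sparsity-aware
    CAM mapping could skip or power-gate."""
    cells = [c for bounds, _ in rows for c in bounds]
    return sum(c == WILDCARD for c in cells) / len(cells)

# Toy depth-2 tree over 3 features; feature 2 is never tested,
# so every encoded row keeps a wildcard in that column.
tree = {"feature": 0, "threshold": 0.5,
        "left":  {"leaf": "A"},
        "right": {"feature": 1, "threshold": 0.3,
                  "left": {"leaf": "B"}, "right": {"leaf": "C"}}}
rows = encode_paths(tree, n_features=3)
```

With this toy tree, `cam_match(rows, [0.2, 0.9, 5.0])` returns `"A"` and 4 of the 9 cells are wildcards, so `sparsity(rows)` is 4/9: a shallow, unbalanced tree leaves many cells unused, which is exactly the structural slack the summary says the framework exploits.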

📝 Abstract
While tree-based machine learning (TBML) models exhibit superior performance compared to neural networks on tabular data and hold promise for energy-efficient acceleration using aCAM arrays, their ideal deployment on hardware with explicit exploitation of TBML structure and aCAM circuitry remains challenging. In this work, we present MonoSparse-CAM, a new CAM-based optimization technique that exploits TBML sparsity and monotonicity in CAM circuitry to further advance processing performance. Our results indicate that MonoSparse-CAM reduces energy consumption by up to 28.56× compared to raw processing and by 18.51× compared to state-of-the-art techniques, while improving computational efficiency by at least 1.68×.
Problem

Research questions and friction points this paper is trying to address.

Efficient Hardware Implementation
Tree-based Machine Learning Models
Energy-saving and Data Processing Acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

MonoSparse-CAM
Energy Efficiency
Computational Speedup
Tergel Molom-Ochir
Duke University
AI accelerators, In-memory computing, Analog Computing, Emerging Devices, Memory
Brady Taylor
Postdoctoral Appointee, Sandia National Laboratories
Neuromorphic computing, Microelectronics, Computer architecture
Hai Li
Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
Yiran Chen
Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina