LogHD: Robust Compression of Hyperdimensional Classifiers via Logarithmic Class-Axis Reduction

📅 2025-11-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high memory overhead (O(CD)) and the difficulty of balancing robustness and scalability inherent in "one-prototype-per-class" designs in Hyperdimensional Computing (HDC), this paper proposes LogHD, the first logarithmic class-axis compression framework for HDC classification. Its core innovations are a capacity-aware codebook and an activation-contour decoding mechanism, which jointly compress the class dimension to logarithmic scale while preserving high-dimensional representational fidelity. LogHD integrates bundle-based hypervector construction, k-ary encoding, and feature-axis sparsification to enable hardware-efficient bit-level operations. Experiments demonstrate that LogHD achieves 2.5–3.0× higher bit-flip resilience under identical memory budgets. In ASIC implementation, it delivers 498× energy efficiency and 62.6× speedup over CPU/GPU baselines, significantly outperforming existing HDC hardware approaches.

๐Ÿ“ Abstract
Hyperdimensional computing (HDC) suits memory, energy, and reliability-constrained systems, yet the standard"one prototype per class"design requires $O(CD)$ memory (with $C$ classes and dimensionality $D$). Prior compaction reduces $D$ (feature axis), improving storage/compute but weakening robustness. We introduce LogHD, a logarithmic class-axis reduction that replaces the $C$ per-class prototypes with $n!approx!lceillog_k C ceil$ bundle hypervectors (alphabet size $k$) and decodes in an $n$-dimensional activation space, cutting memory to $O(Dlog_k C)$ while preserving $D$. LogHD uses a capacity-aware codebook and profile-based decoding, and composes with feature-axis sparsification. Across datasets and injected bit flips, LogHD attains competitive accuracy with smaller models and higher resilience at matched memory. Under equal memory, it sustains target accuracy at roughly $2.5$-$3.0 imes$ higher bit-flip rates than feature-axis compression; an ASIC instantiation delivers $498 imes$ energy efficiency and $62.6 imes$ speedup over an AMD Ryzen 9 9950X and $24.3 imes$/$6.58 imes$ over an NVIDIA RTX 4090, and is $4.06 imes$ more energy-efficient and $2.19 imes$ faster than a feature-axis HDC ASIC baseline.
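The class-axis idea from the abstract can be illustrated with a toy sketch: give each class a base-$k$ codeword of length $n = \lceil \log_k C \rceil$, represent each codeword digit with a shared random bipolar symbol hypervector, and decode by picking the most similar symbol per position. This is only a minimal illustration, assuming a naive codebook and noise-free queries; the paper's capacity-aware codebook, bundle training, and activation-space decoding are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
C, D, k = 100, 8192, 4                       # classes, dimensionality, alphabet
n = int(np.ceil(np.log(C) / np.log(k)))      # n = ceil(log_k C) = 4 positions

# Naive base-k codebook: class c -> its base-k digits (least significant first).
# LogHD's capacity-aware codebook assigns codes more carefully; this is a toy.
codes = np.array([[(c // k**i) % k for i in range(n)] for c in range(C)])

# Shared symbol hypervectors: k random bipolar vectors per code position.
symbols = rng.choice([-1, 1], size=(n, k, D)).astype(np.int8)

def encode(cls):
    """Bundle (elementwise sum + sign) the n symbols named by the codeword."""
    s = symbols[np.arange(n), codes[cls]].sum(axis=0)
    return np.sign(s + 0.5).astype(np.int8)  # break 0-ties toward +1

def decode(hv):
    """Profile-style decoding: per position, pick the most similar symbol."""
    sims = np.einsum('d,pkd->pk', hv.astype(np.int32), symbols.astype(np.int32))
    digits = sims.argmax(axis=1)
    return int((digits * k**np.arange(n)).sum())

# Noise-free round trip recovers every class from only n bundled vectors.
assert all(decode(encode(c)) == c for c in range(C))
```

Note the storage asymmetry this buys: the per-class design keeps $C$ prototypes of length $D$, while here only $n \ll C$ positions need representing along the class axis.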
Problem

Research questions and friction points this paper is trying to address.

Reducing the memory footprint of hyperdimensional classifiers from O(CD) to O(D log C)
Maintaining model robustness while achieving significant compression ratios
Improving resilience to bit-flip errors relative to feature-axis compression
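The footprint reduction in the first point is plain arithmetic. A minimal sketch with illustrative sizes (C=100 classes, D=10,000 dimensions, alphabet k=4; these numbers are hypothetical, not the paper's benchmarks):

```python
import math

C, D, k = 100, 10_000, 4                 # illustrative sizes, not benchmarks
n = math.ceil(math.log(C, k))            # bundles needed: ceil(log_k C) = 4

per_class = C * D                        # one prototype per class: O(CD)
log_axis = n * D                         # LogHD bundles: O(D log_k C)
print(per_class, log_axis, per_class / log_axis)   # 1000000 40000 25.0
```

A larger alphabet k shrinks n further, at the cost of packing more classes per position, which is where the robustness/scalability tension the paper addresses comes in.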
Innovation

Methods, ideas, or system contributions that make the work stand out.

Logarithmic class-axis reduction replaces the C per-class prototypes with roughly ⌈log_k C⌉ bundle hypervectors
Capacity-aware codebook paired with profile-based decoding in an n-dimensional activation space
Composes with feature-axis sparsification for further compression
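The composition in the last bullet can be pictured as applying a shared feature-axis keep-mask on top of the class-axis bundles, so both axes shrink independently. A rough sketch; the mask construction and keep ratio here are illustrative stand-ins, not the paper's sparsification scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
n, D, rho = 4, 8192, 0.25                    # bundles, dims, keep ratio

bundles = rng.choice([-1, 1], size=(n, D))   # class axis: n rows instead of C
mask = rng.permutation(D) < int(rho * D)     # shared feature-axis keep-mask

sparse = bundles[:, mask]                    # compose both compressions
print(sparse.shape)                          # (4, 2048)
```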