Rethinking Attention Output Projection: Structured Hadamard Transforms for Efficient Transformers

📅 2026-03-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the quadratic growth in parameters and computational cost of dense output projections in multi-head attention with respect to model dimensionality. To mitigate this, the authors propose replacing the conventional learnable projection with a fixed, parameter-free Walsh–Hadamard transform, augmented by a lightweight learnable affine scaling. This approach preserves global cross-head interactions while substantially reducing complexity. As the first study to integrate structured Hadamard transforms into the attention output layer, it strikes an effective balance between parameter-free orthogonal transformations and model performance. Experiments demonstrate a 25% reduction in parameters per attention block, a 7% decrease in total model parameters, 8.9% lower peak memory usage, and a 6.6% increase in throughput, with comparable or slightly improved downstream task performance. Moreover, training FLOPs utilization improves, and efficiency gains scale monotonically with model size.

๐Ÿ“ Abstract
The dense output projection in multi-head attention scales quadratically with model dimension, contributing significantly to parameter count, memory footprint, and inference cost. We propose replacing this projection with a fixed, parameter-free Walsh–Hadamard transform followed by a lightweight learnable affine rescaling, eliminating approximately 25 percent of attention parameters per block while preserving global cross-head interaction through an orthogonal, norm-preserving transformation. Across different model sizes, we demonstrate that this structured substitution maintains comparable or slightly superior downstream task performance on standard benchmarks, while achieving up to 7 percent aggregate parameter reduction, 8.9 percent peak memory savings, and 6.6 percent throughput improvement at scale, with efficiency gains growing monotonically with model size, batch size, and sequence length. Interestingly, structured Hadamard-based models exhibit a steeper validation-loss curve relative to training FLOPs than their dense counterparts, suggesting more favorable compute utilization during training.
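To make the substitution concrete, the following is a minimal numpy sketch of the idea as the abstract describes it, not the authors' implementation: the dense `W_O` matmul is replaced by a fixed, orthonormal fast Walsh–Hadamard transform plus a learnable per-dimension affine rescale. The function names (`fwht`, `hadamard_output`) and the affine parameterization `gamma * H(x) + beta` are illustrative assumptions.

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform along the last axis.

    Requires the last dimension to be a power of two. Costs O(d log d)
    per vector versus O(d^2) for a dense projection, with no learned
    parameters; the 1/sqrt(d) scaling makes it orthonormal, hence
    norm-preserving and its own inverse.
    """
    d = x.shape[-1]
    assert d & (d - 1) == 0, "last dimension must be a power of two"
    y = x.reshape(-1, d).astype(np.float64)  # astype copies the data
    h = 1
    while h < d:
        # butterfly step: combine blocks of width h pairwise
        for i in range(0, d, 2 * h):
            a = y[:, i:i + h].copy()
            b = y[:, i + h:i + 2 * h].copy()
            y[:, i:i + h] = a + b
            y[:, i + h:i + 2 * h] = a - b
        h *= 2
    return (y / np.sqrt(d)).reshape(x.shape)

def hadamard_output(concat_heads, gamma, beta):
    """Stand-in for the attention output projection: fixed Hadamard
    transform + learnable affine rescale. Trainable parameters are
    just gamma and beta (2*d values) instead of a d*d weight matrix."""
    return gamma * fwht(concat_heads) + beta
```

Because the normalized transform is involutory, `fwht(fwht(x))` recovers `x`, which makes both correctness and the norm-preservation claim easy to check directly.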
Problem

Research questions and friction points this paper is trying to address.

attention output projection
parameter efficiency
multi-head attention
model scaling
memory footprint
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hadamard Transform
Efficient Transformers
Parameter Reduction
Structured Attention
Memory Efficiency