🤖 AI Summary
This work addresses the quadratic growth, with model dimension, of the parameter count and computational cost of the dense output projection in multi-head attention. To mitigate this, the authors replace the conventional learnable projection with a fixed, parameter-free Walsh–Hadamard transform augmented by a lightweight learnable affine scaling. Because the transform is orthogonal, this substitution preserves global cross-head interaction while substantially reducing complexity. Presented as the first study to integrate structured Hadamard transforms into the attention output layer, the approach strikes an effective balance between parameter-free orthogonal transformations and model performance. Experiments demonstrate a 25% reduction in parameters per attention block, a 7% decrease in total model parameters, 8.9% lower peak memory usage, and a 6.6% increase in throughput, with comparable or slightly improved downstream task performance. Moreover, validation loss falls more steeply per training FLOP than for dense baselines, and the efficiency gains grow monotonically with model size.
📄 Abstract
The dense output projection in multi-head attention scales quadratically with model dimension, contributing significantly to parameter count, memory footprint, and inference cost. We propose replacing this projection with a fixed, parameter-free Walsh–Hadamard transform followed by a lightweight learnable affine rescaling, eliminating approximately 25% of attention parameters per block while preserving global cross-head interaction through an orthogonal, norm-preserving transformation. Across different model sizes, we demonstrate that this structured substitution maintains comparable or slightly superior downstream task performance on standard benchmarks, while achieving up to 7% aggregate parameter reduction, 8.9% peak memory savings, and 6.6% throughput improvement at scale, with efficiency gains growing monotonically with model size, batch size, and sequence length. Interestingly, we observe that structured Hadamard-based models exhibit a steeper validation loss curve relative to training FLOPs than their dense counterparts, suggesting more favorable compute utilization during training.
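The core substitution described above can be sketched in a few lines: the dense d×d output matrix (d² parameters) is replaced by an O(d log d) fast Walsh–Hadamard transform plus a per-dimension scale and shift (2d parameters). The sketch below is a minimal NumPy illustration of the idea, not the authors' implementation; the class and variable names (`HadamardOutput`, `gamma`, `beta`) are our own assumptions.

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform along the last axis.
    The last dimension must be a power of two."""
    x = np.array(x, dtype=float)   # contiguous copy we can mutate
    d = x.shape[-1]
    y = x.reshape(-1, d)           # flat view onto the copy
    h = 1
    while h < d:
        # butterfly update over blocks of size 2h
        for i in range(0, d, 2 * h):
            a = y[:, i:i + h].copy()
            b = y[:, i + h:i + 2 * h].copy()
            y[:, i:i + h] = a + b
            y[:, i + h:i + 2 * h] = a - b
        h *= 2
    return x

class HadamardOutput:
    """Stand-in for the d x d dense attention output projection
    (hypothetical names): a fixed Walsh-Hadamard transform, normalized
    by sqrt(d) so it is orthogonal, followed by a learnable per-dimension
    affine rescale. Parameters: 2d instead of d^2."""
    def __init__(self, d):
        assert d & (d - 1) == 0, "model dim must be a power of two"
        self.d = d
        self.gamma = np.ones(d)    # learnable scale
        self.beta = np.zeros(d)    # learnable shift

    def __call__(self, x):
        # H / sqrt(d) is orthogonal, hence norm-preserving; mixing is
        # global across all dimensions (and therefore across heads).
        return self.gamma * fwht(x) / np.sqrt(self.d) + self.beta
```

Since the normalized transform is its own inverse (H·H = d·I), applying `fwht` twice and dividing by d recovers the input, which makes the orthogonality easy to verify numerically.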