🤖 AI Summary
Large language models face escalating parameter counts that outpace the growth of computational resources, necessitating highly efficient compression techniques. Method: This paper proposes a "hyper-compression" paradigm that reformulates model compression as a parameter representation problem, leveraging the trajectory length of low-dimensional ergodic dynamical systems based on irrational windings—termed *hyperfunctions*—to implicitly encode high-dimensional weight tensors. It introduces, for the first time, irrational-rotation-number dynamical systems as parameter generators and theoretically derives a reconstruction error bound, moving beyond the conventional pruning, quantization, and distillation frameworks. Contribution/Results: Combined with lightweight engineering innovations—including block-wise mapping and cache-aware optimization—the method compresses LLaMA2-7B within one hour without retraining; inference overhead remains bounded, performance degradation is under 1%, and compression efficiency matches that of int4 quantization.
📝 Abstract
The rapid growth in the size of large models has far outpaced that of computing resources. To bridge this gap, encouraged by the parsimonious relationship between genotype and phenotype in the brain's growth and development, we propose the so-called hyper-compression, which turns model compression into the problem of parameter representation via a hyperfunction. Specifically, it is known that the trajectory of some low-dimensional dynamical systems can eventually fill a high-dimensional space. Thus, hyper-compression, using these dynamical systems as hyperfunctions, represents the parameters of the target network by their corresponding composition number or trajectory length. This suggests a novel mechanism for model compression, substantially different from the existing pruning, quantization, distillation, and decomposition. Along this direction, we methodologically identify a suitable dynamical system with the irrational winding as the hyperfunction and theoretically derive its associated error bound. Next, guided by our theoretical insights, we propose several engineering twists to make hyper-compression pragmatic and effective. Lastly, systematic and comprehensive experiments confirm that hyper-compression enjoys the following **PNAS** merits: 1) **P**referable compression ratio; 2) **N**o post-hoc retraining; 3) **A**ffordable inference time; and 4) **S**hort compression time. It compresses LLaMA2-7B in an hour and achieves close-to-int4-quantization performance, without retraining and with a performance drop of less than 1%. We have open-sourced our code at https://github.com/Juntongkuki/Hyper-Compression.git for free download and evaluation.
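The core mechanism described above, representing several weights by the trajectory length of an irrational winding, can be illustrated with a toy sketch. This is not the paper's actual hyperfunction, block-wise mapping, or error-bounded scheme; the step directions (fractional parts of √2 and √3, chosen so their ratio is irrational) and the brute-force search are illustrative assumptions only:

```python
import numpy as np

# Toy illustration: the winding theta(n) = (n*a1 mod 1, n*a2 mod 1) is dense
# in the unit square when a1/a2 is irrational, so a pair of weights in [0,1)
# can be approximated by a single integer "trajectory length" n.
A = np.array([np.sqrt(2) % 1.0, np.sqrt(3) % 1.0])  # assumed step directions

def compress_pair(w, n_max=100_000):
    """Return the step count n whose torus point best matches the pair w."""
    n = np.arange(1, n_max + 1)
    points = (n[:, None] * A) % 1.0              # sampled trajectory points
    best = np.argmin(np.abs(points - w).max(axis=1))
    return int(n[best])

def decompress_pair(n):
    """Reconstruct the weight pair from the stored integer n."""
    return (n * A) % 1.0

w = np.array([0.314, 0.777])                     # two "weights" to encode
n = compress_pair(w)                             # one integer replaces them
w_hat = decompress_pair(n)
# By density of the winding, the error shrinks as n_max grows.
assert np.abs(w_hat - w).max() < 0.05
```

Storing one integer per group of weights is what yields compression; the paper's contribution lies in choosing the dynamical system so that this reconstruction error is provably bounded and the search is far cheaper than the brute force shown here.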