Beyond Real Weights: Hypercomplex Representations for Stable Quantization

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive parameter count and poor deployment efficiency of multimodal large language models (MLLMs), which stem from high-dimensional vision-language alignment, this paper proposes a progressive hypercomplex reparameterization method: dense feed-forward layers are gradually replaced with parameterized hypercomplex multiplication (PHM) layers, while a residual interpolation schedule and a lightweight reconstruction loss, combined with knowledge distillation, guide the structural compression during training. The approach preserves the original model's multimodal alignment capability and behavioral fidelity while natively supporting quantization. Experiments on several mainstream vision-language models show that the compressed models substantially reduce parameter count and inference latency, matching the original performance while accelerating inference by up to 2.1×. The core contribution is the first systematic application of progressive hypercomplex substitution to MLLM efficiency optimization, striking a balanced trade-off among accuracy, computational efficiency, and ease of deployment.
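The PHM layers mentioned above replace a dense weight matrix with a sum of Kronecker products, which is where the parameter savings come from. A minimal NumPy sketch of that parameterization follows; the layer sizes and hypercomplex dimension `n` are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def phm_weight(A_list, S_list):
    """Build a (d_out, d_in) weight as a sum of Kronecker products,
    following the PHM parameterization W = sum_i A_i (x) S_i."""
    return sum(np.kron(A, S) for A, S in zip(A_list, S_list))

n = 4                  # hypercomplex dimension (illustrative choice)
d_in, d_out = 64, 64   # layer sizes, chosen divisible by n

rng = np.random.default_rng(0)
A = [rng.standard_normal((n, n)) for _ in range(n)]
S = [rng.standard_normal((d_out // n, d_in // n)) for _ in range(n)]
W = phm_weight(A, S)   # same shape as the dense weight it replaces

dense_params = d_out * d_in                               # 4096
phm_params = n * n * n + n * (d_out // n) * (d_in // n)   # 1088
```

For large layers the stored parameters shrink by roughly a factor of `n`, while the effective weight keeps the full `(d_out, d_in)` shape, which is what makes the substitution architecture-compatible.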

📝 Abstract
Multimodal large language models (MLLMs) require large parameter capacity to align high-dimensional visual features with linguistic representations, making them computationally heavy and difficult to deploy efficiently. We introduce a progressive reparameterization strategy that compresses these models by gradually replacing dense feed-forward network blocks with compact Parameterized Hypercomplex Multiplication (PHM) layers. A residual interpolation schedule, together with lightweight reconstruction and knowledge distillation losses, ensures that the PHM modules inherit the functional behavior of their dense counterparts during training. This transition yields substantial parameter and FLOP reductions while preserving strong multimodal alignment, enabling faster inference without degrading output quality. We evaluate the approach on multiple vision-language models (VLMs). Our method maintains performance comparable to the base models while delivering significant reductions in model size and inference latency. Progressive PHM substitution thus offers an architecture-compatible path toward more efficient multimodal reasoning and complements existing low-bit quantization techniques.
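The residual interpolation schedule in the abstract can be read as a soft handover: each block outputs a blend of the dense branch and its PHM replacement, with the mixing coefficient ramping from 0 to 1 over training. The linear ramp below is an illustrative assumption; the paper's exact schedule is not reproduced here:

```python
import numpy as np

def alpha_schedule(step, total_steps):
    """Linear ramp from 0 (pure dense) to 1 (pure PHM).
    An illustrative choice, not the paper's exact schedule."""
    return min(1.0, step / total_steps)

def interpolated_forward(x, dense_fn, phm_fn, alpha):
    """Residual interpolation: blend the dense block and its PHM
    replacement so the PHM module gradually takes over."""
    return (1.0 - alpha) * dense_fn(x) + alpha * phm_fn(x)

# Stand-in branches for demonstration only.
x = np.ones(4)
dense_fn = lambda v: 2.0 * v
phm_fn = lambda v: 3.0 * v

y_start = interpolated_forward(x, dense_fn, phm_fn, alpha_schedule(0, 100))
y_end = interpolated_forward(x, dense_fn, phm_fn, alpha_schedule(100, 100))
```

At step 0 the block behaves exactly like the original dense layer, and at the end of the schedule the dense branch can be dropped entirely, leaving only the compact PHM weights at inference time.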
Problem

Research questions and friction points this paper is trying to address.

Compress multimodal language models to reduce computational load
Replace dense network blocks with hypercomplex layers for efficiency
Maintain performance while decreasing model size and inference latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive reparameterization with hypercomplex multiplication layers
Residual interpolation and lightweight losses for functional inheritance
Architecture-compatible compression for efficient multimodal reasoning
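The "lightweight losses for functional inheritance" bullet can be made concrete as two terms: a reconstruction loss pulling the PHM output toward the dense output, and a standard temperature-softened distillation loss on the logits. The exact formulation is not given in this summary, so the NumPy sketch below is an assumption using common choices (MSE and KL divergence):

```python
import numpy as np

def reconstruction_loss(phm_out, dense_out):
    """Mean squared error between PHM and dense activations."""
    return float(np.mean((phm_out - dense_out) ** 2))

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    a common knowledge-distillation objective (assumed, not quoted)."""
    def softmax_T(z):
        z = (z - z.max(axis=-1, keepdims=True)) / T
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p = softmax_T(teacher_logits)
    q = softmax_T(student_logits)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Sanity check: identical logits give (near-)zero distillation loss.
logits = np.array([1.0, 2.0, 0.5])
zero_kd = distillation_loss(logits, logits)
```

During the interpolation phase both terms keep the PHM modules behaviorally close to the dense blocks they replace, which is what the summary refers to as preserving behavioral fidelity.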
Jawad Ibn Ahad
Artificial Intelligence Department, RobotBulls Labs, Geneva, Switzerland
Maisha Rahman
Artificial Intelligence Department, RobotBulls Labs, Geneva, Switzerland
Amrijit Biswas
Artificial Intelligence Department, RobotBulls Labs, Geneva, Switzerland
Muhammad Rafsan Kabir
Department of Electrical and Computer Engineering, North South University
machine learning, natural language processing, computer vision
Robin Krambroeckers
Artificial Intelligence Department, RobotBulls Labs, Geneva, Switzerland
Sifat Momen
Machine Intelligence Lab (MILab), North South University, Bangladesh
Nabeel Mohammed
North South University
Natural Language Processing, Computer Vision, Deep Learning
Shafin Rahman
Associate Professor, ECE, North South University, Bangladesh
Computer Vision, Machine Learning