🤖 AI Summary
To address computational resource constraints in deploying medical multimodal large language models (MLLMs), this paper proposes an efficient compression pipeline for healthcare-adapted LLaVA models, comprising three stages: structured pruning, supervised fine-tuning (SFT), and activation-aware quantization. We introduce a novel layer-selection strategy for pruning, guided by inter-layer activation distributions, and tightly couple it with activation-aware quantization to preserve semantic sensitivity while enhancing compression fidelity. Experimental results demonstrate that our method reduces GPU memory consumption of the 7B-parameter model by 70%, enabling successful deployment on devices with only 4 GB VRAM. On multiple medical visual question answering and radiology report generation benchmarks, it achieves a 4% absolute accuracy improvement over conventional pruning-plus-quantization baselines, significantly advancing the efficiency–accuracy trade-off in resource-constrained clinical AI deployment.
📝 Abstract
Multimodal Large Language Models (MLLMs) hold substantial promise for the medical domain, but their computational cost necessitates efficient compression techniques. This paper evaluates the impact of structured pruning and activation-aware quantization on a fine-tuned LLaVA model for medical applications. We propose a novel layer-selection method for pruning, analyze different quantization techniques, and assess the performance trade-offs of a prune–SFT–quantize pipeline. Our method enables an MLLM with 7B parameters to run within 4 GB of VRAM, reducing memory usage by 70% while achieving 4% higher model performance than traditional pruning and quantization at the same compression ratio.
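The summary's layer-selection idea, ranking transformer layers by their activation statistics and pruning the least informative ones, can be illustrated with a minimal sketch. Note this is a hypothetical scoring rule (mean absolute activation) standing in for the paper's actual inter-layer activation-distribution criterion; the function name and interface are assumptions for illustration, not the authors' code.

```python
import numpy as np

def select_layers_to_prune(layer_activations, keep_ratio=0.75):
    """Rank layers by mean |activation| on calibration data and return
    the indices of the least-active layers to remove.

    Hypothetical stand-in for an activation-distribution-guided
    importance score; real criteria may use richer statistics.
    """
    scores = [float(np.abs(a).mean()) for a in layer_activations]
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    # Sort ascending: least-active layers come first and get pruned.
    order = np.argsort(scores)
    n_prune = len(scores) - n_keep
    return sorted(order[:n_prune].tolist())

# Toy example: 8 layers of calibration activations, with layer 2
# scaled near zero so it is clearly the least informative.
rng = np.random.default_rng(0)
acts = [rng.normal(0.0, 1.0, size=(4, 16)) for _ in range(8)]
acts[2] *= 0.01
print(select_layers_to_prune(acts, keep_ratio=0.75))
```

In a real prune–SFT–quantize pipeline, the surviving layers would then be fine-tuned (SFT) to recover accuracy before activation-aware quantization is applied.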