LoC-Path: Learning to Compress for Pathology Multimodal Large Language Models

📅 2025-12-04
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address the extreme sparsity of diagnostically relevant regions in Whole-Slide Images (WSIs), which makes brute-force slide processing prohibitively expensive for Multimodal Large Language Models (MLLMs), this paper proposes an efficient WSI–text joint modeling framework. The authors first identify significant semantic redundancy among pathological image patches and accordingly design a Sparse Token Merger and a Cross-Attention Routing Adapter. Combined with an MAE-pretrained resampler and a Token Importance Scorer, the approach compresses visual features and aligns modalities while preserving diagnostically critical information. Experiments show that the method reduces FLOPs by 62% and GPU memory usage by 58% on average without sacrificing performance, matching state-of-the-art full-slide multimodal models across multiple benchmarks.

📝 Abstract
Whole Slide Image (WSI) understanding is fundamentally challenging due to its gigapixel scale and the extreme sparsity of diagnostically relevant regions. Unlike human experts who primarily rely on key areas to arrive at a diagnosis, existing slide-level multimodal large language models (MLLMs) for pathology rely on heavy slide-level encoders that process thousands of patch features in a brute-force manner, resulting in excessive computational cost. In this work, we revisit the WSI-language modeling paradigm and show that tile-level features exhibit strong global and local redundancy, whereas only a small subset of tiles are truly task-relevant. Motivated by this observation, we introduce an efficient MLLM framework, called LoC-Path, that replaces the expensive slide-level encoder with redundancy-reducing modules. We first design a Sparse Token Merger (STM) and an MAE-pretrained resampler to remove local redundancy and compress globally redundant tile tokens into a compact slide-level representation set. We then propose a Cross-Attention Routing Adapter (CARA) and a Token Importance Scorer (TIS) to integrate the compressed visual representation with the language model in a computation-efficient manner. Extensive experiments demonstrate that our approach achieves performance comparable to existing state-of-the-art whole-slide MLLMs, while requiring significantly lower computation and memory.
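The resampling idea in the abstract, compressing thousands of tile tokens into a small fixed set of slide-level tokens via cross-attention, can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' implementation: the learned `latents` query matrix, the single attention head, and the dimensions are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def resample_tiles(tile_tokens, latents):
    """Compress n tile tokens into m latent slide tokens via cross-attention.

    tile_tokens: (n, d) patch features extracted from the WSI.
    latents:     (m, d) learned queries, with m << n (e.g. m = 32).
    Returns an (m, d) compact slide-level representation.
    """
    d = tile_tokens.shape[1]
    attn = softmax(latents @ tile_tokens.T / np.sqrt(d), axis=-1)  # (m, n)
    return attn @ tile_tokens

# Example: 1,000 tile features compressed to 32 slide tokens.
rng = np.random.default_rng(0)
tiles = rng.normal(size=(1000, 16))
latents = rng.normal(size=(32, 16))
slide_repr = resample_tiles(tiles, latents)  # shape (32, 16)
```

Because the number of latent queries is fixed, the cost of the downstream language model no longer scales with the thousands of tiles in a gigapixel slide.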
Problem

Research questions and friction points this paper is trying to address.

Compress gigapixel pathology images efficiently
Reduce computational cost of slide-level MLLMs
Identify task-relevant tiles in sparse diagnostic regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Token Merger reduces local redundancy
MAE-pretrained resampler compresses global tile tokens
Cross-Attention Routing Adapter integrates visual features efficiently
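A minimal sketch of how token importance scoring and redundancy-aware merging might interact, in the spirit of the Token Importance Scorer and Sparse Token Merger above. This is illustrative only: the dot-product scoring query, the cosine-similarity threshold, and the greedy running-average merge are assumptions, not the paper's actual design.

```python
import numpy as np

def importance_scores(tokens, query):
    # Score each tile token by similarity to a (here fixed) scoring query.
    return tokens @ query

def merge_redundant(tokens, sim_threshold=0.95):
    # Greedily fold each token into an existing cluster mean when cosine
    # similarity exceeds the threshold; otherwise start a new cluster.
    kept, counts = [], []
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    for t, n in zip(tokens, normed):
        for i, k in enumerate(kept):
            if n @ (k / np.linalg.norm(k)) > sim_threshold:
                counts[i] += 1
                kept[i] = k + (t - k) / counts[i]  # running average
                break
        else:
            kept.append(t.copy())
            counts.append(1)
    return np.stack(kept)

def compress_slide(tokens, query, keep_k=64, sim_threshold=0.95):
    # Keep the top-k most task-relevant tokens, then merge near-duplicates.
    top = np.argsort(importance_scores(tokens, query))[::-1][:keep_k]
    return merge_redundant(tokens[top], sim_threshold)

# Example: 300 tokens that are noisy copies of 3 distinct prototypes
# collapse to at most 3 merged tokens after compression.
rng = np.random.default_rng(0)
base = rng.normal(size=(3, 8))
tokens = np.repeat(base, 100, axis=0) + rng.normal(scale=1e-3, size=(300, 8))
compact = compress_slide(tokens, rng.normal(size=8), keep_k=50)
```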
Qingqiao Hu
Stony Brook University
Medical Imaging Analysis, Deep Learning

Weimin Lyu
Stony Brook University
Natural Language Processing, Computer Vision, Vision Language Model

Meilong Xu
Stony Brook University
Machine Learning, Computer Vision, Topological Data Analysis

Kehan Qi
Stony Brook University
Medical Image Analysis

Xiaoling Hu
Harvard Medical School, Boston, MA, USA

Saumya Gupta
Stony Brook University, Stony Brook, NY, USA

Jiawei Zhou
Stony Brook University, Stony Brook, NY, USA

Chao Chen
Stony Brook University, Stony Brook, NY, USA