Efficient Whole Slide Pathology VQA via Token Compression

📅 2025-07-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Whole-slide images (WSIs) in digital pathology are extremely large (up to 10,000×10,000 pixels), posing the dual challenges of excessive context length and high computational overhead for existing multimodal large language models (MLLMs) in visual question answering (VQA). To address this, the authors propose Token Compression Pathology LLaVA (TCP-LLaVA), the first learnable token compression architecture tailored for WSI VQA. The method introduces a modality-specific compression module that aggregates thousands of image patch tokens into a small set of highly discriminative compressed tokens, inspired by the [CLS] token mechanism in BERT, to form a joint vision–language representation. Only these compressed tokens are fed into the LLM decoder, drastically reducing input sequence length and GPU memory consumption. Evaluated on ten tumor subtypes from The Cancer Genome Atlas (TCGA), the approach outperforms mainstream MLLM baselines in VQA accuracy while reducing training resource consumption by over 60%.

📝 Abstract
Whole-slide images (WSIs) in pathology can reach up to 10,000 × 10,000 pixels, posing significant challenges for multimodal large language models (MLLMs) due to long context length and high computational demands. Previous methods typically focus on patch-level analysis or slide-level classification using CLIP-based models with multi-instance learning, but they lack the generative capabilities needed for visual question answering (VQA). More recent MLLM-based approaches address VQA by feeding thousands of patch tokens directly into the language model, which leads to excessive resource consumption. To address these limitations, we propose Token Compression Pathology LLaVA (TCP-LLaVA), the first MLLM architecture to perform WSI VQA via token compression. TCP-LLaVA introduces a set of trainable compression tokens that aggregate visual and textual information through a modality compression module, inspired by the [CLS] token mechanism in BERT. Only the compressed tokens are forwarded to the LLM for answer generation, significantly reducing input length and computational cost. Experiments on ten TCGA tumor subtypes show that TCP-LLaVA outperforms existing MLLM baselines in VQA accuracy while reducing training resource consumption by a substantial margin.
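The compression idea described above — a small set of trainable tokens that aggregate information from thousands of patch tokens so that only the compressed set reaches the LLM — can be sketched as cross-attention pooling. The sketch below is a minimal illustration, not the paper's implementation: the single attention head, the dimensions, and all function names are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compress_tokens(patch_tokens, compression_queries, Wq, Wk, Wv):
    """Single-head cross-attention pooling (illustrative sketch).

    Each learnable compression query attends over all patch tokens,
    so only len(compression_queries) tokens need to be forwarded to
    the LLM decoder instead of thousands of patch tokens."""
    q = compression_queries @ Wq                  # (M, d)
    k = patch_tokens @ Wk                         # (N, d)
    v = patch_tokens @ Wv                         # (N, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)  # (M, N)
    return attn @ v                               # (M, d) compressed tokens

rng = np.random.default_rng(0)
d = 64                         # hidden size (illustrative)
N, M = 4096, 32                # 4096 patch tokens -> 32 compressed tokens
patches = rng.normal(size=(N, d))
queries = rng.normal(size=(M, d))  # trainable parameters in the real model
Wq, Wk, Wv = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))
compressed = compress_tokens(patches, queries, Wq, Wk, Wv)
print(compressed.shape)  # (32, 64)
```

The sequence handed to the language model shrinks from N = 4096 tokens to M = 32, which is where the reported savings in input length and GPU memory would come from; in the actual model the queries and projections are learned end-to-end and the module also mixes in textual information.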
Problem

Research questions and friction points this paper is trying to address.

Efficiently handling extremely large whole-slide images for pathology analysis
Reducing the computational cost of multimodal large language models
Improving visual question answering accuracy on pathology images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token compression architecture for WSI VQA
Trainable compression tokens that aggregate visual and textual information
Reduced input sequence length and computational cost
👥 Authors
Weimin Lyu — Stony Brook University — Natural Language Processing, Computer Vision, Vision Language Model
Qingqiao Hu — Stony Brook University — Medical Imaging Analysis, Deep Learning
Kehan Qi — Stony Brook University — Medical Image Analysis
Zhan Shi — Stony Brook University
Wentao Huang — Stony Brook University
Saumya Gupta — Stony Brook University
Chao Chen — Stony Brook University