🤖 AI Summary
Whole-slide images (WSIs) in digital pathology are extremely large (up to 10K×10K pixels), posing dual challenges of excessive context length and high computational overhead for existing multimodal large language models (MLLMs) in visual question answering (VQA). To address this, we propose the first learnable token compression architecture tailored for WSI-VQA. Our method introduces a modality-specific compression module that aggregates thousands of image patch tokens into a small set of highly discriminative compressed tokens, augmented by a BERT-style [CLS] mechanism for joint vision–language representation. Only these compressed tokens are fed into the LLM decoder, drastically reducing input sequence length and GPU memory consumption. Evaluated on ten tumor subtypes from The Cancer Genome Atlas (TCGA), our approach outperforms mainstream MLLM baselines in VQA accuracy while reducing training resource consumption by over 60%.
📝 Abstract
Whole-slide images (WSIs) in pathology can reach up to 10,000 × 10,000 pixels, posing significant challenges for multimodal large language models (MLLMs) due to long context length and high computational demands. Previous methods typically focus on patch-level analysis or slide-level classification using CLIP-based models with multi-instance learning, but they lack the generative capabilities needed for visual question answering (VQA). More recent MLLM-based approaches address VQA by feeding thousands of patch tokens directly into the language model, which leads to excessive resource consumption. To address these limitations, we propose Token Compression Pathology LLaVA (TCP-LLaVA), the first MLLM architecture to perform WSI VQA via token compression. TCP-LLaVA introduces a set of trainable compression tokens that aggregate visual and textual information through a modality compression module, inspired by the [CLS] token mechanism in BERT. Only the compressed tokens are forwarded to the LLM for answer generation, significantly reducing input length and computational cost. Experiments on ten TCGA tumor subtypes show that TCP-LLaVA outperforms existing MLLM baselines in VQA accuracy while reducing training resource consumption by a substantial margin.
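The abstract does not spell out the internals of the modality compression module, but the described idea — a small set of trainable compression tokens that pool information from thousands of patch tokens, [CLS]-style — can be sketched as cross-attention with learnable queries. The sketch below is a minimal numpy illustration under that assumption; all names (`compress_tokens`, the token counts, the dimension `d`) are hypothetical, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compress_tokens(patch_tokens, query_tokens, d):
    """Pool many patch tokens into a few compressed tokens.

    Each learnable query token attends over all patch tokens
    (scaled dot-product attention) and returns a weighted sum,
    analogous to how BERT's [CLS] token summarizes a sequence.
    """
    attn = softmax(query_tokens @ patch_tokens.T / np.sqrt(d), axis=-1)
    return attn @ patch_tokens  # shape: (num_queries, d)

rng = np.random.default_rng(0)
d = 64
patch_tokens = rng.standard_normal((4096, d))  # e.g. 4096 WSI patch embeddings
query_tokens = rng.standard_normal((32, d))    # 32 trainable compression tokens

compressed = compress_tokens(patch_tokens, query_tokens, d)
print(compressed.shape)  # (32, 64)
```

Only the 32 compressed tokens (instead of 4096 patch tokens) would then be passed to the LLM decoder, which is where the sequence-length and memory savings come from; in the real model the queries and attention weights would be trained end-to-end with the VQA objective.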