2DMamba: Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification

📅 2024-12-01
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Modeling long-range 2D dependencies in gigapixel whole-slide image (WSI) classification remains challenging: existing Transformers suffer from quadratic complexity and spatial distortion due to 1D tokenization, while conventional 2D state space models (SSMs) incur prohibitive computational overhead. This paper introduces the first hardware-aware 2D selective SSM, which processes images natively in 2D, propagates state efficiently across rows and columns, and leverages custom GPU kernels for high throughput. The method preserves linear complexity and high parallelism while faithfully capturing intrinsic 2D spatial continuity. Extending the Mamba architecture, it integrates with VMamba for hierarchical visual representation learning. Evaluated on ten public WSI datasets, the approach gains up to 2.48% in AUC and 3.11% in F1 score; it also improves mIoU by 0.5-0.7 on ADE20K semantic segmentation and top-1 accuracy by 0.2% on ImageNet-1K classification.

📝 Abstract
Efficiently modeling large 2D contexts is essential for various fields including Giga-Pixel Whole Slide Imaging (WSI) and remote sensing. Transformer-based models offer high parallelism but face challenges due to their quadratic complexity for handling long sequences. Recently, Mamba introduced a selective State Space Model (SSM) with linear complexity and high parallelism, enabling effective and efficient modeling of wide contexts in 1D sequences. However, extending Mamba to vision tasks, which inherently involve 2D structures, results in spatial discrepancies due to the limitations of 1D sequence processing. On the other hand, current 2D SSMs inherently model 2D structures, but they suffer from prohibitively slow computation due to the lack of efficient parallel algorithms. In this work, we propose 2DMamba, a novel 2D selective SSM framework that incorporates the 2D spatial structure of images into Mamba, with a highly optimized hardware-aware operator, achieving both spatial continuity and computational efficiency. We validate the versatility of our approach on both WSIs and natural images. Extensive experiments on 10 public datasets for WSI classification and survival analysis show that 2DMamba improves up to 2.48% in AUC, 3.11% in F1 score, 2.47% in accuracy and 5.52% in C-index. Additionally, integrating our method with VMamba for natural imaging yields 0.5 to 0.7 improvements in mIoU on the ADE20K semantic segmentation dataset, and a 0.2% accuracy improvement on the ImageNet-1K classification dataset. Our code is available at https://github.com/AtlasAnalyticsLab/2DMamba.
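The core idea of a 2D SSM scan can be illustrated with a toy recurrence: each hidden state aggregates the states above and to the left, so every position sees its full top-left 2D context in a single pass with linear cost in the number of pixels. The sketch below is an illustrative sequential version only; 2DMamba's actual operator is selective (input-dependent parameters) and runs as a parallel hardware-aware GPU kernel, not this Python loop. All names and the fixed coefficients `a_row`, `a_col`, `b` are hypothetical.

```python
def naive_2d_ssm_scan(x, a_row=0.5, a_col=0.5, b=1.0):
    """Toy 2D state-space scan over a 2D grid (list of lists).

    Recurrence: h[i][j] = a_row * h[i-1][j] + a_col * h[i][j-1] + b * x[i][j],
    i.e. state propagates across rows and columns instead of a flattened
    1D sequence, preserving 2D spatial continuity. O(H*W) total work.
    """
    H, W = len(x), len(x[0])
    h = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            top = h[i - 1][j] if i > 0 else 0.0    # state from the row above
            left = h[i][j - 1] if j > 0 else 0.0   # state from the column to the left
            h[i][j] = a_row * top + a_col * left + b * x[i][j]
    return h

# On a constant 4x4 input, states grow toward the bottom-right corner,
# showing that each position accumulates its entire top-left context.
states = naive_2d_ssm_scan([[1.0] * 4 for _ in range(4)])
```

In contrast, a 1D Mamba scan over raster-flattened patches would make the last patch of one row and the first patch of the next row adjacent in the sequence, the spatial discrepancy the abstract refers to.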
Problem

Research questions and friction points this paper is trying to address.

Efficiently model large 2D contexts for Giga-Pixel Whole Slide Imaging.
Overcome quadratic complexity of Transformer models for long sequences.
Extend Mamba to 2D vision tasks with spatial and computational efficiency.
Innovation

Methods, ideas, or system contributions that make the work stand out.

2DMamba: 2D selective State Space Model
Optimized hardware-aware operator for efficiency
Improved performance on WSI classification, survival analysis, and natural image benchmarks
Jingwei Zhang
Stony Brook University, Stony Brook, NY, USA
Anh Tien Nguyen
Korea University, Seoul, South Korea
Xi Han
Stony Brook University, Stony Brook, NY, USA
Vincent Quoc-Huy Trinh
University of Montreal
Pathology · GI · Liver · Pancreas
Hong Qin
Stony Brook University, Stony Brook, NY, USA
Dimitris Samaras
Stony Brook University
Computer Vision · Machine Learning · Computer Graphics · Medical Imaging
Mahdi S. Hosseini
Assistant Professor, Concordia University, Mila Quebec AI Institute, McGill University
Computer Vision · Deep Learning · Computational Pathology