A Multicenter Benchmark of Multiple Instance Learning Models for Lymphoma Subtyping from HE-stained Whole Slide Images

📅 2025-12-16
🤖 AI Summary
This study addresses the clinical challenge of delayed lymphoma subtype diagnosis, which stems from reliance on costly equipment and scarce expert pathologists. We propose a deep learning-based diagnostic paradigm that leverages routine hematoxylin-eosin (HE)-stained whole-slide images (WSIs). To this end, we construct the first multicenter HE-WSI benchmark dataset, covering four lymphoma subtypes and normal tissue, and systematically evaluate pathology foundation models (including H-optimus-1 and Virchow2) combined with AB-MIL and TransMIL multiple instance learning (MIL) aggregators at 10×, 20×, and 40× magnification. The magnification study shows that 40× alone achieves the best performance, with no gains from cross-magnification fusion. In-distribution balanced accuracy exceeds 80% but drops sharply to around 60% under out-of-distribution settings, exposing critical generalization limitations. We publicly release a full-stack evaluation toolkit to advance clinically deployable AI-assisted pathological diagnosis.

📝 Abstract
Timely and accurate lymphoma diagnosis is essential for guiding cancer treatment. Standard diagnostic practice combines hematoxylin and eosin (HE)-stained whole slide images with immunohistochemistry, flow cytometry, and molecular genetic tests to determine lymphoma subtypes, a process that requires costly equipment and skilled personnel and causes treatment delays. Deep learning methods could assist pathologists by extracting diagnostic information from routinely available HE-stained slides, yet comprehensive benchmarks for lymphoma subtyping on multicenter data are lacking. In this work, we present the first multicenter lymphoma benchmarking dataset covering four common lymphoma subtypes and healthy control tissue. We systematically evaluate five publicly available pathology foundation models (H-optimus-1, H0-mini, Virchow2, UNI2, Titan) combined with attention-based (AB-MIL) and transformer-based (TransMIL) multiple instance learning aggregators across three magnifications (10x, 20x, 40x). On in-distribution test sets, models achieve multiclass balanced accuracies exceeding 80% across all magnifications, with all foundation models performing similarly and both aggregation methods showing comparable results. The magnification study reveals that 40x resolution is sufficient, with no performance gains from higher resolutions or cross-magnification aggregation. However, on out-of-distribution test sets, performance drops substantially to around 60%, highlighting significant generalization challenges. To advance the field, larger multicenter studies covering additional rare lymphoma subtypes are needed. We provide an automated benchmarking pipeline to facilitate such future research.
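The attention-based MIL aggregation evaluated in the paper (AB-MIL) pools tile-level embeddings into a single slide-level representation via learned attention weights. A minimal plain-Python sketch of this pooling step, assuming tile embeddings have already been extracted by a foundation model; the parameter matrices `V` and `w` are hypothetical stand-ins for learned weights, not the authors' implementation:

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of floats
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(instances, V, w):
    """AB-MIL-style pooling: score each tile embedding with a small
    tanh attention head, softmax the scores, and return the
    attention-weighted mean embedding plus the weights themselves."""
    scores = []
    for h in instances:
        # hidden = tanh(V h), then a scalar score w . hidden
        hidden = [math.tanh(sum(V[i][j] * h[j] for j in range(len(h))))
                  for i in range(len(V))]
        scores.append(sum(w[i] * hidden[i] for i in range(len(w))))
    alphas = softmax(scores)
    dim = len(instances[0])
    pooled = [sum(a * h[j] for a, h in zip(alphas, instances))
              for j in range(dim)]
    return pooled, alphas

# two toy 2-d tile embeddings with identity attention parameters
pooled, alphas = attention_pool([[1.0, 0.0], [0.0, 1.0]],
                                V=[[1.0, 0.0], [0.0, 1.0]],
                                w=[1.0, 1.0])
```

The slide-level vector `pooled` would then feed a classifier head over the four subtypes plus the healthy class, while the attention weights `alphas` indicate which tiles drove the prediction.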
Problem

Research questions and friction points this paper is trying to address.

Benchmarking multiple instance learning models for lymphoma subtyping
Evaluating model performance across multicenter data and magnifications
Addressing generalization challenges in out-of-distribution test sets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multicenter dataset for lymphoma subtyping benchmarking
Evaluating foundation models with MIL aggregators across magnifications
Automated pipeline for scalable lymphoma research
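For reference, the balanced accuracy reported in the abstract is the mean of per-class recall, which keeps a dominant subtype from masking failures on rare classes. A minimal sketch of the metric (not the authors' evaluation code):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Multiclass balanced accuracy: average recall over classes,
    so every lymphoma subtype counts equally regardless of prevalence."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# a classifier that always predicts class "A" scores 75% raw
# accuracy here, but balanced accuracy exposes it at 50%
score = balanced_accuracy(["A", "A", "A", "B"], ["A", "A", "A", "A"])
```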
Rao Muhammad Umer
Institute of AI for Health, Helmholtz Munich, Munich, Germany
Daniel Sens
Institute of AI for Health, Helmholtz Munich, Munich, Germany
Jonathan Noll
Institute of AI for Health, Helmholtz Munich, Munich, Germany
Christian Matek
Institute of AI for Health, Helmholtz Munich, Munich, Germany; Department of Medicine III, Ludwig-Maximilian-University Hospital, Munich, Germany; Institute of Pathology, Erlangen, Germany
Lukas Wolfseher
University of Kiel, Kiel, Germany
Rainer Spang
University of Regensburg; Bioinformatics, Biostatistics, Tumor Biology
Ralf Huss
Institute for Digital Medicine, University Hospital, Augsburg, Germany
Johannes Raffler
Institute for Digital Medicine, University Hospital, Augsburg, Germany
Sarah Reinke
Institute of Pathology, University Hospital, Kiel, Germany
Wolfram Klapper
Institute of Pathology, University Hospital, Kiel, Germany
Katja Steiger
Technical University of Munich, Munich, Germany
Kristina Schwamborn
Technical University of Munich, Munich, Germany
Carsten Marr
Institute of AI for Health @ Helmholtz Munich & Clinics @ LMU München; AI for Biomed & Health