Prompt-Free SAM-Based Multi-Task Framework for Breast Ultrasound Lesion Segmentation and Classification

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Breast ultrasound image analysis remains highly challenging due to low contrast, speckle noise, and diverse lesion morphologies, which complicate tumor segmentation and classification. This work proposes a multi-task deep learning framework that dispenses with SAM's conventional prompting mechanism and instead feeds the high-dimensional features of the Segment Anything Model (SAM) vision encoder directly into a lightweight convolutional or U-Net-style decoder for fully supervised segmentation. A mask-guided attention mechanism enhances the classification branch, enabling synergistic optimization of the segmentation and diagnostic tasks. Evaluated on the PRECISE 2025 dataset, the proposed method achieves a Dice coefficient of 0.887 and a classification accuracy of 92.3%, ranking among the top performers in the challenge.
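The prompt-free decoding path described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the 256-channel 64×64 embedding matches the output shape of SAM's ViT image encoder for a 1024×1024 input, but the two-layer pointwise head, its widths, and the random weights are purely illustrative stand-ins for the paper's lightweight convolutional decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the SAM image encoder output: SAM's ViT encoder maps a
# 1024x1024 image to a 256-channel 64x64 embedding; no prompts are involved.
emb = rng.standard_normal((256, 64, 64)).astype(np.float32)

def conv1x1(x, w, b):
    """Pointwise (1x1) convolution: mixes channels at each spatial location."""
    return np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling along both spatial axes."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical lightweight head: 256 -> 32 channels, ReLU, then a 1-channel
# logit map, upsampled back to the input resolution.
w1 = rng.standard_normal((32, 256)).astype(np.float32) * 0.05
b1 = np.zeros(32, np.float32)
w2 = rng.standard_normal((1, 32)).astype(np.float32) * 0.05
b2 = np.zeros(1, np.float32)

h = np.maximum(conv1x1(emb, w1, b1), 0.0)             # ReLU activation
logits = conv1x1(h, w2, b2)                           # (1, 64, 64)
mask_prob = sigmoid(upsample_nearest(logits, 16))[0]  # (1024, 1024)

print(mask_prob.shape)
```

In the full framework this head would be trained with a supervised segmentation loss (e.g. Dice or cross-entropy) against ground-truth lesion masks, while the frozen or fine-tuned SAM encoder supplies the features.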

📝 Abstract
Accurate tumor segmentation and classification in breast ultrasound (BUS) imaging remain challenging due to low contrast, speckle noise, and diverse lesion morphology. This study presents a multi-task deep learning framework that jointly performs lesion segmentation and diagnostic classification using embeddings from the Segment Anything Model (SAM) vision encoder. Unlike prompt-based SAM variants, our approach employs a prompt-free, fully supervised adaptation where high-dimensional SAM features are decoded through either a lightweight convolutional head or a UNet-inspired decoder for pixel-wise segmentation. The classification branch is enhanced via mask-guided attention, allowing the model to focus on lesion-relevant features while suppressing background artifacts. Experiments on the PRECISE 2025 breast ultrasound dataset, split per class into 80 percent training and 20 percent testing, show that the proposed method achieves a Dice Similarity Coefficient (DSC) of 0.887 and an accuracy of 92.3 percent, ranking among the top entries on the PRECISE challenge leaderboard. These results demonstrate that SAM-based representations, when coupled with segmentation-guided learning, significantly improve both lesion delineation and diagnostic prediction in breast ultrasound imaging.
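The mask-guided attention described in the abstract amounts to weighting the classification features by the predicted lesion mask, so lesion pixels dominate the pooled descriptor and background artifacts are suppressed. The sketch below is a simplified NumPy illustration under that reading; the feature sizes, the toy mask, and the linear benign/malignant head are hypothetical, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inputs: a per-pixel feature map (C, H, W) and a predicted lesion
# probability mask (H, W) coming from the segmentation branch.
C, H, W = 8, 16, 16
feats = rng.standard_normal((C, H, W)).astype(np.float32)
mask = np.zeros((H, W), np.float32)
mask[4:10, 5:12] = 0.9          # hypothetical lesion region

def mask_guided_pool(feats, mask, eps=1e-6):
    """Mask-weighted average pooling: normalises the mask into attention
    weights, so the descriptor is dominated by lesion-region features."""
    w = mask / (mask.sum() + eps)
    return (feats * w[None]).sum(axis=(1, 2))   # shape (C,)

desc = mask_guided_pool(feats, mask)

# Hypothetical linear classifier head over the pooled descriptor
# (e.g. benign vs malignant logits).
W_cls = rng.standard_normal((2, C)).astype(np.float32)
logits = W_cls @ desc
print(desc.shape, logits.shape)
```

Because the mask here is uniform inside the lesion region, the pooled descriptor reduces to the plain mean of the features over that region; a soft, non-uniform mask would instead emphasise the most confident lesion pixels.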
Problem

Research questions and friction points this paper is trying to address.

breast ultrasound
lesion segmentation
lesion classification
speckle noise
low contrast
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt-Free
SAM-Based
Multi-Task Learning
Mask-Guided Attention
Breast Ultrasound Segmentation
Samuel E. Johnny
Carnegie Mellon University Africa, Kigali, Rwanda
Bernes L. Atabonfack
Carnegie Mellon University Africa, Kigali, Rwanda
Israel Alagbe
Carnegie Mellon University Africa, Kigali, Rwanda
Assane Gueye
Associate Teaching Professor
Carnegie Mellon University Africa