🤖 AI Summary
Evaluating the performance limits of general-purpose foundation models versus domain-specific fine-tuned models for instance segmentation in dense, heavily occluded orchard scenes.
Method: We conduct a systematic benchmark on the MinneApple dataset, introducing a multi-threshold F1/IoU joint evaluation framework to rigorously compare SAM3 (zero-shot segmentation) against Ultralytics YOLO11 nano/medium/large (fine-tuned instance segmentation).
Contribution/Results: We find that SAM3 exhibits superior mask boundary stability: its performance degrades by only 4 percentage points under varying IoU thresholds, just one-twelfth the degradation of YOLO11. However, under a lenient IoU = 0.15, YOLO11 achieves higher F1 scores (68.9–72.2%) than SAM3 (59.8%). Critically, we quantify that biased IoU threshold selection can distort comparative results by up to 30%, prompting a formal recommendation for standardized evaluation protocols. Our work quantifies the robustness advantage of generalist models, delineates their operational boundaries in high-density, occlusion-prone environments, and advances methodological standardization for instance segmentation evaluation.
📄 Abstract
Deep learning has advanced two fundamentally different paradigms for instance segmentation: specialized models optimized through task-specific fine-tuning and generalist foundation models capable of zero-shot segmentation. This work presents a comprehensive comparison between SAM3 (Segment Anything Model, also called SAMv3) operating in zero-shot mode and three variants of Ultralytics YOLO11 (nano, medium, and large) fine-tuned for instance segmentation. The evaluation is conducted on the MinneApple dataset, a dense benchmark comprising 670 orchard images with 28,179 annotated apple instances, enabling rigorous validation of model behavior under high object density and occlusion. Our analysis shows that the choice of IoU threshold can inflate performance gaps by up to 30%. At the lenient IoU = 0.15 threshold, the YOLO models achieve 68.9%, 72.2%, and 71.9% F1, while SAM3 reaches 59.8% in pure zero-shot mode. However, YOLO exhibits steep degradation (48–50 points) across IoU ranges, whereas SAM3 drops only 4 points, revealing SAM3's roughly 12× superior boundary stability. This highlights SAM3's strength in mask precision versus YOLO11's specialization in detection completeness. We provide open-source code, evaluation pipelines, and methodological recommendations, contributing to a deeper understanding of when specialized fine-tuned models or generalist foundation models are preferable for dense instance segmentation tasks. The project repository is available on GitHub at https://github.com/Applied-AI-Research-Lab/Segment-Anything-Model-SAM3-Zero-Shot-Segmentation-Against-Fine-Tuned-YOLO-Detectors
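To make the multi-threshold F1/IoU evaluation concrete, the sketch below shows one common way to compute F1 over a sweep of IoU thresholds via greedy one-to-one matching of predicted masks to ground-truth masks, together with the "degradation" span (best minus worst F1 across the sweep) reported above. This is an illustrative, minimal implementation under stated assumptions, not the repository's actual pipeline; the function names (`mask_iou`, `f1_at_threshold`, `multi_threshold_f1`) and the exact threshold set are hypothetical.

```python
import numpy as np

def mask_iou(a, b):
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def f1_at_threshold(pred_masks, gt_masks, iou_thr):
    """F1 from greedy one-to-one matching of predictions to ground truth
    at a single IoU threshold."""
    matched_gt = set()
    tp = 0
    for p in pred_masks:
        best_iou, best_j = 0.0, -1
        for j, g in enumerate(gt_masks):
            if j in matched_gt:
                continue
            iou = mask_iou(p, g)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= iou_thr:
            tp += 1
            matched_gt.add(best_j)
    fp = len(pred_masks) - tp   # unmatched predictions
    fn = len(gt_masks) - tp     # unmatched ground-truth instances
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def multi_threshold_f1(pred_masks, gt_masks, thresholds=(0.15, 0.25, 0.50, 0.75)):
    """F1 at each IoU threshold, plus the degradation span (max F1 - min F1).

    A model with stable mask boundaries (like SAM3 in this benchmark) shows a
    small span; a model whose masks only loosely overlap ground truth shows a
    large one."""
    scores = {t: f1_at_threshold(pred_masks, gt_masks, t) for t in thresholds}
    degradation = max(scores.values()) - min(scores.values())
    return scores, degradation
```

A loose mask that passes at IoU = 0.15 but fails at 0.50 contributes the full F1 gap to the degradation span, which is exactly the effect the benchmark attributes to threshold choice.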