🤖 AI Summary
To address the low efficiency and subjectivity inherent in manual phenotypic assessment of oysters, this paper proposes a high-precision instance segmentation method targeting four key anatomical components: shell, adductor muscle, gonad, and mantle. We introduce a novel multi-network ensemble framework integrating Mask R-CNN, YOLOv8-Seg, and TransUNet, augmented with a global-local hierarchical attention mechanism and a cross-scale feature alignment module. An adaptive weight fusion strategy further enhances robustness—particularly for small objects (e.g., gonads) and occluded instances. Evaluated on three real-world aquaculture datasets, our method achieves a mean Average Precision (mAP) of 86.7%, with gonad Intersection-over-Union (IoU) improved by 12.3% over baseline methods and an average mAP gain of 9.5% compared to individual models. The framework demonstrates strong generalization capability and practical suitability for industrial deployment.
📝 Abstract
Phenotype segmentation is pivotal for analysing the visual features of living organisms, deepening our understanding of their characteristics. For oysters, meat quality assessment is paramount and focuses on the shell, meat, gonad, and muscle components. Traditional manual inspection is time-consuming and subjective, prompting the adoption of machine vision for efficient, objective evaluation. We explore machine vision's capacity to segment oyster components, leading to a multi-network ensemble approach with a global-local hierarchical attention mechanism. This approach integrates predictions from diverse models and addresses the challenges posed by components of varying scale, ensuring robust instance segmentation across all components. Finally, we comprehensively evaluate the proposed method on several real-world datasets, highlighting its efficacy and robustness for oyster phenotype segmentation.
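The adaptive weight fusion strategy mentioned above combines the mask predictions of the individual networks (Mask R-CNN, YOLOv8-Seg, TransUNet) according to how confident each model is. The summary does not spell out the fusion rule, so the sketch below is an illustrative assumption: per-model mask probability maps are blended with softmax-normalized confidence weights, and the function name `adaptive_weight_fusion` is hypothetical, not the paper's API.

```python
import numpy as np

def adaptive_weight_fusion(prob_maps, confidences):
    """Fuse per-model mask probability maps with confidence-derived weights.

    prob_maps: list of (H, W) arrays in [0, 1], one per model.
    confidences: per-model confidence scores; a softmax turns them into
    weights so that stronger models dominate the fused mask.
    (Illustrative sketch only -- the paper's actual weighting rule may differ.)
    """
    conf = np.asarray(confidences, dtype=np.float64)
    weights = np.exp(conf - conf.max())   # subtract max for numerical stability
    weights /= weights.sum()              # softmax: weights sum to 1
    fused = np.zeros_like(prob_maps[0], dtype=np.float64)
    for w, p in zip(weights, prob_maps):
        fused += w * p                    # weighted average of probability maps
    return fused

# Toy example: three constant 2x2 "mask" probability maps from three models
maps = [np.full((2, 2), v) for v in (0.9, 0.6, 0.3)]
fused = adaptive_weight_fusion(maps, confidences=[2.0, 1.0, 0.5])
mask = fused > 0.5                        # binarize the fused probabilities
```

Because the weights sum to one, the fused map stays a valid probability map, and a model with low confidence (e.g. on a small, occluded gonad region) contributes proportionally less to the final mask.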