🤖 AI Summary
To address the limitations of conventional fluorescent labeling—namely, structural disruption of organoids and incompatibility with non-invasive, long-term dynamic tracking—this paper proposes a fully automated, non-invasive segmentation and tracking framework. Methodologically, it introduces a dual-branch encoder integrating CNN and Transformer architectures to capture multi-scale complementary features; a learnable Gaussian band-pass fusion module for adaptive, weighted integration of local details and global context; and a bidirectional cross-fusion decoder to enhance hierarchical feature interaction and deformation robustness. Evaluated on the SROrga dataset, the method achieves significant improvements in segmentation accuracy and temporal consistency, enabling stable quantification of morphological dynamics throughout organoid growth. This work provides an efficient, reliable, and automated tool for live organoid analysis.
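The paper's code for the learnable Gaussian band-pass fusion is not reproduced here, so the following PyTorch sketch only illustrates one plausible way such a module could combine a CNN branch (local detail) with a Transformer branch (global context). The class name, the per-channel center/width parameterisation, and the exact combination rule are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.fft


class LearnableGaussianBandPassFusion(nn.Module):
    """Illustrative sketch (not the authors' code): fuse a CNN feature map
    (local detail) with a Transformer feature map (global context) by
    weighting their frequency content with a learnable Gaussian band-pass
    filter before a 1x1 projection."""

    def __init__(self, channels: int):
        super().__init__()
        # Assumed parameterisation: one learnable center frequency and
        # bandwidth per channel for the Gaussian band-pass response.
        self.center = nn.Parameter(torch.full((channels, 1, 1), 0.25))
        self.sigma = nn.Parameter(torch.full((channels, 1, 1), 0.15))
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def _bandpass(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Radial frequency grid (per-axis Nyquist at 0.5).
        fy = torch.fft.fftfreq(h, device=x.device).view(h, 1)
        fx = torch.fft.fftfreq(w, device=x.device).view(1, w)
        radius = torch.sqrt(fy ** 2 + fx ** 2)
        # Gaussian band-pass response centred at self.center, width self.sigma.
        gauss = torch.exp(-((radius - self.center) ** 2) / (2 * self.sigma ** 2 + 1e-8))
        spec = torch.fft.fft2(x)
        return torch.fft.ifft2(spec * gauss).real

    def forward(self, cnn_feat: torch.Tensor, trans_feat: torch.Tensor) -> torch.Tensor:
        # Assumed fusion rule: keep the band-passed part of the CNN branch
        # and the complementary (band-rejected) part of the Transformer branch.
        local = self._bandpass(cnn_feat)
        global_ctx = trans_feat - self._bandpass(trans_feat)
        return self.proj(local + global_ctx)


if __name__ == "__main__":
    fusion = LearnableGaussianBandPassFusion(channels=64)
    cnn_feat = torch.randn(2, 64, 32, 32)    # e.g. a CNN stage output
    trans_feat = torch.randn(2, 64, 32, 32)  # e.g. a Transformer stage output, resized
    print(fusion(cnn_feat, trans_feat).shape)  # torch.Size([2, 64, 32, 32])
```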
📝 Abstract
Organoids replicate organ structure and function, playing a crucial role in fields such as tumor treatment and drug screening. Their shape and size can indicate their developmental status, but traditional fluorescence labeling methods risk compromising their structure. This paper therefore proposes an automated, non-destructive approach to organoid segmentation and tracking. We introduce LGBP-OrgaNet, a deep learning-based system that accurately segments, tracks, and quantifies organoids. The model leverages complementary information extracted from CNN and Transformer modules and introduces a novel feature fusion module, Learnable Gaussian Band Pass Fusion, to merge data from the two branches. Additionally, the decoder employs a Bidirectional Cross Fusion Block to fuse multi-scale features and completes decoding through progressive concatenation and upsampling. LGBP-OrgaNet demonstrates satisfactory segmentation accuracy and robustness on the SROrga organoid segmentation dataset, providing a potent tool for organoid research.
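The abstract describes a Bidirectional Cross Fusion Block that fuses multi-scale decoder features, followed by progressive concatenation and upsampling. The sketch below is only one plausible reading of that description, assuming a shallow (high-resolution) and a deep (low-resolution) feature map that exchange information in both directions through 1x1 projections before being concatenated; the class name, channel counts, and fusion order are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalCrossFusionBlock(nn.Module):
    """Illustrative sketch (not the authors' code): exchange information
    between a high-resolution (shallow) and a low-resolution (deep) feature
    map in both directions, then concatenate and fuse as one decoder step."""

    def __init__(self, high_ch: int, low_ch: int, out_ch: int):
        super().__init__()
        self.low_to_high = nn.Conv2d(low_ch, high_ch, kernel_size=1)  # deep -> shallow
        self.high_to_low = nn.Conv2d(high_ch, low_ch, kernel_size=1)  # shallow -> deep
        self.fuse = nn.Sequential(
            nn.Conv2d(high_ch + low_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        # Top-down: upsample deep features and inject them into the shallow map.
        low_up = F.interpolate(low, size=tuple(high.shape[-2:]),
                               mode="bilinear", align_corners=False)
        high = high + self.low_to_high(low_up)
        # Bottom-up: downsample the refined shallow map and inject it into the deep map.
        high_down = F.adaptive_avg_pool2d(high, tuple(low.shape[-2:]))
        low = low + self.high_to_low(high_down)
        # Progressive concatenation at the high resolution, then fuse.
        low_up = F.interpolate(low, size=tuple(high.shape[-2:]),
                               mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([high, low_up], dim=1))


if __name__ == "__main__":
    block = BidirectionalCrossFusionBlock(high_ch=64, low_ch=128, out_ch=64)
    shallow = torch.randn(1, 64, 64, 64)  # higher-resolution encoder feature
    deep = torch.randn(1, 128, 32, 32)    # lower-resolution encoder feature
    print(block(shallow, deep).shape)     # torch.Size([1, 64, 64, 64])
```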