DINeuro: Distilling Knowledge from 2D Natural Images via Deformable Tubular Transferring Strategy for 3D Neuron Reconstruction

📅 2024-10-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D neuronal morphology reconstruction methods typically model voxel features directly, neglecting intrinsic tubular geometric priors and failing to leverage the rich knowledge in pre-trained 2D vision foundation models. To address this, we propose a cross-domain knowledge distillation framework: (1) we introduce a deformable tubular transfer strategy, the first of its kind, to adapt the representational capacity of a pre-trained 2D Vision Transformer (ViT) to 3D neuronal tubular structures in the latent space; (2) we integrate tubular geometric constraints with a deformable feature alignment mechanism to guide a 3D ViT-based segmentation model toward morphology-aware representations. Evaluated on the BigNeuron Janelia dataset, our method achieves a 4.53% improvement in mean Dice coefficient and a 3.56% improvement in mean 95% Hausdorff distance over state-of-the-art approaches. This work establishes the first effective knowledge transfer from 2D vision foundation models to 3D neural microstructure reconstruction, delivering an interpretable and generalizable cross-modal learning paradigm for biomedical image analysis.

📝 Abstract
Reconstructing neuron morphology from 3D light microscope imaging data is critical to aid neuroscientists in analyzing brain networks and neuroanatomy. With the boost from deep learning techniques, a variety of learning-based segmentation models have been developed to enhance the signal-to-noise ratio of raw neuron images as a pre-processing step in the reconstruction workflow. However, most existing models directly encode the latent representative features of volumetric neuron data but neglect their intrinsic morphological knowledge. To address this limitation, we design a novel framework that distills the prior knowledge from a 2D Vision Transformer pre-trained on extensive 2D natural images to facilitate neuronal morphological learning of our 3D Vision Transformer. To bridge the knowledge gap between the 2D natural image and 3D microscopic morphologic domains, we propose a deformable tubular transferring strategy that adapts the pre-trained 2D natural knowledge to the inherent tubular characteristics of neuronal structure in the latent embedding space. The experimental results on the Janelia dataset of the BigNeuron project demonstrate that our method achieves a segmentation performance improvement of 4.53% in mean Dice and 3.56% in mean 95% Hausdorff distance.
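The abstract describes aligning the 3D student ViT's latent features with those of a frozen, pre-trained 2D teacher ViT. A minimal sketch of this kind of feature-level distillation is shown below; the depth-pooling step and the linear channel projection are illustrative assumptions standing in for the paper's actual deformable tubular alignment, which is more involved.

```python
import numpy as np

def distill_loss(student_3d, teacher_2d, proj):
    """Toy feature-distillation loss between a 3D student and a 2D teacher.

    student_3d : (C_s, D, H, W) latent features from the 3D ViT student
    teacher_2d : (C_t, H, W) latent features from the frozen 2D ViT teacher
    proj       : (C_t, C_s) learnable linear projection bridging channel dims

    Note: this pools the depth axis and uses a plain MSE; the paper's
    deformable tubular transferring strategy replaces these simple choices.
    """
    # Collapse the depth axis so the student map is comparable to the teacher
    pooled = student_3d.mean(axis=1)                     # (C_s, H, W)
    # Project student channels into the teacher's embedding space
    projected = np.einsum("ts,shw->thw", proj, pooled)   # (C_t, H, W)
    # Mean squared error between aligned feature maps
    return float(np.mean((projected - teacher_2d) ** 2))
```

In practice such a loss term would be added to the segmentation objective, so the student learns both from voxel labels and from the teacher's natural-image representations.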
Problem

Research questions and friction points this paper is trying to address.

3D neuron reconstruction challenge
knowledge distillation from 2D images
deformable tubular transferring strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

2D Vision Transformer
deformable tubular transferring
3D neuron reconstruction
Yik San Cheng
School of Computer Science, The University of Sydney, Sydney, Australia
Runkai Zhao
University of Sydney
Multi-Dimensional Data Analysis · AI4Science · 3D Computer Vision
Heng Wang
School of Computer Science, The University of Sydney, Sydney, Australia
Hanchuan Peng
SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
Yui Lo
School of Computer Science, The University of Sydney, Sydney, Australia; Harvard Medical School, Boston, MA, USA; Brigham and Women’s Hospital, Boston, USA
Yuqian Chen
Postdoc Research Fellow; Harvard Medical School; The University of Sydney
medical computer vision
L. O’Donnell
Harvard Medical School, Boston, MA, USA; Brigham and Women’s Hospital, Boston, USA
Weidong Cai
Clinical Associate Professor, Stanford University School of Medicine
functional neuroimaging · machine learning · cognitive · developmental · clinical neuroscience