🤖 AI Summary
This study addresses the challenge of anisotropic resolution in volume electron microscopy (VEM) imaging, where limited axial resolution hinders accurate three-dimensional structural analysis. To overcome this limitation, the authors propose VEMamba, a novel framework that achieves efficient isotropic reconstruction through 3D dependency reordering. The key innovations include an Axial-Lateral Chunking Selective Scan Module (ALCSSM) to model axial-lateral consistency, a Dynamic Weights Aggregation Module (DWAM) for adaptive feature fusion, and a self-supervised learning strategy combining realistic degradation modeling with Momentum Contrast (MoCo). Extensive experiments demonstrate that VEMamba outperforms existing methods on both simulated and real-world anisotropic VEM data, delivering strong reconstruction quality at a lower computational cost.
📝 Abstract
Volume Electron Microscopy (VEM) is crucial for 3D tissue imaging but often produces anisotropic data with poor axial resolution, hindering visualization and downstream analysis. Existing methods for isotropic reconstruction often neglect abundant axial information and rely on simple downsampling to simulate anisotropic data. To address these limitations, we propose VEMamba, an efficient framework for isotropic reconstruction. The core of VEMamba is a novel 3D Dependency Reordering paradigm, implemented via two key components: an Axial-Lateral Chunking Selective Scan Module (ALCSSM), which re-maps complex 3D spatial dependencies (both axial and lateral) into optimized 1D sequences for efficient Mamba-based modeling, explicitly enforcing axial-lateral consistency; and a Dynamic Weights Aggregation Module (DWAM), which adaptively aggregates these reordered sequence outputs for enhanced representational power. Furthermore, we introduce a realistic degradation simulation and leverage Momentum Contrast (MoCo) to integrate this degradation-aware knowledge into the network for superior reconstruction. Extensive experiments on both simulated and real-world anisotropic VEM datasets demonstrate that VEMamba achieves highly competitive performance across various metrics while maintaining a lower computational footprint. The source code is available on GitHub: https://github.com/I2-Multimedia-Lab/VEMamba
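To make the 3D Dependency Reordering idea concrete, the sketch below illustrates (in plain NumPy, independent of the authors' implementation) how a 3D volume might be flattened into chunked 1D scan sequences along the axial and lateral directions, and how several such sequences could be fused with data-dependent weights. The function names, chunk size, and the mean-activation scoring used for the weights are all hypothetical simplifications, not the actual ALCSSM/DWAM operators:

```python
import numpy as np

def chunked_scan(volume, axis, chunk=4):
    """Flatten a 3D volume into a 1D sequence by scanning along `axis`
    in contiguous chunks, so voxels that are neighbors along that axis
    stay close together in the sequence (illustrative reordering only)."""
    v = np.moveaxis(volume, axis, 0)           # bring scan axis first: (S, A, B)
    s, a, b = v.shape
    assert s % chunk == 0, "scan axis must divide evenly into chunks"
    v = v.reshape(s // chunk, chunk, a, b)     # split scan axis into chunks
    # chunk-major ordering over spatial positions -> one long 1D sequence
    return v.transpose(0, 2, 3, 1).reshape(-1)

def dynamic_aggregate(seqs):
    """Toy stand-in for dynamic weight aggregation: softmax weights derived
    from each sequence's mean activation (a hypothetical scoring rule)."""
    scores = np.array([s.mean() for s in seqs])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return sum(wi * si for wi, si in zip(w, seqs))

# Reorder one volume along the axial (0) and a lateral (1) direction,
# then fuse the two sequence views.
volume = np.arange(8 * 8 * 8, dtype=float).reshape(8, 8, 8)
axial = chunked_scan(volume, axis=0)
lateral = chunked_scan(volume, axis=1)
fused = dynamic_aggregate([axial, lateral])    # shape (512,)
```

In the actual framework these 1D sequences would be consumed by Mamba-style selective-scan blocks; the point here is only that the reordering is a lossless permutation of the voxels, so axial and lateral neighborhood structure can each be exposed to a sequence model and later recombined.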