🤖 AI Summary
Conventional vision models struggle with accurate neuron segmentation in electron microscopy (EM) images characterized by high noise, anisotropy, and ultra-long-range spatial dependencies.
Method: We propose TokenUnify, a novel hybrid token prediction framework that unifies random token, next-token, and next-all token prediction to provably suppress error accumulation in autoregressive pretraining. TokenUnify integrates the Mamba architecture with a large-scale EM image serialization scheme, enabling efficient modeling of ultra-long spatial sequences.
Contribution/Results: We introduce the largest high-resolution EM neuron segmentation benchmark to date, comprising over 120 million annotated voxels. On downstream segmentation tasks, TokenUnify achieves a 45% mAP improvement over prior methods while significantly reducing computational complexity. It outperforms both masked autoencoders (MAE) and classical autoregressive approaches, marking the first successful alignment of vision and language pretraining paradigms for long-sequence modeling.
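The summary mentions a serialization scheme that turns high-resolution 3D EM volumes into ultra-long token sequences for the Mamba backbone. As a rough illustration only (the paper's actual patch size, scan order, and tokenizer are not specified here), a minimal raster-scan serialization of a volume into non-overlapping patch tokens might look like:

```python
import numpy as np

def serialize_volume(volume, patch=16):
    """Flatten a 3D EM volume into a 1D sequence of cubic patch tokens.

    Hypothetical sketch: cuts the volume into non-overlapping
    patch**3 blocks and orders them in a simple raster scan over
    (z, y, x) block indices. TokenUnify's real scheme may differ.
    """
    d, h, w = volume.shape
    assert d % patch == 0 and h % patch == 0 and w % patch == 0
    blocks = (volume
              .reshape(d // patch, patch, h // patch, patch, w // patch, patch)
              .transpose(0, 2, 4, 1, 3, 5)   # group the three block indices first
              .reshape(-1, patch ** 3))      # one row per patch token
    return blocks  # shape: (num_tokens, voxels_per_token)
```

Even a modest 512³ crop yields 32,768 tokens at this patch size, which is why a linear-time sequence model such as Mamba is attractive over quadratic attention.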
📄 Abstract
Autoregressive next-token prediction is a standard pretraining method for large-scale language models, but its application to vision tasks is hindered by the non-sequential nature of image data, leading to cumulative errors. Most vision models employ masked autoencoder (MAE) based pretraining, which faces scalability issues. To address these challenges, we introduce **TokenUnify**, a novel pretraining method that integrates random token prediction, next-token prediction, and next-all token prediction. We provide theoretical evidence demonstrating that TokenUnify mitigates cumulative errors in visual autoregression. In conjunction with TokenUnify, we have assembled a large-scale electron microscopy (EM) image dataset with ultra-high resolution, ideal for creating spatially correlated long sequences. This dataset includes over 120 million annotated voxels, making it the largest neuron segmentation dataset to date and providing a unified benchmark for experimental validation. Leveraging the Mamba network, which is inherently suited to long-sequence modeling, on this dataset, TokenUnify not only reduces computational complexity but also yields a significant 45% improvement in segmentation performance on downstream EM neuron segmentation tasks compared to existing methods. Furthermore, TokenUnify demonstrates superior scalability over MAE and traditional autoregressive methods, effectively bridging the gap between pretraining strategies for language and vision models. Code is available at https://github.com/ydchen0806/TokenUnify.
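To make the three unified objectives concrete, here is a toy sketch of the supervision targets each one induces on a token sequence. This is a conceptual illustration under our own assumptions (the function name, masking ratio, and target layout are hypothetical), not the authors' implementation:

```python
import random

def build_pretraining_targets(tokens, mask_ratio=0.25, seed=0):
    """Toy illustration of the three objectives TokenUnify unifies.

    NOTE: a conceptual sketch, not the paper's actual code.
    """
    rng = random.Random(seed)
    n = len(tokens)

    # 1) Random token prediction: mask a subset of positions and ask
    #    the model to recover the original tokens there (MAE-style).
    masked = sorted(rng.sample(range(n), max(1, int(n * mask_ratio))))
    random_targets = {i: tokens[i] for i in masked}

    # 2) Next-token prediction: the classic autoregressive
    #    shift-by-one objective used by language models.
    next_token_targets = [(i, tokens[i + 1]) for i in range(n - 1)]

    # 3) Next-all token prediction: from each position, predict the
    #    entire remaining suffix. Supervising whole suffixes spreads
    #    the loss over long ranges, which is the intuition behind the
    #    paper's claim of damped error accumulation.
    next_all_targets = [(i, tokens[i + 1:]) for i in range(n - 1)]

    return random_targets, next_token_targets, next_all_targets
```

In practice one would mix the three losses during pretraining; the sketch only shows which (position, target) pairs each objective supervises.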