🤖 AI Summary
Current spiking neural networks (SNNs) suffer from poor generalization, modality fragmentation, and heavy reliance on large foundation models in 3D open-world understanding, hindering multimodal question answering and zero-shot 3D classification. To address these limitations, particularly under energy-constrained settings, we propose the first SNN-based vision-language pretraining framework. Our method introduces two core innovations: (1) Multi-scale Triple Alignment (MTA), enabling unsupervised contrastive learning across 3D point clouds, images, and text; and (2) Re-parameterizable Vision-Language Integration (Rep-VLI), eliminating dependence on large pretrained text encoders. Experiments demonstrate that our approach achieves 85.4% Top-1 accuracy on zero-shot 3D classification, surpassing state-of-the-art artificial neural network (ANN) baselines, and yields average downstream performance gains exceeding 2%. Notably, it is the first SNN framework to support open-world 3D multimodal question answering while significantly reducing computational energy consumption.
📄 Abstract
Spiking Neural Networks (SNNs) provide an energy-efficient way to extract 3D spatio-temporal features. However, existing SNNs still exhibit a significant performance gap compared to Artificial Neural Networks (ANNs) due to inadequate pre-training strategies. These limitations manifest as restricted generalization ability, task specificity, and a lack of multimodal understanding, particularly in challenging tasks such as multimodal question answering and zero-shot 3D classification. To overcome these challenges, we propose a Spike-based Vision-Language (SVL) pretraining framework that empowers SNNs with open-world 3D understanding while maintaining spike-driven efficiency. SVL introduces two key components: (i) Multi-scale Triple Alignment (MTA) for label-free triplet-based contrastive learning across 3D, image, and text modalities, and (ii) Re-parameterizable Vision-Language Integration (Rep-VLI) to enable lightweight inference without relying on large text encoders. Extensive experiments show that SVL achieves a top-1 accuracy of 85.4% in zero-shot 3D classification, surpassing advanced ANN models, and consistently outperforms prior SNNs on downstream tasks, including 3D classification (+6.1%), DVS action recognition (+2.1%), 3D detection (+1.1%), and 3D segmentation (+2.1%), with remarkable efficiency. Moreover, SVL enables SNNs to perform open-world 3D question answering, sometimes outperforming ANNs. To the best of our knowledge, SVL represents the first scalable, generalizable, and hardware-friendly paradigm for 3D open-world understanding, effectively bridging the gap between SNNs and ANNs in complex open-world understanding tasks. Code is available at https://github.com/bollossom/SVL.
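The abstract does not spell out the MTA objective, but "label-free triplet-based contrastive learning across 3D, image, and text modalities" is commonly realized by anchoring the 3D encoder's embeddings to frozen image and text embeddings with a symmetric InfoNCE loss. The following is a minimal NumPy sketch under that assumption; the function names, the temperature value, and the pairwise-sum formulation are illustrative choices, not taken from the paper.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings.

    Row i of `a` and row i of `b` are treated as a positive pair;
    all other rows in the batch act as negatives.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)  # L2-normalize
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature        # (N, N) cosine-similarity logits
    labels = np.arange(len(a))            # positives lie on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average both directions (3D->other modality and back).
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def tri_modal_alignment_loss(z_3d, z_img, z_txt):
    """Illustrative triplet alignment: pull the SNN's 3D embedding toward
    both the image and the text embedding of the same sample."""
    return info_nce(z_3d, z_img) + info_nce(z_3d, z_txt)
```

In this sketch only the 3D (SNN) branch would receive gradients; the image and text embeddings play the role of fixed targets, which is consistent with Rep-VLI's goal of discarding the large text encoder at inference time.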