🤖 AI Summary
To address the low adaptation efficiency and high memory/computational overhead of large-scale pretrained Transformers on audio-visual downstream tasks, this paper proposes Mettle, a lightweight meta-token learning framework. Methodologically, it introduces (1) Layer-Centric Distillation (LCD), which distills, in parallel, the audio and visual features at each Transformer layer into compact, transferable meta-tokens; and (2) Meta-Token Injection (MTI), which injects the meta-tokens distilled from the top layer into earlier layers to guide cross-modal feature alignment and task-specific adaptation. By preserving pretrained knowledge while drastically improving parameter efficiency, Mettle achieves competitive accuracy on both classification and fine-grained segmentation tasks. Experiments report up to 52% memory reduction and 49% training-time savings over baseline methods, with consistent gains across diverse audio-visual benchmarks. The framework thus demonstrates strong effectiveness, broad generalizability, and suitability for deployment in resource-constrained multimodal learning scenarios.
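The layer-centric distillation idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact formulation: it assumes each layer's distillation step is a simple attention pooling in which a small set of learnable meta-tokens attends over that layer's frozen features, and it runs independently (hence in parallel) per layer. Names such as `distill_meta_tokens` and the dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distill_meta_tokens(layer_feats, meta_tokens):
    """One LCD-style step (assumed form): learnable meta-tokens attend
    over a single layer's frozen features and pool them into a compact
    summary of shape (k, d)."""
    d = layer_feats.shape[-1]
    attn = softmax(meta_tokens @ layer_feats.T / np.sqrt(d))  # (k, n)
    return attn @ layer_feats                                  # (k, d)

# Each transformer layer is distilled independently, so the per-layer
# steps have no sequential dependency on one another.
rng = np.random.default_rng(0)
num_layers, n_tokens, d, k = 12, 196, 768, 4
frozen_feats = [rng.normal(size=(n_tokens, d)) for _ in range(num_layers)]
learnable = [rng.normal(size=(k, d)) for _ in range(num_layers)]
meta = [distill_meta_tokens(f, m) for f, m in zip(frozen_feats, learnable)]
```

Because each layer's meta-tokens depend only on that layer's (frozen) output, no gradients flow through the backbone, which is where the memory and training-time savings would come from.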
📝 Abstract
We present **Met**a-**T**oken **Le**arning (Mettle), a simple and memory-efficient method for adapting large-scale pretrained transformer models to downstream audio-visual tasks. Instead of sequentially modifying the output feature distribution of the transformer backbone, Mettle utilizes a lightweight *Layer-Centric Distillation (LCD)* module to distill in parallel the intact audio or visual features embedded by each transformer layer into compact meta-tokens. This distillation process considers both pretrained knowledge preservation and task-specific adaptation. The obtained meta-tokens can be directly applied to classification tasks, such as audio-visual event localization and audio-visual video parsing. To further support fine-grained segmentation tasks, such as audio-visual segmentation, we introduce a *Meta-Token Injection (MTI)* module, which utilizes the audio and visual meta-tokens distilled from the top transformer layer to guide feature adaptation in earlier layers. Extensive experiments on multiple audio-visual benchmarks demonstrate that our method significantly reduces memory usage and training time while maintaining parameter efficiency and competitive accuracy.
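The meta-token injection step can likewise be sketched. This is a hedged illustration under assumptions, not the paper's equations: it models injection as early-layer features cross-attending to the top-layer meta-tokens and receiving a residual update, so the distilled summary can steer earlier-layer adaptation. The function name `inject_meta_tokens` and the residual form are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inject_meta_tokens(early_feats, top_meta):
    """MTI-style step (assumed form): early-layer features of shape
    (n, d) attend to the top layer's meta-tokens of shape (k, d) and
    are updated with a residual correction."""
    d = early_feats.shape[-1]
    attn = softmax(early_feats @ top_meta.T / np.sqrt(d))  # (n, k)
    return early_feats + attn @ top_meta                    # (n, d)

rng = np.random.default_rng(1)
early = rng.normal(size=(196, 768))   # features from an earlier layer
top = rng.normal(size=(4, 768))       # meta-tokens from the top layer
guided = inject_meta_tokens(early, top)
```

A residual formulation like this keeps the original early-layer features intact while letting the compact top-layer summary provide task-specific guidance, which matches the abstract's framing of guiding rather than replacing earlier-layer features.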