Multimodal Function Vectors for Spatial Relations

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how spatial relational knowledge is represented and modulated inside large multimodal models (LMMs). To address the opacity of relational reasoning during in-context learning, the authors propose an attention-head localization method based on causal mediation analysis that identifies the heads critical for encoding spatial relations, and extract editable multimodal function vectors from them. With the backbone parameters frozen, fine-tuning only these vectors enables zero-shot inference, lightweight adaptation, and linearly composable generalization to relational analogies, revealing for the first time a modular structure underlying spatial relational reasoning in LMMs. Experiments show substantial gains in zero-shot accuracy over in-context learning baselines on both synthetic and real-world image benchmarks, and further enable vector-arithmetic solutions to unseen relational analogies.
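The head-localization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the dict-of-effects layout, and the probability values are all hypothetical. Causal mediation analysis here means measuring how much restoring one head's clean activation into a corrupted run recovers the probability of the correct relation token.

```python
import numpy as np

def indirect_effect(p_patched, p_corrupted):
    """Indirect effect of restoring a single head's clean activation
    in a corrupted run: the change in probability assigned to the
    correct relation token."""
    return p_patched - p_corrupted

def top_heads(effects, k):
    """Rank (layer, head) pairs by mean indirect effect across prompts.

    effects: dict mapping (layer, head) -> list of per-prompt effects.
    Returns the k heads with the largest mean effect.
    """
    ranked = sorted(effects, key=lambda h: np.mean(effects[h]), reverse=True)
    return ranked[:k]

# Hypothetical per-prompt effects for three heads:
effects = {(0, 1): [0.1, 0.2], (3, 5): [0.4, 0.6], (7, 2): [0.0, 0.1]}
selected = top_heads(effects, 2)  # heads with the strongest causal role
```

The selected heads are the ones from which function vectors would then be extracted.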

📝 Abstract
Large Multimodal Models (LMMs) demonstrate impressive in-context learning abilities from limited multimodal demonstrations, yet the internal mechanisms supporting such task learning remain opaque. Building on prior work on large language models, we show that a small subset of attention heads in the vision-language model OpenFlamingo-4B is responsible for transmitting representations of spatial relations. The activations of these attention heads, termed function vectors, can be extracted and manipulated to alter an LMM's performance on relational tasks. First, using both synthetic and real image datasets, we apply causal mediation analysis to identify attention heads that strongly influence relational predictions, and extract multimodal function vectors that improve zero-shot accuracy at inference time. We further demonstrate that these multimodal function vectors can be fine-tuned with a modest amount of training data, while keeping LMM parameters frozen, to significantly outperform in-context learning baselines. Finally, we show that relation-specific function vectors can be linearly combined to solve analogy problems involving novel and untrained spatial relations, highlighting the strong generalization ability of this approach. Our results show that LMMs encode spatial relational knowledge within localized internal structures, which can be systematically extracted and optimized, thereby advancing our understanding of model modularity and enhancing control over relational reasoning in LMMs.
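The extract-and-inject idea from the abstract can be sketched as below. This is an assumed reading of the technique, not the paper's code: it presumes the chosen head's output at the final prompt token can be read out (e.g. via a forward hook), and the array shapes and scaling factor `alpha` are illustrative.

```python
import numpy as np

def extract_function_vector(head_outputs):
    """Mean-pool one attention head's output at the final token across
    a set of in-context demonstration prompts.

    head_outputs: (num_prompts, d_model) array of head activations.
    Returns a (d_model,) function vector for that relation.
    """
    return head_outputs.mean(axis=0)

def inject(hidden_state, function_vector, alpha=1.0):
    """Add the function vector into the residual stream of a zero-shot
    prompt at a chosen layer, steering the model toward the relation
    without any in-context demonstrations."""
    return hidden_state + alpha * function_vector
```

At inference time, injecting the vector stands in for the demonstrations, which is what allows the reported zero-shot improvements; fine-tuning would then optimize the vector itself while the LMM stays frozen.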
Problem

Research questions and friction points this paper is trying to address.

Identifying attention heads encoding spatial relations in multimodal models
Extracting and tuning function vectors to enhance relational reasoning
Enabling generalization to novel spatial relations through vector combination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extract spatial relation vectors from attention heads
Fine-tune function vectors with frozen model parameters
Combine relation vectors linearly for analogy solving
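The last point, composing relation vectors for analogies, can be sketched as simple vector arithmetic. The weights and relation names below are hypothetical; the paper only establishes that linear combinations of relation-specific vectors can target unseen relations.

```python
import numpy as np

def compose(vectors, weights):
    """Linearly combine relation-specific function vectors into a single
    vector that can be injected for an unseen relational analogy."""
    out = np.zeros_like(vectors[0])
    for w, v in zip(weights, vectors):
        out += w * v
    return out

# Hypothetical example: if "left-of" is to "right-of" as "above" is to X,
# a candidate vector for X is v_right - v_left + v_above.
v_left = np.array([1.0, 0.0])
v_right = np.array([-1.0, 0.0])
v_above = np.array([0.0, 1.0])
v_x = compose([v_right, v_left, v_above], [1.0, -1.0, 1.0])
```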