🤖 AI Summary
This work addresses a limitation of existing multimodal large language models (MLLMs) in biology: each typically supports only a single modality, and existing model-merging approaches rely on input-agnostic, heuristic strategies in parameter space that fail to capture modality-specific characteristics. To overcome this, the authors propose a representation-aware merging framework that leverages modality-specific signals in the embedding space to guide integration. Specifically, by designing probe inputs composed of multimodal tokens, the method extracts layer-wise embedding responses from each specialized MLLM and jointly estimates merging coefficients at two granularities: layer-level (coarse-grained) and element-level (fine-grained). Evaluated on interactive effect prediction benchmarks, the proposed approach outperforms existing merging strategies and even surpasses task-specific fine-tuned models, demonstrating the efficacy of embedding-space signals for cross-modal MLLM merging.
📝 Abstract
Biological multimodal large language models (MLLMs) have emerged as powerful foundation models for scientific discovery. However, existing models are specialized to a single modality, limiting their ability to solve inherently cross-modal scientific problems. While model merging offers an efficient way to combine these modality-specific models into a unified MLLM, existing methods rely on input-agnostic heuristics in parameter space that fail to faithfully capture modality specialization. To overcome this limitation, we propose a representation-aware merging framework that estimates merging coefficients from embedding-space signals. We first design a probe input consisting of tokens from different modalities and forward it through each specialized MLLM to obtain layer-wise embedding responses that reflect modality-specific representation changes. We then estimate complementary merging coefficients at two granularities from the embedding space: layer-wise coefficients from coarse-grained signals and element-wise coefficients from fine-grained signals, which are jointly combined for robust coefficient estimation. Experiments on interactive effect prediction benchmarks show that our method outperforms existing merging methods and even surpasses task-specific fine-tuned models, establishing that embedding-space signals provide a principled and effective foundation for cross-modal MLLM merging.
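The two-granularity coefficient estimation described in the abstract could be sketched as follows. This is a minimal illustration, not the paper's actual method: the coefficient formulas (normalized response magnitudes), the equal-weight combination of the two granularities, and all function names (`merge_layers`, etc.) are assumptions, and for simplicity each layer's parameters are represented as a vector with the same shape as its embedding response.

```python
import numpy as np

def merge_layers(layers_a, layers_b, resp_a, resp_b):
    """Illustrative representation-aware merging of two specialized models.

    layers_a, layers_b: per-layer parameter arrays from the two models
                        (same shape per layer; a simplification for this sketch).
    resp_a, resp_b:     per-layer embedding responses obtained by forwarding a
                        shared multimodal probe input through each model.
    """
    merged = []
    for wa, wb, ha, hb in zip(layers_a, layers_b, resp_a, resp_b):
        # Coarse-grained signal: one coefficient per layer, from the relative
        # magnitude of each model's embedding response at this layer.
        na, nb = np.linalg.norm(ha), np.linalg.norm(hb)
        layer_coef = na / (na + nb + 1e-8)

        # Fine-grained signal: element-wise coefficients from the relative
        # per-element response magnitudes.
        ea, eb = np.abs(ha), np.abs(hb)
        elem_coef = ea / (ea + eb + 1e-8)

        # Jointly combine the two granularities (a simple average here;
        # the combination rule is an assumption of this sketch).
        alpha = 0.5 * (layer_coef + elem_coef)
        merged.append(alpha * wa + (1.0 - alpha) * wb)
    return merged
```

In this toy form, a layer where model A responds strongly to the probe and model B barely responds receives coefficients near 1 for A, so A's parameters dominate that layer of the merged model; symmetric responses yield an even 50/50 blend.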