ES-Merging: Biological MLLM Merging via Embedding Space Signals

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing multimodal large language models (MLLMs) in biology, which typically support only a single modality and rely on input-agnostic, heuristic fusion strategies that fail to capture modality-specific characteristics. To overcome this, the authors propose a representation-aware merging framework that leverages modality-specific signals in the embedding space to guide integration. Specifically, by designing probe inputs composed of multimodal tokens, the method extracts hierarchical embedding responses from specialized MLLMs and jointly estimates merging coefficients at both layer-level (coarse-grained) and element-level (fine-grained) granularities. Evaluated on interactive effect prediction benchmarks, the proposed approach significantly outperforms existing merging strategies and even surpasses task-specific fine-tuned models, demonstrating the effectiveness of embedding-space signals for cross-modal MLLM merging.

📝 Abstract
Biological multimodal large language models (MLLMs) have emerged as powerful foundation models for scientific discovery. However, existing models are specialized to a single modality, limiting their ability to solve inherently cross-modal scientific problems. While model merging is an efficient way to combine models of different modalities into a unified MLLM, existing methods rely on input-agnostic parameter-space heuristics that fail to faithfully capture modality specialization. To overcome this limitation, we propose a representation-aware merging framework that estimates merging coefficients from embedding space signals. We first design a probe input that consists of different modality tokens and forward it through each specialized MLLM to obtain layer-wise embedding responses that reflect modality-specific representation changes. We then estimate complementary merging coefficients at two granularities from the embedding space: layer-wise coefficients from coarse-grained signals and element-wise coefficients from fine-grained signals, which are jointly combined for robust coefficient estimation. Experiments on interactive effect prediction benchmarks show that our method outperforms existing merging methods and even surpasses task-specific fine-tuned models, establishing that embedding space signals provide a principled and effective foundation for cross-modal MLLM merging.
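The abstract's pipeline (probe forward pass, coarse layer-wise coefficients, fine element-wise coefficients, joint combination) can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the paper's actual method: the function name `merge_layer_params`, the norm-based coefficient estimates, and the simple averaging of the two granularities are all illustrative stand-ins for whatever estimators the authors use.

```python
import numpy as np

def merge_layer_params(params_a, params_b, resp_a, resp_b):
    """Hypothetical representation-aware merge for one layer.

    params_a, params_b: weight matrices of the two specialized models (same shape).
    resp_a, resp_b: layer-wise embedding responses to a shared multimodal probe,
        shape (num_probe_tokens, hidden_dim).
    """
    # Coarse-grained (layer-wise): one scalar coefficient per layer,
    # here taken from the relative magnitude of each model's response.
    mag_a, mag_b = np.linalg.norm(resp_a), np.linalg.norm(resp_b)
    layer_coef_a = mag_a / (mag_a + mag_b + 1e-8)

    # Fine-grained (element-wise): a per-dimension coefficient,
    # here from mean absolute response per hidden dimension.
    abs_a = np.abs(resp_a).mean(axis=0)          # shape (hidden_dim,)
    abs_b = np.abs(resp_b).mean(axis=0)
    elem_coef_a = abs_a / (abs_a + abs_b + 1e-8)

    # Jointly combine both granularities (plain average is an assumption).
    coef_a = 0.5 * (layer_coef_a + elem_coef_a)  # broadcasts over columns

    # Convex combination of the two models' parameters.
    return coef_a * params_a + (1.0 - coef_a) * params_b
```

In this toy, a model whose probe response is near zero at a layer contributes almost nothing to the merged weights there, which is the intuition behind letting embedding-space signals, rather than fixed heuristics, set the coefficients.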
Problem

Research questions and friction points this paper is trying to address.

biological MLLM
cross-modal integration
model merging
embedding space
modality specialization
Innovation

Methods, ideas, or system contributions that make the work stand out.

embedding space signals
model merging
multimodal large language models
representation-aware fusion
cross-modal integration
Wonbin Lee
KAIST
Dongki Kim
KAIST
Sung Ju Hwang
KAIST, DeepAuto
Machine learning