Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders

πŸ“… 2024-08-28
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 30
✨ Influential: 3
πŸ“„ PDF
πŸ€– AI Summary
This work addresses a bottleneck in the visual understanding of multimodal large language models (MLLMs): the limits of single-encoder visual representations. It systematically explores the design space of hybrid vision encoders and proposes three key techniques: (1) lightweight fusion that concatenates the visual tokens of complementary, multi-resolution vision encoders; (2) a Pre-Alignment stage that explicitly aligns vision tokens with language tokens in semantic space; and (3) an end-to-end joint training framework. Experiments show that the resulting Eagle model series achieves new state-of-the-art results among open-source models on major benchmarks, including MMBench, OCRBench, and DocVQA. The approach significantly mitigates hallucination and delivers substantial gains on fine-grained, resolution-sensitive tasks such as OCR and document parsing.
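To make the fusion step concrete, here is a minimal PyTorch sketch of the concatenation idea described above. It is an illustration under assumptions, not the authors' implementation: the encoder wrappers, the `out_channels` attribute, and all names are hypothetical, and each encoder's token sequence is resampled to a shared length so channel-wise concatenation lines up token by token.

```python
import torch
import torch.nn as nn


class ConcatFusion(nn.Module):
    """Illustrative sketch of mixed-encoder fusion: run several vision
    encoders on the same image, align their token grids, and fuse by
    channel-wise concatenation plus one linear projection.
    All names here are hypothetical, not the paper's code."""

    def __init__(self, encoders, token_len, llm_dim):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)  # each maps image -> (B, N_i, C_i)
        self.token_len = token_len               # shared token count after resampling
        # Assumes each encoder wrapper exposes an `out_channels` attribute.
        total_c = sum(e.out_channels for e in encoders)
        self.proj = nn.Linear(total_c, llm_dim)  # into the LLM embedding space

    def forward(self, image):
        feats = []
        for enc in self.encoders:
            t = enc(image)                       # (B, N_i, C_i)
            # Resample each sequence to a shared length so the channel
            # dimensions can be concatenated token by token.
            t = nn.functional.interpolate(
                t.transpose(1, 2), size=self.token_len, mode="linear"
            ).transpose(1, 2)                    # (B, token_len, C_i)
            feats.append(t)
        fused = torch.cat(feats, dim=-1)         # (B, token_len, sum of C_i)
        return self.proj(fused)                  # (B, token_len, llm_dim)
```

The paper's central finding is that this kind of simple concatenation, trained jointly with the rest of the model, competes with considerably more elaborate mixing architectures.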

πŸ“ Abstract
The ability to accurately interpret complex visual information is a crucial topic of multimodal large language models (MLLMs). Recent work indicates that enhanced visual perception significantly reduces hallucinations and improves performance on resolution-sensitive tasks, such as optical character recognition and document analysis. A number of recent MLLMs achieve this goal using a mixture of vision encoders. Despite their success, there is a lack of systematic comparisons and detailed ablation studies addressing critical aspects, such as expert selection and the integration of multiple vision experts. This study provides an extensive exploration of the design space for MLLMs using a mixture of vision encoders and resolutions. Our findings reveal several underlying principles common to various existing strategies, leading to a streamlined yet effective design approach. We discover that simply concatenating visual tokens from a set of complementary vision encoders is as effective as more complex mixing architectures or strategies. We additionally introduce Pre-Alignment to bridge the gap between vision-focused encoders and language tokens, enhancing model coherence. The resulting family of MLLMs, Eagle, surpasses other leading open-source models on major MLLM benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Explores the design space of multimodal LLMs built on mixtures of vision encoders.
Addresses the lack of systematic comparisons and ablations on vision expert selection and integration.
Introduces Pre-Alignment to improve coherence between vision encoders and the language model.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of complementary vision encoders for multimodal LLMs
Pre-Alignment bridges vision tokens and the language embedding space (see the sketch after this list)
Simple concatenation of visual tokens matches more complex mixing architectures
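As a companion to the Pre-Alignment bullet above, here is a minimal sketch of the idea: each vision expert's projector is tuned against a frozen language model before joint training, so its tokens land near language tokens in the LLM's semantic space. Everything here is hypothetical scaffolding (the `vision_embeds`/`labels` interface, the function and argument names); it only illustrates which parameters train and which stay frozen.

```python
def pre_align(expert, projector, llm, dataloader, optimizer):
    """Sketch of the Pre-Alignment idea: tune one vision expert's
    projector against a frozen LLM with a language-modeling loss,
    before the end-to-end joint training stage.
    Function and argument names are illustrative, not the paper's API."""
    expert.requires_grad_(False)    # vision expert stays frozen
    llm.requires_grad_(False)       # language model stays frozen
    projector.requires_grad_(True)  # only the projector learns

    for image, text_ids in dataloader:
        vis_tokens = projector(expert(image))  # (B, N, llm_dim)
        # Next-token prediction with vision tokens prefixed; `llm` is
        # assumed to accept prefix embeddings plus label ids and to
        # return an object carrying the loss.
        loss = llm(vision_embeds=vis_tokens, labels=text_ids).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Running this once per vision expert, then unfreezing everything for joint training, matches the staged recipe the summary describes.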