🤖 AI Summary
Current multimodal large language models (MLLMs) face a dual dilemma: they over-utilize costly, high-fidelity reasoning pathways for simple queries—hurting efficiency—while enhancing specialized reasoning often degrades general-purpose understanding. To address this, we propose Metis-HOME, a hybrid-thinking, multi-expert framework featuring parallel "thinking" and "non-thinking" branches. A lightweight, learnable router dynamically dispatches each input to the appropriate branch, allowing complex, stepwise reasoning and rapid, direct inference to be optimized jointly rather than at each other's expense. Built upon Qwen2.5-VL-7B, Metis-HOME decomposes the dense model into a Mixture-of-Experts (MoE) architecture whose experts cover both multimodal reasoning and general-purpose understanding. Experiments demonstrate substantial performance gains across mathematical reasoning, visual question answering (VQA), and OCR tasks, while simultaneously improving generalization—marking the first instance where domain-specialized reasoning enhances, rather than erodes, broad multimodal comprehension.
📝 Abstract
Inspired by recent advancements in LLM reasoning, the field of multimodal reasoning has seen remarkable progress, achieving significant performance gains on intricate tasks such as mathematical problem-solving. Despite this progress, current multimodal large reasoning models exhibit two key limitations. They tend to employ computationally expensive reasoning even for simple queries, leading to inefficiency. Furthermore, this focus on specialized reasoning often impairs their broader, more general understanding capabilities. In this paper, we propose Metis-HOME: a Hybrid Optimized Mixture-of-Experts framework designed to address this trade-off. Metis-HOME enables a "Hybrid Thinking" paradigm by structuring the original dense model into two distinct expert branches: a thinking branch tailored for complex, multi-step reasoning, and a non-thinking branch optimized for rapid, direct inference on tasks like general VQA and OCR. A lightweight, trainable router dynamically allocates queries to the most suitable expert. We instantiate Metis-HOME by adapting Qwen2.5-VL-7B into an MoE architecture. Comprehensive evaluations reveal that our approach not only substantially enhances complex reasoning abilities but also improves the model's general capabilities, reversing the degradation trend observed in other reasoning-specialized models. Our work establishes a new paradigm for building powerful and versatile MLLMs, effectively resolving the prevalent reasoning-vs-generalization dilemma.
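The dispatch logic described above—a lightweight router sending each query to either a thinking or a non-thinking expert branch—can be sketched as follows. This is a minimal illustration under assumed interfaces: the class name `HybridHome`, the callable branches, and the keyword-based `toy_router` are all hypothetical stand-ins; in the actual Metis-HOME, the router is a trainable module operating on multimodal features and the branches are expert sub-networks of the MoE model.

```python
# Hypothetical sketch of hybrid-thinking dispatch (not the paper's code).
from dataclasses import dataclass
from typing import Callable

@dataclass
class HybridHome:
    thinking_branch: Callable[[str], str]      # slow, multi-step reasoning expert
    non_thinking_branch: Callable[[str], str]  # fast, direct-inference expert
    router: Callable[[str], float]             # estimated P(query needs reasoning)
    threshold: float = 0.5

    def generate(self, query: str) -> str:
        # Each query is routed to exactly one expert branch.
        if self.router(query) >= self.threshold:
            return self.thinking_branch(query)
        return self.non_thinking_branch(query)

def toy_router(query: str) -> float:
    # Stand-in for the learned router: a crude keyword heuristic,
    # used here only to make the dispatch behavior concrete.
    reasoning_cues = ("prove", "solve", "derive", "how many")
    return 1.0 if any(cue in query.lower() for cue in reasoning_cues) else 0.0

model = HybridHome(
    thinking_branch=lambda q: f"<think>step-by-step...</think> answer to: {q}",
    non_thinking_branch=lambda q: f"direct answer to: {q}",
    router=toy_router,
)

print(model.generate("Solve for x: 2x + 3 = 11"))  # routed to the thinking branch
print(model.generate("What does this sign say?"))  # routed to the non-thinking branch
```

Because routing is a hard, per-query decision, only one branch's parameters are exercised per input, which is what lets the framework avoid spending expensive reasoning compute on simple VQA or OCR queries.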