Dynamic-DINO: Fine-Grained Mixture of Experts Tuning for Real-time Open-Vocabulary Object Detection

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of balancing model compactness and detection accuracy in real-time open-vocabulary object detection, this paper proposes Dynamic-DINO: a dynamic Mixture-of-Experts (MoE) inference framework tailored for lightweight vision-language models. Unlike static expert assignment, Dynamic-DINO identifies and models the fixed collaborative patterns that emerge among deep-layer experts, and introduces an input-aware sparse subnetwork activation mechanism. It further incorporates MoE-Tuning, fine-grained FFN splitting, and router initialization guided by pretrained weights to expand the learnable parameter space while preserving model compactness, using significantly fewer parameters than comparable baselines. Trained solely on 1.56M publicly available images, Dynamic-DINO surpasses Grounding DINO 1.5 Edge, which was trained on the private Grounding20M dataset, on open-vocabulary detection, delivering superior accuracy–latency trade-offs and real-time performance.

📝 Abstract
The Mixture of Experts (MoE) architecture has excelled in Large Vision-Language Models (LVLMs), yet its potential in real-time open-vocabulary object detectors, which also leverage large-scale vision-language datasets but smaller models, remains unexplored. This work investigates this domain, revealing intriguing insights. In the shallow layers, experts tend to cooperate with diverse peers to expand the search space, while in the deeper layers, fixed collaborative structures emerge: each expert maintains 2-3 fixed partners, and distinct expert combinations specialize in processing specific patterns. Concretely, we propose Dynamic-DINO, which extends Grounding DINO 1.5 Edge from a dense model to a dynamic inference framework via an efficient MoE-Tuning strategy. Additionally, we design a granularity decomposition mechanism to decompose the Feed-Forward Network (FFN) of the base model into multiple smaller expert networks, expanding the subnet search space. To prevent performance degradation at the start of fine-tuning, we further propose a pre-trained weight allocation strategy for the experts, coupled with a specific router initialization. During inference, only the input-relevant experts are activated to form a compact subnet. Experiments show that, pretrained with merely 1.56M open-source images, Dynamic-DINO outperforms Grounding DINO 1.5 Edge, pretrained on the private Grounding20M dataset.
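The granularity decomposition and weight-allocation ideas in the abstract can be sketched as follows: a dense FFN's hidden dimension is partitioned into smaller experts whose weights are copied from the pretrained FFN, and a router activates only the top-k experts per input. This is a minimal numpy illustration under our own assumptions, not the authors' implementation; the function names, ReLU activation, and softmax top-k routing are illustrative choices.

```python
import numpy as np

def split_ffn(W1, W2, n_experts):
    """Partition a dense FFN (x -> relu(x @ W1) @ W2) into n_experts
    smaller experts by slicing the hidden dimension. Each expert reuses
    a slice of the pretrained weights (the weight-allocation idea)."""
    H = W1.shape[1]
    assert H % n_experts == 0, "hidden dim must divide evenly"
    h = H // n_experts
    return [(W1[:, i * h:(i + 1) * h], W2[i * h:(i + 1) * h, :])
            for i in range(n_experts)]

def moe_forward(x, experts, router_W, k=2):
    """Activate only the top-k input-relevant experts, combining their
    outputs with softmax-normalized router scores (sparse subnet)."""
    logits = x @ router_W                      # one score per expert
    topk = np.argsort(logits)[-k:]             # indices of top-k experts
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                               # softmax over selected experts
    return sum(wi * (np.maximum(x @ W1, 0) @ W2)
               for wi, (W1, W2) in zip(w, (experts[i] for i in topk)))

# Sanity check: because ReLU is elementwise and the hidden units are
# partitioned, summing ALL experts with unit weights reproduces the
# dense FFN exactly, so fine-tuning starts from the pretrained function.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 8))
x = rng.standard_normal(8)
experts = split_ffn(W1, W2, n_experts=4)
dense_out = np.maximum(x @ W1, 0) @ W2
recon_out = sum(np.maximum(x @ a, 0) @ b for a, b in experts)
sparse_out = moe_forward(x, experts, rng.standard_normal((8, 4)), k=2)
```

The sanity check mirrors the paper's motivation for pre-trained weight allocation: the decomposed experts collectively start as an exact re-parameterization of the dense FFN, so no accuracy is lost at the beginning of MoE-Tuning, while sparse top-k activation keeps inference compact.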
Problem

Research questions and friction points this paper is trying to address.

Exploring MoE in real-time open-vocabulary object detection
Developing Dynamic-DINO for dynamic inference via MoE-Tuning
Enhancing performance with granularity decomposition and weight allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient MoE-Tuning strategy for dynamic inference
Granularity decomposition mechanism for expert networks
Pre-trained weight allocation with router initialization
Yehao Lu
Zhejiang University
Autonomous Driving · 3D Reconstruction · Swarm Robot
Minghe Weng
College of Computer Science and Technology, Zhejiang University
Zekang Xiao
College of Computer Science and Technology, Zhejiang University
Rui Jiang
College of Computer Science and Technology, Zhejiang University
Wei Su
College of Computer Science and Technology, Zhejiang University
Guangcong Zheng
College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China.
Controllable Video/Image Synthesis · Diffusion Model · Personalization Generation · Multi-Modal · BEV
Ping Lu
ZTE
Xi Li
Polytechnic Institute, Zhejiang University