🤖 AI Summary
To address the low efficiency and poor generalization of multimodal understanding in e-commerce and short-video scenarios, this paper proposes a parameter-efficient and scalable vision-language large model architecture. The method introduces three key innovations: (1) a lightweight, high-efficiency visual encoder; (2) a dynamic cross-modal alignment mechanism enabling fine-grained image-text interaction; and (3) a domain-adaptive training strategy tailored to real-world applications. Evaluated on an e-commerce multimodal benchmark, the model achieves a state-of-the-art score of 79.66. On the OpenCompass leaderboard, it ranks second among models with fewer than 10 billion parameters, with an average score of 67.4. The source code and pre-trained weights are publicly released, broadening the practical applicability of lightweight cross-modal models in industrial settings.
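The summary above describes a common vision-language design: visual features are projected into the language model's embedding space and fused with the text sequence. The sketch below illustrates that generic connector pattern only; all dimensions, names, and the linear projector are invented for illustration and are not Valley2's actual architecture.

```python
import numpy as np

# Illustrative sketch of a generic vision-language "connector":
# visual encoder outputs are linearly projected into the LLM's
# token-embedding space and prepended to the text embeddings.
# Every dimension and name here is a made-up placeholder.

rng = np.random.default_rng(0)

N_PATCHES, VIS_DIM = 16, 64    # hypothetical visual-encoder output shape
SEQ_LEN, LLM_DIM = 8, 32       # hypothetical text sequence length / LLM width

def project_visual(vis_feats: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Map visual patch features into the LLM embedding space."""
    return vis_feats @ w + b

vis_feats = rng.standard_normal((N_PATCHES, VIS_DIM))   # encoder output
text_embeds = rng.standard_normal((SEQ_LEN, LLM_DIM))   # token embeddings

w = rng.standard_normal((VIS_DIM, LLM_DIM)) * 0.02      # projector weights
b = np.zeros(LLM_DIM)                                   # projector bias

vis_tokens = project_visual(vis_feats, w, b)            # (16, 32)
fused = np.concatenate([vis_tokens, text_embeds])       # multimodal sequence

print(fused.shape)  # (24, 32)
```

The fused sequence would then be fed to the language model as a single input; real systems typically use a learned MLP projector and interleave image tokens at their positions in the prompt rather than always prepending them.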
📝 Abstract
Recently, vision-language models have made remarkable progress, demonstrating outstanding capabilities in various tasks such as image captioning and video understanding. We introduce Valley2, a novel multimodal large language model designed to enhance performance across all domains and extend the boundaries of practical applications in e-commerce and short-video scenarios. Notably, Valley2 achieves state-of-the-art (SOTA) performance on e-commerce benchmarks, surpassing open-source models of similar size by a large margin (79.66 vs. 72.76). Additionally, Valley2 ranks second on the OpenCompass leaderboard among models with fewer than 10B parameters, with an impressive average score of 67.4. The code and model weights are open-sourced at https://github.com/bytedance/Valley.