🤖 AI Summary
Current multimodal foundation models lag behind large language models (LLMs) in architectural sophistication, limiting their cross-modal representation capability. To address this, we systematically integrate three key LLM components, namely Gaussian error gated linear units (GEGLU), Root Mean Square Layer Normalization (RMSNorm), and Rotary Position Embedding (RoPE), into both the vision transformer (ViT) encoder and the textual decoders of the CoCa framework, thereby enhancing joint image-text modeling and cross-modal alignment. The model is optimized end to end with a combined contrastive and generative (captioning) objective. Relative to a baseline CoCa variant that shares the modified decoders but keeps the original ViT encoder, the model achieves a 27.25% reduction in contrastive loss and a 3.71% reduction in perplexity during pretraining, and an average 13.66% improvement in contrastive loss after fine-tuning. This work advances multimodal model expressivity and generalization, establishing an architectural paradigm for unifying multimodal and unimodal modeling techniques.
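To make the architectural changes concrete, here is a minimal PyTorch sketch, not the authors' implementation, of a pre-norm transformer block that combines the three components named above: RMSNorm in place of LayerNorm, RoPE applied to attention queries and keys instead of absolute position embeddings, and a GEGLU feed-forward in place of a standard GELU MLP. Class names, dimensions, and hyperparameters (e.g. dim=512, n_heads=8, the 4x MLP expansion) are illustrative placeholders, and the sketch omits the cross-attention used in CoCa's multimodal decoder.

```python
# Illustrative sketch only: a transformer block with RMSNorm, RoPE, and a GEGLU MLP.
# All sizes and names are placeholders, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    """Root Mean Square Layer Normalization: rescale by the RMS, no mean-centering, no bias."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * x * rms


def apply_rope(x, base: float = 10000.0):
    """Rotary Position Embedding on a (batch, heads, seq, head_dim) tensor."""
    b, h, s, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(0, half, device=x.device, dtype=x.dtype) / half)
    angles = torch.arange(s, device=x.device, dtype=x.dtype)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()            # (seq, half), broadcast over batch/heads
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class GEGLU(nn.Module):
    """Gaussian-error gated linear unit feed-forward: (GELU(x W1) * x W2) W3."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden, bias=False)  # gate branch
        self.w2 = nn.Linear(dim, hidden, bias=False)  # value branch
        self.w3 = nn.Linear(hidden, dim, bias=False)  # output projection

    def forward(self, x):
        return self.w3(F.gelu(self.w1(x)) * self.w2(x))


class Block(nn.Module):
    """Pre-norm self-attention block: RMSNorm, RoPE on queries/keys, GEGLU MLP."""
    def __init__(self, dim: int = 512, n_heads: int = 8):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.norm1, self.norm2 = RMSNorm(dim), RMSNorm(dim)
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        self.mlp = GEGLU(dim, hidden=4 * dim)

    def forward(self, x):
        b, s, d = x.shape
        q, k, v = self.qkv(self.norm1(x)).chunk(3, dim=-1)
        q, k, v = (t.view(b, s, self.n_heads, self.head_dim).transpose(1, 2) for t in (q, k, v))
        q, k = apply_rope(q), apply_rope(k)           # position information via rotation
        attn = F.scaled_dot_product_attention(q, k, v)
        x = x + self.proj(attn.transpose(1, 2).reshape(b, s, d))
        return x + self.mlp(self.norm2(x))
```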
📝 Abstract
State-of-the-art (SOTA) image and text generation models are multimodal models that have many similarities to large language models (LLMs). Despite achieving strong performance, leading foundational multimodal model architectures frequently lag behind the architectural sophistication of contemporary LLMs. We propose GRR-CoCa, an improved SOTA Contrastive Captioner (CoCa) model that incorporates Gaussian error gated linear units, root mean square normalization, and rotary positional embedding into the textual decoders and the vision transformer (ViT) encoder. Each architectural modification has been shown to improve model performance in LLMs, but has yet to be adopted in CoCa. We benchmarked GRR-CoCa against Baseline CoCa, a model with the same modified textual decoders but with CoCa's original ViT encoder. We used standard pretraining and fine-tuning workflows to benchmark the models on contrastive and generative tasks. Our GRR-CoCa significantly outperformed Baseline CoCa on the pretraining dataset and three diverse fine-tuning datasets. Pretraining improvements were 27.25% in contrastive loss, 3.71% in perplexity, and 7.15% in CoCa loss. The average fine-tuning improvements were 13.66% in contrastive loss, 5.18% in perplexity, and 5.55% in CoCa loss. We show that GRR-CoCa's modified architecture improves performance and generalization across vision-language domains.
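The contrastive loss, perplexity, and CoCa loss quoted in the abstract can be read against the combined objective that CoCa-style models optimize: a symmetric image-text contrastive term plus a token-level captioning term, whose exponential is the reported perplexity. The sketch below is illustrative only; the function name, temperature, padding id, and the weights lambda_con and lambda_cap are placeholders, since the abstract does not state the exact weighting used.

```python
# Illustrative sketch of a CoCa-style objective: contrastive + captioning terms.
# All names and weights are placeholders, not values from the paper.
import torch
import torch.nn.functional as F


def coca_loss(image_emb, text_emb, caption_logits, caption_targets,
              temperature=0.07, lambda_con=1.0, lambda_cap=1.0, pad_id=0):
    """image_emb, text_emb: (batch, dim) pooled embeddings;
    caption_logits: (batch, seq, vocab); caption_targets: (batch, seq) token ids."""
    # Contrastive term: symmetric InfoNCE over in-batch image-text pairs.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature            # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    l_con = 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

    # Generative term: next-token cross-entropy; exp(l_cap) is the perplexity.
    l_cap = F.cross_entropy(caption_logits.flatten(0, 1), caption_targets.flatten(),
                            ignore_index=pad_id)

    return lambda_con * l_con + lambda_cap * l_cap
```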