AI Summary
To address the high computational cost and slow inference of large language models (LLMs) in Korean search relevance assessment, this paper proposes a dual-architecture small language model (SLM) collaboration paradigm: a generative SLM is integrated with an embedding-based SLM, with their relevance signals fused and consistency-modeled for lightweight, efficient evaluation. This work introduces the first architecture-diversity-driven collaborative judgment mechanism, achieving high accuracy while significantly improving efficiency. Experiments demonstrate a Cohen's Kappa of 0.646 (67% higher than state-of-the-art LLMs) with 60× faster inference and a 1.9% improvement in online nDCG@5. The method has been deployed in production, supporting over ten million daily queries.
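The summary describes pairing a generative SLM with an embedding-based SLM but does not spell out how their signals are combined. The sketch below illustrates one plausible fusion in Python; the stub scorers, the linear weighting, and the `alpha` parameter are illustrative assumptions, not QUPID's actual mechanism.

```python
# Illustrative sketch of dual-SLM relevance fusion (assumed, not QUPID's exact method).
from math import sqrt

def generative_relevance(query: str, doc: str) -> float:
    """Stand-in for a generative SLM: in practice this score would come from the
    model's probability over relevance labels. Here, token overlap in [0, 1]."""
    q_tokens, d_tokens = set(query.split()), set(doc.split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def embedding_similarity(q_vec: list[float], d_vec: list[float]) -> float:
    """Stand-in for an embedding SLM: cosine similarity of query/doc embeddings."""
    dot = sum(a * b for a, b in zip(q_vec, d_vec))
    norms = sqrt(sum(a * a for a in q_vec)) * sqrt(sum(b * b for b in d_vec))
    return dot / norms if norms else 0.0

def fused_relevance(query: str, doc: str,
                    q_vec: list[float], d_vec: list[float],
                    alpha: float = 0.5) -> float:
    """Linear fusion of the two architecturally distinct signals (alpha assumed)."""
    return (alpha * generative_relevance(query, doc)
            + (1 - alpha) * embedding_similarity(q_vec, d_vec))

# Toy usage with made-up embeddings:
score = fused_relevance("seoul weather today", "weather forecast for seoul today",
                        [0.2, 0.7, 0.1], [0.25, 0.65, 0.2])
print(f"fused relevance: {score:.3f}")
```

The point of the sketch is the structure, not the stubs: two heterogeneous scorers feed one judgment, which is what lets architectural diversity compensate for each model's individual weaknesses.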
Abstract
Large language models (LLMs) have been widely used for relevance assessment in information retrieval. However, our study demonstrates that combining two distinct small language models (SLMs) with different architectures can outperform LLMs in this task. Our approach -- QUPID -- integrates a generative SLM with an embedding-based SLM, achieving higher relevance judgment accuracy while reducing computational costs compared to state-of-the-art LLM solutions. This computational efficiency makes QUPID highly scalable for real-world search systems processing millions of queries daily. In experiments across diverse document types, our method demonstrated consistent performance improvements (Cohen's Kappa of 0.646 versus 0.387 for leading LLMs) while offering 60x faster inference times. Furthermore, when integrated into production search pipelines, QUPID improved nDCG@5 scores by 1.9%. These findings underscore how architectural diversity in model combinations can significantly enhance both search relevance and operational efficiency in information retrieval systems.
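For reference, the two evaluation metrics reported above have standard definitions; the snippet below computes them in one common formulation (the exponential-gain DCG variant is a convention, and the toy inputs are made up).

```python
# Standard definitions of the two metrics cited in the abstract.
import math
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Agreement between two label sequences, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

def ndcg_at_k(relevances: list[float], k: int = 5) -> float:
    """nDCG@k for one ranked list, using the 2^rel - 1 gain convention."""
    def dcg(rels: list[float]) -> float:
        return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal else 0.0

# Toy usage: model labels vs. human labels, then one ranked list's graded relevance.
print(cohens_kappa([2, 1, 0, 2, 1], [2, 1, 1, 2, 0]))  # 0.375
print(ndcg_at_k([3, 2, 3, 0, 1], k=5))                 # ~0.96
```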