QUPID: Quantified Understanding for Enhanced Performance, Insights, and Decisions in Korean Search Engines

📅 2025-05-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high computational cost and slow inference of large language models (LLMs) in Korean search relevance assessment, this paper proposes a collaborative paradigm built on two architecturally distinct small language models (SLMs): a generative SLM is integrated with an embedding-based SLM, and their relevance signals are fused with consistency modeling for lightweight, efficient evaluation. The work introduces an architecture-diversity-driven collaborative judgment mechanism that achieves high accuracy while significantly improving efficiency. Experiments report a Cohen's Kappa of 0.646 (67% higher than state-of-the-art LLMs), 60x faster inference, and a 1.9% improvement in online nDCG@5. The method has been deployed in production, serving over ten million queries daily.

๐Ÿ“ Abstract
Large language models (LLMs) have been widely used for relevance assessment in information retrieval. However, our study demonstrates that combining two distinct small language models (SLMs) with different architectures can outperform LLMs in this task. Our approach -- QUPID -- integrates a generative SLM with an embedding-based SLM, achieving higher relevance judgment accuracy while reducing computational costs compared to state-of-the-art LLM solutions. This computational efficiency makes QUPID highly scalable for real-world search systems processing millions of queries daily. In experiments across diverse document types, our method demonstrated consistent performance improvements (Cohen's Kappa of 0.646 versus 0.387 for leading LLMs) while offering 60x faster inference times. Furthermore, when integrated into production search pipelines, QUPID improved nDCG@5 scores by 1.9%. These findings underscore how architectural diversity in model combinations can significantly enhance both search relevance and operational efficiency in information retrieval systems.
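The abstract describes combining a generative SLM with an embedding-based SLM for relevance judgment but does not publish the fusion rule. As an illustration only, the sketch below assumes a simple weighted blend of two relevance signals: a cosine-similarity score from the embedding model and a scalar score from the generative model. All function names, vectors, and the 0.5 weight are hypothetical placeholders, not the paper's actual method.

```python
import numpy as np

def embedding_relevance(query_vec, doc_vec):
    """Cosine similarity between query and document embeddings, mapped to [0, 1]."""
    cos = np.dot(query_vec, doc_vec) / (np.linalg.norm(query_vec) * np.linalg.norm(doc_vec))
    return (cos + 1.0) / 2.0

def fuse_scores(gen_score, emb_score, weight=0.5):
    """Blend the generative SLM's relevance score with the embedding SLM's score.

    The paper fuses signals from two architecturally different SLMs; a
    convex combination is one of the simplest ways to realize that idea.
    """
    return weight * gen_score + (1.0 - weight) * emb_score

# Toy vectors standing in for real SLM embeddings of a query and a document.
q = np.array([0.2, 0.9, 0.1])
d = np.array([0.25, 0.85, 0.05])

emb = embedding_relevance(q, d)          # embedding-based signal
fused = fuse_scores(gen_score=0.8, emb_score=emb)  # combined judgment
```

The point of the sketch is only that the two signals come from models with different inductive biases, so their agreement (or disagreement) carries more information than either score alone.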
Problem

Research questions and friction points this paper is trying to address.

Combining small language models outperforms LLMs in relevance assessment
QUPID enhances search relevance with lower computational costs
Architectural diversity improves both accuracy and efficiency in search systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines generative and embedding-based small language models
Achieves higher accuracy with lower computational costs
Enhances search relevance and operational efficiency significantly
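The reported online gain is measured in nDCG@5, a standard ranking metric. For readers unfamiliar with it, here is a minimal, self-contained implementation (the relevance grades below are invented examples, not the paper's data):

```python
import math

def dcg_at_k(relevances, k=5):
    """Discounted cumulative gain over the top-k relevance grades in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=5):
    """nDCG@k: DCG of the given ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

ranked = [3, 2, 3, 0, 1]   # graded relevance of the top-5 results as ranked
perfect = [3, 3, 2, 1, 0]  # the same grades in ideal order
```

A perfectly ordered ranking scores 1.0; swapping relevant and less-relevant results near the top lowers the score, which is why even a 1.9% nDCG@5 lift is meaningful at the scale of millions of daily queries.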
Authors
Ohjoon Kwon (Naver Corporation)
Changsu Lee (Naver Corporation)
Jihye Back (Naver Corporation)
Lim Sun Suk (Naver Corporation)
Inho Kang (NAVER)
Donghyeon Jeon (Naver Corporation)