Improving Semantic Proximity in Information Retrieval through Cross-Lingual Alignment

📅 2026-04-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the prevalent "English bias" in multilingual retrieval models, wherein irrelevant English documents are prioritized over relevant ones in the query's native language within mixed-language corpora, a symptom of insufficient cross-lingual semantic alignment. The work formally defines and quantifies this issue, introduces a novel evaluation framework with tailored metrics, and proposes a sample-efficient fine-tuning strategy requiring only 2.8k examples to substantially enhance cross-lingual alignment. Experimental results demonstrate that the proposed method consistently improves cross-lingual retrieval performance across multiple state-of-the-art multilingual embedding models and effectively mitigates the English preference problem.
📝 Abstract
With the increasing accessibility and utilization of multilingual documents, Cross-Lingual Information Retrieval (CLIR) has emerged as an important research area. Conventionally, CLIR tasks have been conducted under settings where the language of the documents differs from that of the queries, and typically the documents are composed in a single coherent language. In this paper, we highlight that in such a setting, cross-lingual alignment capability may not be evaluated adequately. Specifically, we observe that, in a document pool where English documents coexist with another language, most multilingual retrievers tend to prioritize unrelated English documents over related documents written in the same language as the query. To rigorously analyze and quantify this phenomenon, we introduce various scenarios and metrics designed to evaluate the cross-lingual alignment performance of multilingual retrieval models. Furthermore, to improve cross-lingual performance under these challenging conditions, we propose a novel training strategy aimed at enhancing cross-lingual alignment. Using only a small dataset consisting of 2.8k samples, our method significantly improves cross-lingual retrieval performance while simultaneously mitigating the English inclination problem. Extensive analyses demonstrate that the proposed method substantially enhances the cross-lingual alignment capabilities of most multilingual embedding models.
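The English-inclination phenomenon described above can be made concrete with a simple measurement. The sketch below is an illustration, not the paper's exact metric: given query embeddings, their relevant same-language document embeddings, and per-query pools of unrelated English distractor embeddings, it computes the fraction of queries for which some English distractor outscores the relevant document under cosine similarity (the function name and setup are hypothetical).

```python
# Hypothetical illustration of quantifying "English inclination" in a
# mixed-language document pool (not the paper's exact metric).
import numpy as np

def english_inclination_rate(query_embs, relevant_embs, english_distractor_embs):
    """Fraction of queries whose best unrelated English distractor scores
    higher (cosine similarity) than the relevant same-language document."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    inclined = 0
    for q, rel, distractors in zip(query_embs, relevant_embs, english_distractor_embs):
        best_distractor = max(cos(q, d) for d in distractors)
        if best_distractor > cos(q, rel):
            inclined += 1  # an unrelated English doc outranks the relevant one
    return inclined / len(query_embs)

# Toy 2-D embeddings: query 0 is well aligned with its relevant document,
# while query 1 is closer to an English distractor than to its relevant doc.
queries = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
relevant = [np.array([0.9, 0.1]), np.array([0.2, 0.3])]
distractors = [[np.array([0.0, 1.0])], [np.array([0.1, 0.9])]]
print(english_inclination_rate(queries, relevant, distractors))  # → 0.5
```

A lower rate indicates better cross-lingual alignment; a well-aligned retriever should rank the relevant same-language document above unrelated English documents regardless of the pool's language mix.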
Problem

Research questions and friction points this paper is trying to address.

Cross-Lingual Information Retrieval
Semantic Proximity
Multilingual Retrieval
English Inclination
Cross-Lingual Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Lingual Alignment
Multilingual Information Retrieval
English Bias Mitigation
CLIR Evaluation
Contrastive Training
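The "Contrastive Training" contribution can be sketched with a standard in-batch InfoNCE objective: each query's positive is its paired same-language document, and the other documents in the batch (including English distractors) serve as negatives. This is a minimal NumPy sketch under that assumption; the paper's actual loss, sampling scheme, and temperature may differ.

```python
# Minimal in-batch contrastive (InfoNCE-style) loss sketch for cross-lingual
# alignment training. Assumption: positives sit on the diagonal of the
# query-document similarity matrix; everything else is a negative.
import numpy as np

def info_nce_loss(query_embs, doc_embs, temperature=0.05):
    """Mean negative log-likelihood of each query matching its paired doc."""
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    logits = q @ d.T / temperature                 # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives on the diagonal

# Perfectly aligned query/document pairs yield a near-zero loss; swapping the
# pairings (so each query's positive is another query's document) inflates it.
batch_q = np.array([[1.0, 0.0], [0.0, 1.0]])
batch_d = np.array([[1.0, 0.0], [0.0, 1.0]])
print(info_nce_loss(batch_q, batch_d))  # near-zero for aligned pairs
```

Minimizing this loss pulls queries toward their paired documents and pushes them away from in-batch negatives, which is how a small set of cross-lingual pairs (2.8k in the paper) can reshape the embedding space against the English preference.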
🔎 Similar Papers
No similar papers found.