Improving Cross-lingual Representation for Semantic Retrieval with Code-switching

📅 2024-03-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of task-aware signals in pretrained models for cross-lingual semantic retrieval, this paper proposes a code-switching–based alternative paradigm for cross-lingual continual pretraining, specifically tailored to multilingual intelligent customer service scenarios. Our method explicitly models code-switching as a structured prior for retrieval tasks and constructs multilingual code-switched data to guide the model—during continual pretraining—to jointly learn cross-lingual alignment and semantic matching representations. Extensive experiments across 20+ languages, three proprietary business corpora, and four public benchmarks demonstrate that our approach achieves significant improvements over existing state-of-the-art methods on both cross-lingual semantic retrieval and semantic textual similarity tasks.

📝 Abstract
Semantic Retrieval (SR) has become an indispensable part of FAQ systems in task-oriented question-answering (QA) dialogue scenarios. Demand for cross-lingual smart customer-service systems on e-commerce platforms and in other business settings has been increasing recently. Most previous studies exploit cross-lingual pre-trained models (PTMs) for multi-lingual knowledge retrieval directly, while others also apply continual pre-training before fine-tuning PTMs on downstream tasks. However, whichever scheme is used, prior work fails to inform PTMs of features of the downstream task, i.e., it trains PTMs without providing any signals related to SR. To this end, we propose an alternative cross-lingual PTM for SR via code-switching. We are the first to utilize code-switching for cross-lingual SR. In addition, we introduce novel code-switched continual pre-training instead of directly applying PTMs to SR tasks. Experimental results show that our approach consistently outperforms previous SOTA methods on SR and semantic textual similarity (STS) tasks across three business corpora and four open datasets in 20+ languages.
Problem

Research questions and friction points this paper is trying to address.

Enhancing cross-lingual semantic retrieval performance
Addressing lack of task-specific signals in pre-trained models
Utilizing code-switching for improved multilingual representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes code-switching for cross-lingual semantic retrieval
Introduces code-switched continual pre-training method
Outperforms SOTA methods on multiple datasets
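The core data-construction idea, replacing some source-language words with their target-language translations via a bilingual lexicon to produce code-switched pretraining examples, can be sketched as below. This is a minimal illustration, not the paper's actual pipeline; the toy English–Spanish lexicon, the function name `code_switch`, and the substitution ratio are all assumptions for the example.

```python
import random

# Toy bilingual lexicon for illustration only; a real system would use
# large alignment-derived or MUSE-style dictionaries (hypothetical data).
EN_TO_ES = {
    "order": "pedido",
    "refund": "reembolso",
    "delivery": "entrega",
    "account": "cuenta",
}

def code_switch(sentence: str, lexicon: dict, ratio: float = 0.5, seed: int = 0) -> str:
    """Replace a fraction of translatable tokens with their target-language
    counterparts, yielding a code-switched training example."""
    rng = random.Random(seed)  # seeded for reproducible corpus construction
    out = []
    for tok in sentence.split():
        key = tok.lower().strip(".,?!")
        # Substitute only dictionary words, each with probability `ratio`
        if key in lexicon and rng.random() < ratio:
            out.append(lexicon[key])
        else:
            out.append(tok)
    return " ".join(out)

print(code_switch("Where is my order and when is the delivery", EN_TO_ES, ratio=1.0))
```

Pairs of an original sentence and its code-switched variant can then serve as alignment signal during continual pretraining, which is the kind of task-aware supervision the paper argues plain PTM training lacks.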
M. Maimaiti
Alibaba DAMO Academy
Yuanhang Zheng
College of Computer Science, Sichuan University
Ji Zhang
Alibaba DAMO Academy
Fei Huang
Alibaba DAMO Academy
Yue Zhang
Department of Computer Science and Technology, Westlake University, Hangzhou, China
Wenpei Luo
Department of Computer Science and Technology, Dalian University of Technology, Dalian
Kaiyu Huang
Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, China