Comparative Analysis of Listwise Reranking with Large Language Models in Limited-Resource Language Contexts

📅 2024-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of listwise re-ranking for low-resource African languages, where labeled data and domain-specific ranking models are scarce. Method: We propose a lightweight LLM-based re-ranking framework that leverages listwise prompt engineering and cross-lingual transfer inference, systematically evaluating models (including RankGPT-3.5 and Rank4o-mini) under standardized IR metrics (nDCG@10, MRR@100). Contribution/Results: All LLM rankers significantly outperform the BM25-DT baseline, achieving 32–57% improvements in both nDCG@10 and MRR@100. These results demonstrate strong zero-shot generalization to under-resourced languages and highlight the framework's viability for low-cost, scalable deployment. To our knowledge, this is the first systematic evaluation of LLMs for listwise re-ranking in low-resource African languages, establishing a resource-efficient re-ranking paradigm for information retrieval in data-scarce linguistic settings.

📝 Abstract
Large Language Models (LLMs) have demonstrated significant effectiveness across various NLP tasks, including text ranking. This study assesses the performance of LLMs in listwise reranking for limited-resource African languages. We compare the proprietary models RankGPT-3.5, Rank4o-mini, RankGPT-o1-mini, and RankClaude-sonnet in cross-lingual contexts. Results indicate that these LLMs significantly outperform traditional baseline methods such as BM25-DT on most evaluation metrics, particularly nDCG@10 and MRR@100. These findings highlight the potential of LLMs for enhancing reranking tasks in low-resource languages and offer insights into cost-effective solutions.
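The paper reports results in nDCG@10 and MRR@100 but does not include code; as a rough illustration of what these metrics measure, here is a minimal sketch (function names and the list-of-relevance-labels input format are our own, not from the paper):

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain over the top-k graded relevance labels."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k=10):
    """nDCG@k: DCG of the ranking divided by DCG of the ideal ordering."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(rels, k=100):
    """MRR@k: reciprocal rank of the first relevant document in the top k
    (averaged over queries in practice; shown here for a single query)."""
    for i, rel in enumerate(rels[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0
```

For example, a ranking whose first relevant document sits at position 2 scores `mrr_at_k([0, 1, 0]) == 0.5`, while a ranking already in ideal order scores `ndcg_at_k([3, 2, 1]) == 1.0`.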
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
African Languages
Text Ranking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Low-resource African Languages
Ranking Performance
Yanxin Shen
Unknown affiliation
Lun Wang
Google DeepMind
LLM post-training, Multimodal LLM, LLM safety
Chuanqi Shi
University of California San Diego, California, USA
Shaoshuai Du
University of Amsterdam, Amsterdam, Netherlands
Yiyi Tao
Peking University
Machine Learning, Artificial Intelligence, Trustworthy AI
Yixian Shen
University of Amsterdam
Efficient DNN, Computer Architecture, System Optimization
Hang Zhang
University of California San Diego, California, USA