WebFAQ 2.0: A Multilingual QA Dataset with Mined Hard Negatives for Dense Retrieval

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of current dense retrieval models in multilingual and cross-lingual settings, which stem from the scarcity of large-scale, high-quality question-answer data and hard negative examples. To this end, we introduce WebFAQ 2.0—the largest multilingual FAQ dataset to date—spanning 108 languages with 198 million question-answer pairs, enriched through web crawling to enhance contextual diversity. Additionally, we release 1.25 million hard negatives across 20 languages, each accompanied by cross-encoder scores over 200 negative candidates. Our pipeline employs a two-stage retrieval approach to mine high-quality negatives and integrates contrastive learning (via MultipleNegativesRanking loss) with knowledge distillation (using MarginMSE loss). The dataset supports continuous updates and significantly advances research in multilingual dense retrieval.
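The two-stage mining pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy corpus, 2-d embeddings, and function names are assumptions, and stage 2 (cross-encoder scoring of the candidates) is indicated only by a comment.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mine_candidates(query_vec, corpus, answer_id, top_k=3):
    """Stage 1: rank the corpus by bi-encoder similarity to the query,
    drop the true answer, and keep the top hits as negative candidates."""
    ranked = sorted(corpus.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked if doc_id != answer_id][:top_k]

# Toy 2-d embeddings: "hard" lies close to the query, "easy" does not.
corpus = {
    "pos":  [1.0, 0.0],   # the true answer
    "hard": [0.9, 0.1],   # topically close -> a useful hard negative
    "easy": [0.0, 1.0],   # unrelated -> an easy negative
}
candidates = mine_candidates([1.0, 0.05], corpus, answer_id="pos", top_k=2)
# Stage 2 (not shown) would re-score each (query, candidate) pair with a
# cross-encoder, producing the per-negative scores released with the dataset.
```

The point of the second stage is that cheap bi-encoder retrieval surfaces *plausible* negatives, while the slower cross-encoder assigns calibrated scores that can later serve as distillation targets.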

📝 Abstract
We introduce WebFAQ 2.0, a new version of the WebFAQ dataset, containing 198 million FAQ-based natural question-answer pairs across 108 languages. Compared to the previous version, it significantly expands multilingual coverage and the number of bilingual aligned QA pairs to over 14.3M, making it the largest FAQ-based resource. Unlike the original release, WebFAQ 2.0 uses a novel data collection strategy that directly crawls and extracts relevant web content, resulting in a substantially more diverse and multilingual dataset with richer context through page titles and descriptions. In response to community feedback, we also release a hard negatives dataset for training dense retrievers, with 1.25M queries across 20 languages. These hard negatives were mined using a two-stage retrieval pipeline and include cross-encoder scores for 200 negatives per query. We further show how this resource enables two primary fine-tuning strategies for dense retrievers: Contrastive Learning with MultipleNegativesRanking loss, and Knowledge Distillation with MarginMSE loss. WebFAQ 2.0 is not a static resource but part of a long-term effort. Since late 2025, structured FAQs are being regularly released through the Open Web Index, enabling continuous expansion and refinement. We publish the datasets and training scripts to facilitate further research in multilingual and cross-lingual IR. The dataset itself and all related resources are publicly available on GitHub and HuggingFace.
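The two fine-tuning objectives named in the abstract can be illustrated with a small numeric sketch. This is pure Python without an ML framework, so it shows only the loss arithmetic, not gradient training; the similarity scale and toy scores are assumptions, not the paper's settings.

```python
import math

def mnr_loss(sims, scale=20.0):
    """MultipleNegativesRanking: for each query i, passage i is the positive
    and every other in-batch passage is a negative; the loss is cross-entropy
    over the scaled similarity row."""
    per_query = []
    for i, row in enumerate(sims):
        logits = [s * scale for s in row]
        log_z = math.log(sum(math.exp(l) for l in logits))
        per_query.append(log_z - logits[i])
    return sum(per_query) / len(per_query)

def margin_mse_loss(student_pos, student_neg, teacher_pos, teacher_neg):
    """MarginMSE: the student (dense retriever) is trained to reproduce the
    *margin* (positive score minus negative score) assigned by the
    cross-encoder teacher, rather than the absolute scores."""
    rows = zip(student_pos, student_neg, teacher_pos, teacher_neg)
    sq_errors = [((sp - sn) - (tp - tn)) ** 2 for sp, sn, tp, tn in rows]
    return sum(sq_errors) / len(sq_errors)

# Contrastive: near-diagonal similarities -> loss close to zero.
contrastive = mnr_loss([[1.0, 0.1], [0.05, 1.0]])
# Distillation: student margin 0.8 vs teacher margin 0.5 -> squared error 0.09.
distill = margin_mse_loss([0.9], [0.1], [0.8], [0.3])
```

This is why the released cross-encoder scores over 200 negatives per query matter: MarginMSE needs teacher scores for both the positive and the negative, whereas MultipleNegativesRanking only needs the mined negatives themselves.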
Problem

Research questions and friction points this paper is trying to address.

multilingual QA
dense retrieval
hard negatives
cross-lingual IR
FAQ dataset
Innovation

Methods, ideas, or system contributions that make the work stand out.

hard negatives
dense retrieval
multilingual QA
contrastive learning
knowledge distillation
Michael Dinzinger
University of Passau

Laura Caspari
Chair of Data Science, University of Passau
Dense Retrieval · Large Language Models · Machine Learning

Ali Salman
University of Passau

Irvin Topi
University of Passau

Jelena Mitrović
University of Passau
Natural Language Processing · Artificial Intelligence · Computational Rhetoric · Legal NLP

Michael Granitzer
University of Passau, and IT:U Austria