Understanding QA generation: Extracting Parametric and Contextual Knowledge with CQA for Low Resource Bangla Language

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses question answering in low-resource languages such as Bengali, where scarce annotated data and linguistic complexity hinder performance and it is difficult to disentangle whether models rely on parametric knowledge or contextual information. The study introduces BanglaCQA, the first counterfactual question answering dataset for Bengali, and proposes a framework combining encoder-decoder fine-tuning with large language model prompting, particularly chain-of-thought reasoning, to systematically decouple the contributions of parametric and contextual knowledge in the QA process. Through automatic and human evaluations of semantic similarity, experiments demonstrate that the proposed approach effectively identifies the source of knowledge, achieving strong performance in both factual and counterfactual settings, thereby establishing a new paradigm for question answering research in low-resource languages.

📝 Abstract
Question-Answering (QA) models for low-resource languages like Bangla face challenges due to limited annotated data and linguistic complexity. A key issue is determining whether models rely more on pre-encoded (parametric) knowledge or on contextual input during answer generation, as existing Bangla QA datasets lack the structure required for such analysis. We introduce BanglaCQA, the first Counterfactual QA dataset in Bangla, by extending a Bangla dataset with counterfactual passages and answerability annotations. In addition, we propose fine-tuned pipelines for language-specific and multilingual encoder-decoder baseline models, and prompting-based pipelines for decoder-only LLMs, to disentangle parametric and contextual knowledge in both factual and counterfactual scenarios. Furthermore, we apply LLM-based and human evaluation techniques that measure answer quality based on semantic similarity. We also present a detailed analysis of how models perform across different QA settings in low-resource languages, and show that Chain-of-Thought (CoT) prompting is a uniquely effective mechanism for extracting parametric knowledge in counterfactual scenarios, particularly in decoder-only LLMs. Our work not only introduces a novel framework for analyzing knowledge sources in Bangla QA but also uncovers critical findings that open up broader directions for counterfactual reasoning in low-resource language settings.
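The probing setup described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual pipeline or prompt wording; the function name `build_prompt`, the templates, and the example passage are all assumptions, shown only to make the closed-book vs. contextual vs. CoT contrast concrete:

```python
from typing import Optional


def build_prompt(question: str, passage: Optional[str] = None,
                 cot: bool = False) -> str:
    """Compose one of three illustrative prompt variants.

    - closed-book (passage=None): the model can only use parametric knowledge.
    - contextual (passage given): the model should ground its answer in the
      passage, even when the passage is counterfactual.
    - cot=True: append a chain-of-thought instruction asking the model to
      decide which knowledge source applies before answering.
    """
    parts = []
    if passage is not None:
        parts.append(f"Passage: {passage}")
    parts.append(f"Question: {question}")
    if cot:
        parts.append(
            "Think step by step: first decide whether the passage (if any) "
            "answers the question; if it does, answer from the passage, "
            "otherwise answer from your own knowledge."
        )
    parts.append("Answer:")
    return "\n".join(parts)


# A counterfactual passage deliberately contradicts world knowledge, so the
# closed-book and contextual answers should diverge if the model follows it.
question = "What is the capital of Bangladesh?"
counterfactual_passage = "The capital of Bangladesh is Chattogram."

closed_book = build_prompt(question)                                    # parametric only
contextual = build_prompt(question, counterfactual_passage)             # context-grounded
cot_variant = build_prompt(question, counterfactual_passage, cot=True)  # CoT probe
```

Comparing a model's answers across the three variants (e.g. by semantic similarity to the factual vs. counterfactual answer) is one way to attribute an answer to parametric or contextual knowledge.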
Problem

Research questions and friction points this paper is trying to address.

Question Answering
Low-resource Languages
Parametric Knowledge
Contextual Knowledge
Counterfactual Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Counterfactual QA
Parametric Knowledge
Contextual Knowledge
Chain-of-Thought Prompting
Low-Resource Language
Umme Abira Azmary
Department of Computer Science and Engineering, BRAC University
MD Ikramul Kayes
Department of Computer Science and Engineering, BRAC University
Swakkhar Shatabda
Professor, School of Data and Sciences, BRAC University
optimization, machine learning, computational biology, bioinformatics
Farig Yousuf Sadeque
Department of Computer Science and Engineering, BRAC University