ObfusQAte: A Proposed Framework to Evaluate LLM Robustness on Obfuscated Factual Question Answering

📅 2025-08-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of robustness evaluation for large language models (LLMs) on fact-based question answering under semantic obfuscation. We propose ObfusQAte, a novel evaluation framework, and ObfusQA, a dedicated benchmark dataset. ObfusQAte is the first to systematically construct a multi-level obfuscation suite covering three semantic perturbation types: named-entity indirection, distractor-induced confusion, and contextual overload. It employs a fine-grained, rule- and template-based perturbation generation methodology, including entity substitution, irrelevant-information injection, and contextual redundancy. Extensive experiments demonstrate that mainstream LLMs suffer substantial performance degradation and significantly increased hallucination rates across all obfuscation categories. This work establishes the first systematic, reproducible benchmark and methodology for assessing LLM robustness under semantic obfuscation, revealing critical vulnerabilities in handling complex linguistic variations.
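The three rule/template perturbation operations described above can be sketched as simple string transformations. Note that the function names, templates, entity descriptions, and distractor sentences below are illustrative assumptions for exposition, not the paper's actual rules or released code:

```python
# Hypothetical sketch of ObfusQAte-style rule/template perturbations.
# All templates and example texts here are assumptions, not the paper's rules.

def named_entity_indirection(question: str, entity: str, description: str) -> str:
    """Entity substitution: replace a named entity with an indirect paraphrase."""
    return question.replace(entity, description)

def distractor_indirection(question: str, distractors: list[str]) -> str:
    """Distractor-induced confusion: prepend irrelevant but plausible statements."""
    return " ".join(distractors) + " " + question

def contextual_overload(question: str, context: str, repeats: int = 3) -> str:
    """Contextual redundancy: pad the question with repeated, redundant context."""
    return " ".join([context] * repeats) + " " + question

base = "Who wrote Hamlet?"
q1 = named_entity_indirection(base, "Hamlet",
                              "the tragedy about the Prince of Denmark")
q2 = distractor_indirection(base, ["Macbeth premiered in 1606.",
                                   "The Globe Theatre burned down in 1613."])
q3 = contextual_overload(base, "Elizabethan drama flourished in London.")
```

Each function keeps the underlying answer ("Shakespeare") unchanged while perturbing the surface form, which is what lets the benchmark attribute any accuracy drop to the obfuscation itself rather than to a change in the target fact.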

📝 Abstract
The rapid proliferation of Large Language Models (LLMs) has significantly contributed to the development of equitable AI systems capable of factual question answering (QA). However, no known study tests LLMs' robustness when presented with obfuscated versions of questions. To systematically evaluate these limitations, we propose a novel technique, ObfusQAte, and, leveraging it, introduce ObfusQA, a comprehensive, first-of-its-kind framework with multi-tiered obfuscation levels designed to examine LLM capabilities across three distinct dimensions: (i) Named-Entity Indirection, (ii) Distractor Indirection, and (iii) Contextual Overload. By capturing these fine-grained distinctions in language, ObfusQA provides a comprehensive benchmark for evaluating LLM robustness and adaptability. Our study observes that LLMs tend to fail or generate hallucinated responses when confronted with these increasingly nuanced variations. To foster research in this direction, we make ObfusQAte publicly available.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLM robustness on obfuscated factual QA
Tests LLM performance under multi-tiered obfuscation
Measures LLM adaptability to nuanced language variations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-tiered obfuscation framework for LLMs
Tests robustness via Named-Entity Indirection
Evaluates LLMs with Distractor Indirection