Persona-Augmented Benchmarking: Evaluating LLMs Across Diverse Writing Styles

πŸ“… 2025-07-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing LLM evaluation benchmarks exhibit stylistic homogeneity, failing to capture natural linguistic diversity and thus compromising model robustness on non-canonical inputs. To address this, we propose a role-based prompt rewriting method that systematically generates semantically preserved yet stylistically diverse prompts, thereby enhancing expressive variability in benchmarks. Extensive experiments reveal that specific writing styles consistently induce significant and reproducible performance gains or degradation across diverse LLMsβ€”a phenomenon we term β€œstyle-induced performance modulation.” Leveraging this insight, we design a scalable benchmark augmentation framework that substantially improves external validity of evaluations in real-world settings. This work constitutes the first systematic investigation into the impact of writing style on LLM evaluation outcomes, establishing a novel paradigm for developing more robust and ecologically valid language model assessment methodologies, accompanied by fully reproducible technical procedures.

Technology Category

Application Category

πŸ“ Abstract
Current benchmarks for evaluating Large Language Models (LLMs) often do not exhibit enough writing style diversity, with many adhering primarily to standardized conventions. Such benchmarks do not fully capture the rich variety of communication patterns exhibited by humans. Thus, it is possible that LLMs, which are optimized on these benchmarks, may demonstrate brittle performance when faced with "non-standard" input. In this work, we test this hypothesis by rewriting evaluation prompts using persona-based LLM prompting, a low-cost method to emulate diverse writing styles. Our results show that, even with identical semantic content, variations in writing style and prompt formatting significantly impact the estimated performance of the LLM under evaluation. Notably, we identify distinct writing styles that consistently trigger either low or high performance across a range of models and tasks, irrespective of model family, size, and recency. Our work offers a scalable approach to augment existing benchmarks, improving the external validity of the assessments they provide for measuring LLM performance across linguistic variations.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs lacks diverse writing styles in benchmarks.
LLMs may underperform with non-standard input styles.
Persona-based prompting reveals performance variations across styles.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Persona-based LLM prompting for diversity
Rewriting prompts to emulate writing styles
Scalable benchmark augmentation method
πŸ”Ž Similar Papers
No similar papers found.