🤖 AI Summary
Large language models (LLMs) are highly sensitive to non-semantic prompt perturbations—such as punctuation and formatting changes—which undermines their stability in deployment. This work presents the first systematic, unified evaluation of five categories of prompt robustness methods—spanning both fine-tuning and in-context learning paradigms—across eight open-source models (e.g., Llama, Qwen, Gemma) and state-of-the-art closed-source models (e.g., GPT-4.1, DeepSeek-V3), benchmarked on 52 diverse tasks. We introduce a multi-task evaluation framework built on natural instructions that jointly assesses robustness to formatting perturbations, generalization under distribution shift, and cross-model consistency. The large-scale empirical analysis identifies where each method helps and where it breaks down, providing reproducible, transferable evidence for improving real-world LLM reliability. The code is publicly available.
📝 Abstract
Large Language Models (LLMs) are highly sensitive to subtle, non-semantic variations in prompt phrasing and formatting. In this work, we present the first systematic evaluation of 5 methods for improving prompt robustness within a unified experimental framework. We benchmark these techniques on 8 models from the Llama, Qwen, and Gemma families across 52 tasks from the Natural Instructions dataset. Our evaluation covers robustness methods from both the fine-tuning and in-context learning paradigms, and tests their generalization against multiple types of distribution shift. Finally, we extend our analysis to GPT-4.1 and DeepSeek-V3 to assess frontier models' current robustness to format perturbations. Our findings offer actionable insights into the relative effectiveness of these robustness methods, enabling practitioners to make informed decisions when aiming for stable and reliable LLM performance in real-world applications. Code: https://github.com/AIRI-Institute/when-punctuation-matters.
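To make the notion of "non-semantic format perturbation" concrete, here is a minimal illustrative sketch (our own, not the paper's released code) of how the same prompt content can be rendered under different separators and field casings while the task semantics stay unchanged. All function and variable names here are hypothetical.

```python
import random

# Semantically neutral formatting choices: the task content is identical,
# only separators and field-name casing differ between renderings.
SEPARATORS = [": ", " - ", ":\n", " :: "]
FIELD_CASES = [str.lower, str.upper, str.title]

def perturb_field(field: str, text: str, rng: random.Random) -> str:
    """Render one prompt field with a randomly sampled (non-semantic) format."""
    case = rng.choice(FIELD_CASES)
    sep = rng.choice(SEPARATORS)
    return f"{case(field)}{sep}{text}"

def build_prompt(instruction: str, question: str, seed: int = 0) -> str:
    """Build a prompt whose wording is fixed but whose formatting varies by seed."""
    rng = random.Random(seed)
    parts = [
        perturb_field("Instruction", instruction, rng),
        perturb_field("Question", question, rng),
        # Trailing answer cue, also with perturbed casing/punctuation.
        rng.choice(FIELD_CASES)("Answer") + rng.choice(SEPARATORS).rstrip(),
    ]
    return "\n".join(parts)
```

Evaluating a model on many seeds of such renderings, and measuring the spread in accuracy, is the kind of robustness probe the paper's benchmark formalizes at scale.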