Revisiting Prompt Sensitivity in Large Language Models for Text Classification: The Role of Prompt Underspecification

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether underspecified prompts are a key factor underlying the high sensitivity of large language models (LLMs) in text classification tasks, where minor prompt variations can significantly affect performance. Through comprehensive analyses—including performance evaluation, logit inspection, and linear probing—we demonstrate that underspecified prompts substantially increase output variance, with these effects predominantly localized in the model’s final layers. In contrast, prompts containing explicit instructions consistently reduce variance and elevate logit scores for relevant tokens. Our findings underscore the critical role of prompt specificity in enhancing model stability and provide both theoretical grounding and practical guidance for effective prompt engineering in LLM-based classification systems.
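The logit inspection described above can be sketched in a few lines. The snippet below is a minimal, synthetic illustration (not the paper's actual setup): the vocabulary size, the label-token ids, and the logit vectors are invented stand-ins, with the instruction prompt assumed to shift logit mass onto the label tokens, as the summary reports.

```python
import numpy as np

def label_token_mass(logits: np.ndarray, label_ids: list[int]) -> float:
    """Fraction of next-token probability mass on the label tokens (softmax)."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return float(probs[label_ids].sum())

rng = np.random.default_rng(0)
vocab = 1000
label_ids = [7, 42]  # hypothetical token ids for "positive" / "negative"

# Synthetic stand-ins: the instruction prompt is modeled as elevating the
# logits of the relevant label tokens; the underspecified prompt is not.
underspec = rng.normal(0.0, 1.0, vocab)
instructed = underspec.copy()
instructed[label_ids] += 5.0

print(label_token_mass(underspec, label_ids))
print(label_token_mass(instructed, label_ids))
```

In a real analysis the two logit vectors would come from the model's next-token distribution under each prompt variant; the comparison of label-token mass is the same.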

📝 Abstract
Large language models (LLMs) are widely used as zero-shot and few-shot classifiers, where task behaviour is largely controlled through prompting. A growing number of works have observed that LLMs are sensitive to prompt variations, with small changes leading to large changes in performance. However, in many cases, the investigation of sensitivity is performed using underspecified prompts that provide minimal task instructions and weakly constrain the model's output space. In this work, we argue that a significant portion of the observed prompt sensitivity can be attributed to prompt underspecification. We systematically study and compare the sensitivity of underspecified prompts and prompts that provide specific instructions. Utilising performance analysis, logit analysis, and linear probing, we find that underspecified prompts exhibit higher performance variance and lower logit values for relevant tokens, while instruction prompts suffer less from such problems. However, linear probing analysis suggests that the effects of prompt underspecification have only a marginal impact on the internal LLM representations, instead emerging in the final layers. Overall, our findings highlight the need for more rigour when investigating and mitigating prompt sensitivity.
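The linear-probing analysis mentioned in the abstract can be sketched as follows. This is a schematic, numpy-only illustration, not the paper's experiment: random vectors stand in for per-layer hidden states, and the class signal is injected with equal strength at every layer, mimicking the reported finding that intermediate representations are largely unaffected by underspecification.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_layers = 400, 32, 4
y = rng.integers(0, 2, n)        # binary class labels
y_signed = 2.0 * y - 1.0         # {-1, +1} targets for a linear probe

def probe_accuracy(x: np.ndarray) -> float:
    """Fit a least-squares linear probe on half the data, score the rest."""
    x_tr, x_te = x[: n // 2], x[n // 2 :]
    t_tr, t_te = y_signed[: n // 2], y_signed[n // 2 :]
    w, *_ = np.linalg.lstsq(x_tr, t_tr, rcond=None)
    return float(np.mean(np.sign(x_te @ w) == t_te))

# Synthetic stand-ins for per-layer hidden states, one array per layer.
accs = []
for _ in range(n_layers):
    x = rng.normal(0.0, 1.0, (n, d))
    x[:, 0] += 2.0 * y_signed    # class-dependent direction in each layer
    accs.append(probe_accuracy(x))

print([round(a, 2) for a in accs])
```

In the actual study, each `x` would be the LLM's hidden states at one layer for prompts of each kind; comparable probe accuracy across layers is what supports the claim that underspecification effects emerge only near the output.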
Problem

Research questions and friction points this paper is trying to address.

prompt sensitivity
large language models
prompt underspecification
text classification
zero-shot classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

prompt underspecification
prompt sensitivity
large language models
instruction prompting
linear probing
Branislav Pecher
Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Michal Spiegel
Masaryk University, Brno; Kempelen Institute of Intelligent Technologies, Bratislava
computer science · artificial intelligence · natural language processing · large language models
Róbert Belanec
Faculty of Information Technology, Brno University of Technology, Brno, Czechia; Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Ján Cegin
Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia