🤖 AI Summary
This study investigates whether underspecified prompts are a key factor underlying the high sensitivity of large language models (LLMs) in text classification tasks, where minor prompt variations can significantly affect performance. Through comprehensive analyses—including performance evaluation, logit inspection, and linear probing—we demonstrate that underspecified prompts substantially increase output variance, with these effects predominantly localized in the model’s final layers. In contrast, prompts containing explicit instructions consistently reduce variance and elevate logit scores for relevant tokens. Our findings underscore the critical role of prompt specificity in enhancing model stability and provide both theoretical grounding and practical guidance for effective prompt engineering in LLM-based classification systems.
📝 Abstract
Large language models (LLMs) are widely used as zero-shot and few-shot classifiers, where task behaviour is largely controlled through prompting. A growing number of works have observed that LLMs are sensitive to prompt variations, with small changes producing large swings in performance. However, in many cases, the investigation of sensitivity is performed using underspecified prompts that provide minimal task instructions and weakly constrain the model's output space. In this work, we argue that a significant portion of the observed prompt sensitivity can be attributed to prompt underspecification. We systematically study and compare the sensitivity of underspecified prompts and prompts that provide specific instructions. Utilising performance analysis, logit analysis, and linear probing, we find that underspecified prompts exhibit higher performance variance and lower logit values for relevant tokens, while instruction prompts suffer less from these problems. However, linear probing analysis suggests that the effects of prompt underspecification have only a marginal impact on the internal LLM representations, instead emerging in the final layers. Overall, our findings highlight the need for more rigour when investigating and mitigating prompt sensitivity.
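As a toy illustration of the kind of sensitivity comparison described above (not the paper's actual code or data), one might summarise prompt sensitivity as the variance of task accuracy across paraphrases of a prompt, and contrast underspecified versus instruction-style prompts. All accuracy figures below are made-up placeholders:

```python
import statistics

# Hypothetical accuracies of one model on the same classification task
# under several paraphrases of each prompt style (illustrative numbers,
# NOT results from the paper).
underspecified_accs = [0.52, 0.71, 0.60, 0.48, 0.66]
instruction_accs = [0.78, 0.80, 0.77, 0.81, 0.79]

def sensitivity(accs):
    """Summarise prompt sensitivity as the spread of accuracy
    across paraphrases of a prompt template."""
    return {
        "mean": statistics.mean(accs),
        "variance": statistics.pvariance(accs),
        "range": max(accs) - min(accs),
    }

under = sensitivity(underspecified_accs)
instr = sensitivity(instruction_accs)

# The paper's claim, in this toy form: underspecified prompts show
# higher variance across paraphrases than instruction prompts.
assert under["variance"] > instr["variance"]
assert under["range"] > instr["range"]
```

In a real replication, each accuracy would come from evaluating the model on a fixed test set with one prompt paraphrase, so the only varying factor is the prompt wording itself.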