🤖 AI Summary
This study addresses the lack of systematic evaluation of implementation choices for large language models (LLMs) in political text annotation and the unclear mechanisms underlying their effects. The authors propose a validation-first evaluation framework, conducting controlled multi-model experiments and ablation analyses under unified hardware, quantization settings, and prompt templates to systematically assess six open-source LLMs across four political science annotation tasks. The findings reveal no universally optimal model or prompting strategy, an inconsistent relationship between model scale and performance or cost-efficiency, and unreliable or even detrimental effects of certain widely adopted prompting techniques. Moreover, substantial efficiency differences emerge across model families. These results challenge prevailing empirical "best practices" in LLM-based annotation; the work also releases an open-source toolchain to support reproducible research.
📄 Abstract
Political scientists are rapidly adopting large language models (LLMs) for text annotation, yet the sensitivity of annotation results to implementation choices remains poorly understood. Most evaluations test a single model or configuration; how model choice, model size, learning approach, and prompt style interact, and whether popular "best practices" survive controlled comparison, are largely unexplored. We present a controlled evaluation of these pipeline choices, testing six open-weight models across four political science annotation tasks under identical quantisation, hardware, and prompt-template conditions. Our central finding is methodological: interaction effects dominate main effects, so seemingly reasonable pipeline choices can become consequential researcher degrees of freedom. No single model, prompt style, or learning approach is uniformly superior, and the best-performing model varies across tasks. Two corollaries follow. First, model size is an unreliable guide both to cost and to performance: cross-family efficiency differences are so large that some larger models are less resource-intensive than much smaller alternatives, while within model families mid-range variants often match or exceed larger counterparts. Second, widely recommended prompt engineering techniques yield inconsistent and sometimes negative effects on annotation performance. We use these benchmark results to develop a validation-first framework - with a principled ordering of pipeline decisions, guidance on prompt freezing and held-out evaluation, reporting standards, and open-source tools - to help researchers navigate this decision space transparently.
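To make the "prompt freezing and held-out evaluation" step of the validation-first framework concrete, here is a minimal sketch of one way such a workflow could be organised. It is not the authors' released toolchain: the function names (`validation_first_eval`, `accuracy`), the injected `annotate(model, prompt, text)` wrapper around whatever local inference backend is used, and the `dev_fraction` split are all illustrative assumptions.

```python
import random


def accuracy(preds, golds):
    """Share of predictions that exactly match the gold labels."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)


def validation_first_eval(labelled_data, models, candidate_prompts, annotate,
                          dev_fraction=0.2, seed=42):
    """Freeze a prompt on a development split, then score every model once
    on the held-out split with that frozen prompt (no further tuning).

    labelled_data: list of (text, gold_label) pairs.
    models: identifiers understood by the `annotate` callable.
    candidate_prompts: prompt templates to compare during development only.
    annotate: callable (model, prompt, text) -> predicted label.
    """
    rng = random.Random(seed)
    data = labelled_data[:]
    rng.shuffle(data)
    n_dev = int(len(data) * dev_fraction)
    dev, held_out = data[:n_dev], data[n_dev:]

    # 1. Prompt selection happens only on the development split.
    def dev_score(prompt, model):
        preds = [annotate(model, prompt, text) for text, _ in dev]
        return accuracy(preds, [gold for _, gold in dev])

    frozen_prompt = max(candidate_prompts,
                        key=lambda p: sum(dev_score(p, m) for m in models))

    # 2. Held-out evaluation uses the frozen prompt for every model,
    #    so model comparisons are not contaminated by prompt tuning.
    results = {}
    for model in models:
        preds = [annotate(model, frozen_prompt, text) for text, _ in held_out]
        results[model] = accuracy(preds, [gold for _, gold in held_out])
    return frozen_prompt, results
```

The key design point, as argued in the abstract, is the ordering: prompt choices are fixed before any model touches the held-out set, so prompt style cannot silently become an extra researcher degree of freedom when models are compared.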