Magic Words or Methodical Work? Challenging Conventional Wisdom in LLM-Based Political Text Annotation

📅 2026-03-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the lack of systematic evaluation regarding the implementation choices of large language models (LLMs) in political text annotation and the unclear mechanisms underlying their effects. The authors propose a validation-first evaluation framework, conducting controlled multi-model experiments and ablation analyses under unified hardware, quantization settings, and prompt templates to systematically assess six open-source LLMs across four political science annotation tasks. Findings reveal no universally optimal model or prompting strategy, an inconsistent relationship between model scale and performance or cost-efficiency, and unreliable or even detrimental effects of certain widely adopted prompting techniques. Moreover, substantial efficiency differences emerge across model families. Challenging prevailing empirical "best practices" in LLM-based annotation, this work also releases an open-source toolchain to support reproducible research.
📝 Abstract
Political scientists are rapidly adopting large language models (LLMs) for text annotation, yet the sensitivity of annotation results to implementation choices remains poorly understood. Most evaluations test a single model or configuration; how model choice, model size, learning approach, and prompt style interact, and whether popular "best practices" survive controlled comparison, are largely unexplored. We present a controlled evaluation of these pipeline choices, testing six open-weight models across four political science annotation tasks under identical quantisation, hardware, and prompt-template conditions. Our central finding is methodological: interaction effects dominate main effects, so seemingly reasonable pipeline choices can become consequential researcher degrees of freedom. No single model, prompt style, or learning approach is uniformly superior, and the best-performing model varies across tasks. Two corollaries follow. First, model size is an unreliable guide both to cost and to performance: cross-family efficiency differences are so large that some larger models are less resource-intensive than much smaller alternatives, while within model families mid-range variants often match or exceed larger counterparts. Second, widely recommended prompt engineering techniques yield inconsistent and sometimes negative effects on annotation performance. We use these benchmark results to develop a validation-first framework - with a principled ordering of pipeline decisions, guidance on prompt freezing and held-out evaluation, reporting standards, and open-source tools - to help researchers navigate this decision space transparently.
Problem

Research questions and friction points the paper addresses.

large language models
text annotation
implementation choices
prompt engineering
political science
Innovation

Methods, ideas, and system contributions that make the work stand out.

controlled evaluation
interaction effects
validation-first framework
prompt engineering
model efficiency
Lorcan McLaren
University College Dublin
James P. Cross
School of Politics and International Relations, University College Dublin
EU politics · quantitative text analysis · parliaments · central banking · legislative decision making
Zuzanna Krakowska
University College Dublin
Robin Rauner
University College Dublin
Martijn Schoonvelde
University of Groningen