🤖 AI Summary
This study systematically evaluates the fine-grained semantic understanding capabilities of ChatGPT variants (GPT-3.5, GPT-4, GPT-4o) on social media text classification—particularly for nuanced phenomena such as irony, metaphor, and culture-specific expressions—and benchmarks them against human annotators. Using four distinct prompt templates, performance is quantitatively assessed via precision, recall, and F1 score, complemented by qualitative error analysis. Results show that while including label definitions improves model accuracy, GPT-4 still significantly underperforms human annotators in complex pragmatic contexts, especially for high-sensitivity categories. This work provides the first cross-version, multi-prompt empirical demonstration of structural limitations in large language models’ sociolinguistic comprehension. It offers methodological cautions and practical boundary guidelines for deploying LLMs in AI-assisted social science annotation tasks.
📝 Abstract
Generative artificial intelligence tools such as ChatGPT are an increasingly utilized resource among computational social scientists. Nevertheless, there remains room for improved understanding of ChatGPT's performance on complex tasks such as classifying and annotating datasets containing nuanced language. In this paper, we measure the performance of GPT-4 on one such task and compare the results to those of human annotators. We also investigate ChatGPT versions 3.5, 4, and 4o to examine performance given the rapid advancement of large language models. We craft four prompt styles as input and evaluate precision, recall, and F1 scores. Both quantitative and qualitative evaluations of the results demonstrate that while including label definitions in prompts may help performance, GPT-4 overall has difficulty classifying nuanced language. Qualitative analysis reveals four specific findings. Our results suggest that the use of ChatGPT in classification tasks involving nuanced language should be approached with prudence.
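The per-label evaluation described above (precision, recall, and F1 of model outputs against human annotations) can be sketched as follows. This is a minimal illustration, not the paper's code; the label names and the gold/predicted sequences are hypothetical.

```python
def prf1(gold, pred, label):
    """Per-label precision, recall, and F1 of predictions vs. human (gold) labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical human annotations vs. model outputs for one prompt style
gold = ["irony", "literal", "irony", "metaphor", "literal"]
pred = ["irony", "irony", "literal", "metaphor", "literal"]

for label in sorted(set(gold)):
    p, r, f = prf1(gold, pred, label)
    print(f"{label}: P={p:.2f} R={r:.2f} F1={f:.2f}")
```

In practice, each prompt style would produce its own `pred` list, and the resulting scores would be compared across prompt styles and model versions.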