A Comparison of Human and ChatGPT Classification Performance on Complex Social Media Data

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the fine-grained semantic understanding capabilities of ChatGPT variants (GPT-3.5, GPT-4, GPT-4o) on social media text classification—particularly for nuanced phenomena such as irony, metaphor, and culture-specific expressions—and benchmarks them against human annotators. Using four distinct prompt templates, performance is quantitatively assessed via precision, recall, and F1-score, complemented by qualitative error analysis. Results show that while label definitions improve model accuracy, GPT-4 still underperforms humans significantly in complex pragmatic contexts, especially for high-sensitivity categories. This work provides the first cross-version, multi-prompt empirical demonstration of structural limitations in large language models’ sociolinguistic comprehension. It offers methodological cautions and practical boundary guidelines for deploying LLMs in AI-assisted social science annotation tasks.

📝 Abstract
Generative artificial intelligence tools, like ChatGPT, are an increasingly utilized resource among computational social scientists. Nevertheless, there remains room for improved understanding of ChatGPT's performance on complex tasks such as classifying and annotating datasets containing nuanced language. In this paper, we measure the performance of GPT-4 on one such task and compare results to human annotators. We investigate ChatGPT versions 3.5, 4, and 4o to examine performance given the rapid advancement of large language models. We craft four prompt styles as input and evaluate precision, recall, and F1 scores. Both quantitative and qualitative evaluations of results demonstrate that while including label definitions in prompts may help performance, overall GPT-4 has difficulty classifying nuanced language. Qualitative analysis reveals four specific findings. Our results suggest that the use of ChatGPT in classification tasks involving nuanced language should be conducted with prudence.
Problem

Research questions and friction points this paper is trying to address.

Compares ChatGPT and human performance on nuanced social media classification
Evaluates GPT-4's ability to classify complex, subtle language in datasets
Assesses prompt styles and model versions for classification task accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-4 performance compared to human annotators
Four prompt styles evaluated for classification tasks
Quantitative and qualitative analysis of nuanced language classification
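The paper's quantitative comparison rests on per-label precision, recall, and F1 scores computed between model outputs and human gold annotations. A minimal sketch of that computation is below; the label names and example annotations are hypothetical, not taken from the paper's dataset.

```python
# Illustrative sketch (not the paper's code): per-label precision,
# recall, and F1 for comparing model predictions against human
# gold-standard annotations. Labels and data here are hypothetical.

def precision_recall_f1(gold, pred, label):
    """Compute precision, recall, and F1 for one label."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical human vs. model labels for a nuanced category:
gold = ["irony", "literal", "irony", "irony", "literal"]
pred = ["irony", "irony", "literal", "irony", "literal"]
p, r, f = precision_recall_f1(gold, pred, "irony")
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

In practice these scores are computed per prompt style and per model version (3.5, 4, 4o), which is what allows the cross-version comparison the paper reports.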
Authors: Breanna E. Green, Ashley L. Shea, Pengfei Zhao, Drew B. Margolin