GPT-4 Surpassing Human Performance in Linguistic Pragmatics

📅 2023-12-15
🏛️ arXiv.org
📈 Citations: 11
Influential: 0
📄 PDF
🤖 AI Summary
Prior work lacks empirical evaluation of large language models’ (LLMs) pragmatic competence—specifically their ability to infer implicatures and perform context-sensitive reasoning grounded in Gricean conversational maxims. Method: We introduce a standardized pragmatic inference benchmark, validated via human annotation (76 participants) and rigorous statistical testing, and conduct a cross-generational evaluation spanning GPT-2 through GPT-4. Contribution/Results: GPT-4 significantly outperforms the human average in both accuracy and response latency—and exceeds even the best individual human participant. This study establishes, for the first time, a statistically validated, strictly monotonic performance hierarchy across LLM generations. Moreover, GPT-4 demonstrates strong zero-shot generalization to pragmatically rich, unseen human-authored texts. Our benchmark and methodology provide a reproducible, human-aligned framework for assessing LLMs’ pragmatic capabilities, setting a new standard for computational pragmatics evaluation.
📝 Abstract
As Large Language Models (LLMs) become increasingly integrated into everyday life, their capabilities to understand and emulate human cognition are under steady examination. This study investigates the ability of LLMs to comprehend and interpret linguistic pragmatics, an aspect of communication that considers context and implied meanings. Using Grice's communication principles, LLMs and human subjects (N=76) were evaluated based on their responses to various dialogue-based tasks. The findings revealed the superior performance and speed of LLMs, particularly GPT-4, over human subjects in interpreting pragmatics. GPT-4 also demonstrated accuracy in the pre-testing of human-written samples, indicating its potential in text analysis. In a comparative analysis of LLMs using individual and average human scores, the models exhibited significant chronological improvement. Ranked from lowest to highest score, GPT-2 placed 78th, GPT-3 23rd, Bard 10th, GPT-3.5 5th, the best human participant 2nd, and GPT-4 achieved the top spot. The findings highlight the remarkable progress made in the development and performance of these LLMs. Future studies should consider diverse subjects, multiple languages, and other cognitive aspects to fully comprehend the capabilities of LLMs. This research holds significant implications for the development and application of AI-based models in communication-centered sectors.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to interpret linguistic pragmatics and implied meanings
Comparing GPT-4 and human performance using Grice's communication principles
Assessing AI models' contextual understanding through dialogue-based tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated LLMs using Grice's communication principles
Tested multiple GPT models and Bard on dialogue tasks
Compared model performance against human subjects directly
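The comparative analysis described above places each model's score inside the ordered list of human participant scores to derive a single rank (e.g. GPT-2 at 78th, GPT-4 at 1st). A minimal sketch of that ranking step is shown below; it is not the authors' code, and all scores used here are invented placeholders on an assumed 0-100 scale.

```python
# Hypothetical sketch: rank model scores among human participant scores,
# as in the paper's combined human-model leaderboard. Scores are invented.

def rank_among(human_scores, model_scores):
    """Merge model and human scores into one list and rank all entries
    from highest to lowest (rank 1 = best performer)."""
    combined = list(model_scores) + [
        (f"human_{i}", s) for i, s in enumerate(human_scores)
    ]
    combined.sort(key=lambda pair: pair[1], reverse=True)
    return {label: rank for rank, (label, _) in enumerate(combined, start=1)}

# Placeholder data: four humans and two models (not the study's real scores).
human_scores = [60, 72, 85, 55]
model_scores = [("GPT-2", 40), ("GPT-4", 95)]
ranks = rank_among(human_scores, model_scores)
```

With these placeholder numbers, GPT-4 ranks first and GPT-2 last, mirroring the shape (though not the values) of the ranking reported in the abstract.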
Ljubiša Bojić
The Institute for Artificial Intelligence Research and Development of Serbia, Novi Sad, Serbia; University of Belgrade, Institute for Philosophy and Social Theory, Digital Society Lab, Belgrade, Serbia
Predrag Kovačević
University of Novi Sad, Faculty of Philosophy, Novi Sad, Serbia
Milan Čabarkapa
Faculty of Engineering, University of Kragujevac
Artificial Intelligence · Cybersecurity and Privacy Protection · Communication Systems · Software Modelling and Development