Implicature in Interaction: Understanding Implicature Improves Alignment in Human-LLM Interaction

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates large language models' (LLMs) capacity to comprehend pragmatic implicatures and the role of that capacity in enhancing human-AI alignment. Existing prompting methods neglect contextual inference and thus yield responses misaligned with users' true intentions; to address this, we systematically integrate the formal pragmatic theory of implicature into human-AI interaction for the first time, proposing a context-driven, implicature-aware prompting method. We validate the approach through an implicature reasoning evaluation task, multi-model response generation experiments, and a human perception study (N=67), in which 67.6% of participants preferred responses generated by our method. Results demonstrate substantial improvements in response relevance and naturalness: larger LLMs approach human-level implicature understanding, while smaller models show particularly pronounced gains from the prompting method. Our core contribution is establishing implicature modeling as a novel paradigm for advancing semantic alignment between humans and AI systems.

📝 Abstract
The rapid advancement of Large Language Models (LLMs) is positioning language at the core of human-computer interaction (HCI). We argue that advancing HCI requires attention to the linguistic foundations of interaction, particularly implicature (meaning conveyed beyond explicit statements through shared context), which is essential for human-AI (HAI) alignment. This study examines LLMs' ability to infer user intent embedded in context-driven prompts and whether understanding implicature improves response generation. Results show that larger models approximate human interpretations more closely, while smaller models struggle with implicature inference. Furthermore, implicature-based prompts significantly enhance the perceived relevance and quality of responses across models, with notable gains in smaller models. Overall, 67.6% of participants preferred responses generated from implicature-embedded prompts over literal ones, highlighting a clear preference for contextually nuanced communication. Our work contributes to understanding how linguistic theory can be used to address the alignment problem by making HAI interaction more natural and contextually grounded.
Problem

Research questions and friction points this paper is trying to address.

Enhancing human-AI alignment through linguistic implicature understanding
Improving LLMs' ability to infer user intent from contextual prompts
Making human-AI interaction more natural and contextually grounded
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging implicature theory to enhance AI alignment
Using context-driven prompts for intent inference
Improving response quality through linguistic pragmatics
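The paper does not publish its prompt templates, but the core idea of context-driven, implicature-aware prompting can be illustrated with a minimal sketch. The function names and template wording below are assumptions for illustration only, not the authors' actual method: a literal prompt forwards the utterance as-is, while the implicature-aware variant supplies shared context and explicitly asks the model to infer the unstated intent before responding.

```python
# Hypothetical sketch of implicature-aware prompting (illustrative wording,
# not the paper's actual templates).

def literal_prompt(utterance: str) -> str:
    """Baseline: pass the user's utterance to the model verbatim."""
    return utterance

def implicature_aware_prompt(utterance: str, context: str) -> str:
    """Wrap the utterance with shared context and instruct the model to
    infer the implicated (unstated) intent before answering."""
    return (
        f"Context: {context}\n"
        f"Utterance: {utterance}\n"
        "First infer what the speaker implies beyond the literal words, "
        "then respond to that inferred intent."
    )

# Classic implicature example: the utterance literally comments on
# temperature, but in context it implicates a request to close the window.
context = "A guest gestures at an open window while rubbing their arms."
utterance = "It's a bit cold in here."

print(literal_prompt(utterance))
print(implicature_aware_prompt(utterance, context))
```

Under this framing, a literal prompt invites a response about room temperature, while the context-wrapped prompt steers the model toward the implicated request, which is the kind of contextual grounding the study evaluates.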