Do Large Language Models Defend Inferentialist Semantics?: On the Logical Expressivism and Anti-Representationalism of LLMs

📅 2024-12-19
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large language models (LLMs) pose a foundational challenge to philosophy of language: if LLMs generate coherent meaning without external world reference, do traditional semantic externalism and strong compositionality remain tenable? Method: Through rigorous philosophical analysis and comparative examination of semantic theories—specifically inferentialism, distributional semantics, and externalism—the paper systematically demonstrates that LLMs intrinsically align with Robert Brandom’s inferentialist semantics, characterized by anti-representationalism, logical expressivism, and quasi-compositionality. Contribution/Results: The study introduces a consensus-theoretic account of truth tailored to LLMs, furnishing novel empirical support for anti-representationalism. It establishes that LLMs fundamentally undermine semantic externalism and strict compositionality, catalyzing a post-anthropocentric turn in philosophy of language and enabling a reconceptualization of interdisciplinary frameworks at the intersection of philosophy and AI.

📝 Abstract
The philosophy of language, which has historically been developed through an anthropocentric lens, is now being forced to move towards post-anthropocentrism due to the advent of large language models (LLMs) like ChatGPT (OpenAI) and Claude (Anthropic), which are considered to possess linguistic abilities comparable to those of humans. Traditionally, LLMs have been explained through distributional semantics as their foundational semantics. However, recent research is exploring alternative foundational semantics beyond distributional semantics. This paper proposes Robert Brandom's inferentialist semantics as a suitable foundational semantics for LLMs, specifically focusing on the issue of linguistic representationalism within this post-anthropocentric trend. Here, we show that the anti-representationalism and logical expressivism of inferentialist semantics, as well as quasi-compositionality, are useful in interpreting the characteristics and behaviors of LLMs. Further, we propose a *consensus theory of truths* for LLMs. This paper argues that the characteristics of LLMs challenge mainstream assumptions in the philosophy of language, such as semantic externalism and compositionality. We believe the argument in this paper leads to a re-evaluation of anti-representationalist views of language, potentially leading to new developments in the philosophy of language.
Problem

Research questions and friction points this paper is trying to address.

Exploring inferentialist semantics as an account of how LLMs generate meaning
Assessing anti-representationalist properties of LLM language processing
Developing a consensus theory of truth grounded in LLMs' interactive norms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses inferentialist semantics as a framework for understanding LLMs
Applies the ISA approach to demonstrate anti-representationalism
Develops a consensus theory of truth via RLHF
Yuzuki Arai
College of Media Arts, Science and Technology, School of Informatics, University of Tsukuba
Sho Tsugawa
University of Tsukuba
Network Science · Social Networks · Computational Social Science