🤖 AI Summary
This paper addresses the relationship between large language models (LLMs) and the symbol grounding problem, arguing that LLMs do not solve the problem but systematically *bypass* it. Method: We develop a unified semantic framework based on category theory to formally characterize the distinction between human and LLM meaning-making: specifically, how each maps content into truth-conditional propositions over a possible-world state space *W*. The framework models the absence of perceptual–symbol coupling in LLMs, showing that they rely solely on statistical associations to generate higher-order propositions without intrinsic reference. Contribution/Results: First, we provide the first category-theoretic formalization of the full semantic generation process. Second, we advance the novel theoretical claim that LLMs *bypass*, rather than resolve, symbol grounding. Third, we prove that LLM inference is independent of real-world anchoring. This framework establishes a formal foundation for delineating the semantic boundaries of LLMs.
📝 Abstract
This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluable propositions about a state space of possible worlds *W*, in order to argue that LLMs do not solve the symbol grounding problem but circumvent it.
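To make the intended contrast concrete, the following is a minimal LaTeX sketch of how the two semantic maps might be typed. The category names $\mathcal{L}$ (linguistic expressions), $\mathcal{G}$ (perceptual groundings), and $\mathcal{S}$ (distributional representations), as well as the arrow labels, are illustrative assumptions for exposition, not the paper's actual constructions.

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{stmaryrd} % provides \llbracket, \rrbracket
\begin{document}

% Hypothetical typing of the two semantic maps; all category
% names (L, G, S, Prop_W) are illustrative placeholders.

% Propositions over the possible-world state space W, modelled
% here as the powerset of W ordered by entailment:
\[
  \mathbf{Prop}_W \;=\; (\mathcal{P}(W), \subseteq)
\]

% Human meaning-making: interpretation factors through a
% category G of perceptual groundings before reaching
% truth-conditional propositions over W:
\[
  \llbracket \cdot \rrbracket_{\mathrm{human}} :
  \mathcal{L} \xrightarrow{\;g\;} \mathcal{G}
  \xrightarrow{\;t\;} \mathbf{Prop}_W
\]

% LLM meaning-making: the map is induced purely by statistical
% association among expressions (a category S of distributional
% representations) and does not factor through G -- one formal
% sense in which symbol grounding is bypassed rather than solved:
\[
  \llbracket \cdot \rrbracket_{\mathrm{LLM}} :
  \mathcal{L} \xrightarrow{\;d\;} \mathcal{S}
  \xrightarrow{\;u\;} \mathbf{Prop}_W
\]

\end{document}
```

On this reading, "bypassing" grounding is the claim that the LLM composite reaches $\mathbf{Prop}_W$ without any factor through $\mathcal{G}$, so no arrow in its factorization depends on perceptual contact with *W* itself.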