Can structural correspondences ground real world representational content in Large Language Models?

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the question of whether large language models (LLMs), trained exclusively on ungrounded text, can achieve semantic grounding. Method: Drawing on the philosophy of representation, cognitive science, and AI interpretability frameworks, the study conducts a conceptual, feasibility-oriented critical analysis to propose and defend “exploitable structural correspondence” as the criterion for genuine representation: mere formal isomorphism is insufficient, and a structural correspondence must causally explain task success to constitute authentic semantic representation. Contribution/Results: The work argues that functional explanatory power is a necessary condition for a structural correspondence to ground content, challenging the assumption that the text-boundedness of LLMs inherently precludes grounding, and it thereby offers a theoretically motivated, empirically testable criterion for semantic grounding in LLMs.
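The underdetermination worry behind the "mere isomorphism is insufficient" claim can be made concrete with a minimal sketch (our illustration, not the paper's; the structures, names, and `isomorphisms` helper are hypothetical): the same internal relational structure maps isomorphically onto many different worldly structures at once, so correspondence alone cannot fix which one is represented.

```python
# Minimal sketch (ours, not the paper's): mere structural correspondence
# underdetermines content, because the same internal structure is
# isomorphic to many different worldly structures simultaneously.
from itertools import permutations

def isomorphisms(src_items, dst_items, rel_src, rel_dst):
    """All bijections src -> dst that carry rel_src exactly onto rel_dst."""
    found = []
    for perm in permutations(dst_items):
        m = dict(zip(src_items, perm))
        if {(m[x], m[y]) for x, y in rel_src} == rel_dst:
            found.append(m)
    return found

# A toy "internal" structure over model states: a three-step chain.
states = ["s0", "s1", "s2"]
chain = {("s0", "s1"), ("s1", "s2")}

# Two unrelated "worldly" structures with the same shape.
roads = {("Paris", "Lyon"), ("Lyon", "Nice")}  # cities linked by roads
successor = {(1, 2), (2, 3)}                   # numbers under successor

print(isomorphisms(states, ["Paris", "Lyon", "Nice"], chain, roads))
print(isomorphisms(states, [1, 2, 3], chain, successor))
# Both print a valid mapping: formal correspondence alone cannot say
# whether s0..s2 "represent" cities or numbers.
```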

📝 Abstract
Large Language Models (LLMs) such as GPT-4 produce compelling responses to a wide range of prompts, but their representational capacities are uncertain. Many LLMs have no direct contact with extra-linguistic reality: their inputs, outputs and training data consist solely of text, raising the questions (1) can LLMs represent anything and (2) if so, what? In this paper, I explore what it would take to answer these questions according to a structural-correspondence based account of representation, and make an initial survey of the relevant evidence. I argue that the mere existence of structural correspondences between LLMs and worldly entities is insufficient to ground representation of those entities. However, if these structural correspondences play an appropriate role - if they are exploited in a way that explains successful task performance - then they could ground real-world contents. This requires overcoming a challenge: the text-boundedness of LLMs appears, on the face of it, to prevent them from engaging in the right sorts of tasks.
Problem

Research questions and friction points this paper is trying to address.

Can LLMs represent real-world entities without direct experience?
Do structural correspondences alone suffice for grounding representation?
How can text-boundedness be overcome to enable real-world-relevant task performance?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structural correspondences between LLM internals and worldly entities as candidate grounds for content
Exploitation of correspondences in successful task performance as the test for genuine representation (see the sketch after this list)
A route past text-boundedness toward real-world grounding
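Since the summary calls the exploitation criterion "empirically testable", here is a minimal, self-contained sketch of what such a test could look like (our construction, not the paper's; the toy data, probe, and ablation procedure are all assumptions): locate a correspondence between hidden states and a worldly property, then intervene on it and check whether task success degrades.

```python
# Minimal sketch (ours, not the paper's) of an empirical "exploitation"
# test: a correspondence between hidden states and a worldly property
# counts as exploited if removing it causally degrades task success.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all assumptions ours): hidden states h encode a scalar
# worldly property w along one fixed direction d, plus noise, and the
# task head reads out that same direction.
n, dim = 500, 32
w = rng.normal(size=n)                       # the worldly property
d = rng.normal(size=dim)
d /= np.linalg.norm(d)
h = np.outer(w, d) + 0.1 * rng.normal(size=(n, dim))

def task_score(states):
    """Correlation between the task head's output and the property."""
    return float(np.corrcoef(states @ d, w)[0, 1])

# Step 1: locate the correspondence with a least-squares probe.
probe, *_ = np.linalg.lstsq(h, w, rcond=None)
u = probe / np.linalg.norm(probe)

# Step 2: intervene -- project the probed direction out of the states.
h_ablated = h - np.outer(h @ u, u)

print(f"task score, intact:  {task_score(h):.3f}")          # close to 1.0
print(f"task score, ablated: {task_score(h_ablated):.3f}")  # near 0
# If ablating the correspondence destroys task success, the
# correspondence was doing explanatory work, not merely existing.
```

On a real model the probe would be fit on recorded activations and the intervention applied during the forward pass (in the style of activation patching); the toy version only shows the logical shape of the test.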