🤖 AI Summary
This work identifies a fundamental error floor in large language models that cannot be eliminated by scaling alone. The floor arises from an information bottleneck inherent in human supervision, which is constrained by annotation noise, subjective preferences, and limited linguistic bandwidth. For the first time, human supervision is formally modeled as an information bottleneck, revealing the structural origin of this irreducible error. The paper unifies six theoretical frameworks (operator theory, PAC-Bayes, information theory, causal inference, category theory, and game-theoretic analysis of reinforcement learning from human feedback) to explain systematically how this limitation emerges and to delineate pathways for overcoming it. Empirical validation across real-world preference data, synthetic tasks, and verifiable benchmarks confirms the existence of the error floor and demonstrates that incorporating informative non-human auxiliary signals can substantially mitigate or even eliminate it.
📝 Abstract
Large language models are trained primarily on human-generated data and feedback, yet they exhibit persistent errors arising from annotation noise, subjective preferences, and the limited expressive bandwidth of natural language. We argue that these limitations reflect structural properties of the supervision channel rather than shortcomings of model scale or optimization. We develop a unified theory showing that whenever the human supervision channel is not sufficient for a latent evaluation target, it acts as an information-reducing channel that induces a strictly positive excess-risk floor for any learner dominated by it. We formalize this Human-Bounded Intelligence limit and show that, across six complementary frameworks (operator theory, PAC-Bayes, information theory, causal inference, category theory, and game-theoretic analyses of reinforcement learning from human feedback), non-sufficiency yields strictly positive lower bounds arising from the same structural decomposition into annotation noise, preference distortion, and semantic compression. The theory explains why scaling alone cannot eliminate persistent human-aligned errors and characterizes conditions under which auxiliary non-human signals (e.g., retrieval, program execution, tools) increase effective supervision capacity and collapse the floor by restoring information about the latent target. Experiments on real preference data, synthetic known-target tasks, and externally verifiable benchmarks confirm the predicted structural signatures: human-only supervision exhibits a persistent floor, while sufficiently informative auxiliary channels strictly reduce or eliminate excess error.
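The abstract describes the mechanism without stating a bound. A minimal sketch of how such a floor can arise, assuming a discrete latent target and reading "dominated by the channel" as the learner seeing the target only through the supervision signal: the notation (Y, S, A, Ŷ) and the Fano-style route below are illustrative assumptions, not the paper's own formalization.

```latex
% Illustrative Fano-style sketch of the excess-risk floor (assumed
% formalization; the paper's exact statement may differ).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Let $Y$ be a discrete latent evaluation target, $S$ the human
supervision signal, and $\hat{Y} = f(S)$ the output of any learner
dominated by the channel, so that $Y \to S \to \hat{Y}$ is a Markov
chain. The data processing inequality gives
\[
  I(Y;\hat{Y}) \le I(Y;S),
\]
and non-sufficiency of the channel means $H(Y \mid S) > 0$: the
supervision signal does not determine the target. Fano's inequality,
\[
  h(P_e) + P_e \log\bigl(\lvert\mathcal{Y}\rvert - 1\bigr)
    \ge H(Y \mid S),
  \qquad P_e := \Pr[\hat{Y} \neq Y],
\]
with $h$ the binary entropy function, then forces $P_e > 0$ whenever
$H(Y \mid S) > 0$, for every $f$, independent of model capacity or
training compute. An auxiliary non-human signal $A$ (retrieval
results, program execution, tool outputs) can only tighten the
conditioning, $H(Y \mid S, A) \le H(Y \mid S)$, and if $(S, A)$ is
jointly sufficient for $Y$ then $H(Y \mid S, A) = 0$ and the floor
collapses.

\end{document}
```

Read this way, the abstract's claim becomes concrete: the lower bound depends only on the channel's residual uncertainty about the target, which is why neither scale nor optimization can remove it, while a sufficiently informative auxiliary channel can.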