🤖 AI Summary
This study investigates whether artificial general intelligence (AGI) can be formalized as an absolute, self-verifiable theoretical construct. By modeling AGI as a distributional semantic predicate, indexed by a task family, a task distribution, a performance functional, and explicit resource budgets, the work combines axiomatic methods, distributional perturbation analysis, bounded transfer theory, and Rice-type and Gödel–Tarski undecidability arguments to examine AGI's existence, its robustness, and the limits of its verifiability. The analysis demonstrates that AGI is inherently relative to a specific task distribution and cannot be defined independently of one. Crucially, it establishes, on computational and logical grounds, that an AGI system cannot certify its own correctness. The study further identifies fundamental limitations of AGI, including distributional dependence, fragility under distribution shift, bounded transferability, and undecidability, thereby refuting the theoretical feasibility of unconditionally self-improving AGI systems.
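The indexing described above can be sketched as a single relational predicate. The notation below is illustrative, not the paper's own: the threshold θ, the cost function, and the support condition are assumptions introduced here to make the dependence on the index explicit.

```latex
% Illustrative sketch: a system \pi counts as "AGI" only relative to an
% explicit index (\mathcal{T}, \mathcal{D}, \Phi, B) -- a task family,
% a task distribution over it, a performance functional, and budgets.
\mathrm{AGI}(\pi;\ \mathcal{T}, \mathcal{D}, \Phi, B)
  \iff
  \mathbb{E}_{t \sim \mathcal{D}}\!\bigl[\Phi(\pi, t)\bigr] \ge \theta
  \quad \text{subject to} \quad
  \mathrm{cost}(\pi, t) \le B \ \ \text{for all } t \in \operatorname{supp}(\mathcal{D})
```

Dropping any component of the index leaves the predicate undefined rather than false, which is the sense in which generality is relational.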
📝 Abstract
We study whether Artificial General Intelligence (AGI) admits a coherent theoretical definition that supports absolute claims of existence, robustness, or self-verification. We formalize AGI axiomatically as a distributional, resource-bounded semantic predicate, indexed by a task family, a task distribution, a performance functional, and explicit resource budgets. Under this framework, we derive four classes of results. First, we show that generality is inherently relational: there is no distribution-independent notion of AGI. Second, we prove non-invariance results demonstrating that arbitrarily small perturbations of the task distribution can invalidate AGI properties via cliff sets, precluding universal robustness. Third, we establish bounded transfer guarantees, ruling out unbounded generalization across task families under finite resources. Fourth, invoking Rice-style and Gödel–Tarski arguments, we prove that AGI is a nontrivial semantic property and therefore cannot be soundly and completely certified by any computable procedure, including procedures implemented by the agent itself. Consequently, recursive self-improvement schemes that rely on internal self-certification of AGI are ill-posed. Taken together, our results show that strong, distribution-independent claims of AGI are not false but undefined without explicit formal indexing, and that empirical progress in AI does not imply the attainability of self-certifying general intelligence.
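The Rice-style step in the fourth result can be sketched in code. The construction below is a minimal illustration, not the paper's proof: all names (`has_property`, `build_gadget`, `certify_agi`) are hypothetical. The idea is the classical reduction: if a total, sound, and complete certifier for any nontrivial semantic property existed, wiring it to the gadget below would decide the halting problem, a contradiction.

```python
def has_property(y):
    """A fixed program assumed to satisfy the semantic property
    (e.g., the indexed AGI predicate). Placeholder behavior."""
    return y * 2

def build_gadget(program, x):
    """Return a program whose input-output behavior equals
    `has_property` iff `program` halts on input `x`.
    This is the standard Rice-theorem reduction gadget."""
    def gadget(y):
        program(x)              # diverges exactly when program(x) does
        return has_property(y)  # otherwise, behave like the property-holder
    return gadget

def halts_via_certifier(certify_agi, program, x):
    """If `certify_agi` were a total computable certifier that is sound
    and complete for the property, this function would decide halting,
    which is impossible. Hence no such certifier exists, including one
    the agent runs on itself."""
    return certify_agi(build_gadget(program, x))
```

Because the certifier is the hypothetical object being refuted, only the gadget construction is meant to run; the contradiction lives in `halts_via_certifier`.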