Understanding the Nature of Generative AI as Threshold Logic in High-Dimensional Space

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the conventional view of neural networks, which overlooks the pivotal role of single-layer threshold logic in high-dimensional spaces. It proposes a paradigm integrating threshold units, dimensionality, and depth: deep architectures are replaced with high-dimensional single-layer threshold units, reframing neural computation as navigation within high-dimensional geometry. Drawing on Cover's theorem, linear programming, high-dimensional geometry, and Peircean semiotics (particularly the notion of indexicality), the study develops an interdisciplinary model of perceptron behavior. The analysis shows that in sufficiently high dimensions a single hyperplane almost always suffices to separate data, and that the essence of deep networks lies in iteratively deforming data until it conforms to favorable high-dimensional geometric structure. This insight offers a unified explanation for the expressive power of generative AI models.
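The threshold unit at the core of this account (a weighted sum compared to a threshold, i.e. a hyperplane test) can be sketched in a few lines. This is a minimal illustration, not code from the paper; the AND-gate weights and threshold are chosen by hand:

```python
def threshold_unit(x, w, theta):
    """A single threshold element: fire iff the weighted sum w.x reaches theta.
    Geometrically, w.x = theta defines a hyperplane, and the unit reports
    which side of that hyperplane the input x lies on."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

# Classic 1960s threshold-logic example: a 2-input AND gate
# realized by weights (1, 1) and threshold 1.5.
w, theta = (1.0, 1.0), 1.5
print([threshold_unit(p, w, theta) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 0, 0, 1]
```

The same unit with threshold 0.5 realizes OR; XOR, famously, admits no such weights in two dimensions, which is the limitation the paper revisits.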
📝 Abstract
This paper examines the role of threshold logic in understanding generative artificial intelligence. Threshold functions, originally studied in the 1960s in digital circuit synthesis, provide a structurally transparent model of neural computation: a weighted sum of inputs compared to a threshold, geometrically realized as a hyperplane partitioning a space. The paper shows that this operation undergoes a qualitative transition as dimensionality increases. In low dimensions, the perceptron acts as a determinate logical classifier, separating classes when possible, a question decidable by linear programming. In high dimensions, however, a single hyperplane can separate almost any configuration of points (Cover, 1965); the space becomes saturated with potential classifiers, and the perceptron shifts from a logical device to a navigational one, functioning as an indexical indicator in the sense of Peirce. The limitations of the perceptron identified by Minsky and Papert (1969) were historically addressed by introducing multilayer architectures. This paper considers an alternative path: increasing dimensionality while retaining a single threshold element. It argues that this shift has equally significant implications for understanding neural computation. The role of depth is reinterpreted as a mechanism for the sequential deformation of data manifolds through iterated threshold operations, preparing them for the linear separability already afforded by high-dimensional geometry. The resulting triadic account (threshold function as ontological unit, dimensionality as enabling condition, and depth as preparatory mechanism) provides a unified perspective on generative AI grounded in established mathematics.
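The Cover (1965) result invoked in the abstract is concrete enough to compute directly: of the 2^n labelings of n points in general position in R^d, exactly C(n, d) = 2·Σ_{k=0}^{d-1} C(n−1, k) are realizable by a hyperplane through the origin, so the separable fraction is 1/2 at n = 2d and approaches 1 below it. A small sketch of that counting formula (an illustration assuming the standard statement of the theorem, not code from the paper):

```python
from math import comb

def separable_fraction(n, d):
    """Fraction of the 2**n labelings of n points in general position in R**d
    that a single homogeneous hyperplane can realize, per Cover's 1965 count:
    C(n, d) = 2 * sum_{k=0}^{d-1} C(n-1, k)."""
    c = 2 * sum(comb(n - 1, k) for k in range(d))
    return c / 2 ** n

# At n = 2d exactly half of all labelings are separable; once d >= n - 1,
# every labeling is, which is the "saturation" the abstract describes.
print(separable_fraction(20, 10))   # → 0.5
print(separable_fraction(20, 100))  # → 1.0
```

Raising d while holding n fixed drives the fraction to 1, which is the sense in which high dimensionality, rather than depth, already buys linear separability.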
Problem

Research questions and friction points this paper is trying to address.

generative AI
threshold logic
high-dimensional space
perceptron
neural computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

threshold logic
high-dimensional space
generative AI
perceptron
linear separability