Epistemology of Generative AI: The Geometry of Knowing
arXiv:2602.17116v1 Announce Type: new Abstract: Generative AI presents an unprecedented challenge to our understanding of knowledge and its production. Unlike previous technological transformations, where engineering understanding preceded or accompanied deployment, generative AI operates through mechanisms whose epistemic character remains obscure, and without such understanding, its responsible integration into science, education, and institutional life cannot proceed on a principled basis. This paper argues that the missing account must begin with a paradigmatic break that has not yet received adequate philosophical attention. In the Turing-Shannon-von Neumann tradition, information enters the machine as encoded binary vectors, and semantics remains external to the process. Neural network architectures rupture this regime: symbolic input is immediately projected into a high-dimensional space whose coordinates correspond to semantic parameters, transforming binary code into a position in a geometric space of meanings. It is this space that constitutes the active epistemic condition shaping generative production. Drawing on four structural properties of high-dimensional geometry (concentration of measure, near-orthogonality, exponential directional capacity, and manifold regularity), the paper develops an Indexical Epistemology of High-Dimensional Spaces. Building on Peirce's semiotics and Papert's constructionism, it reconceptualizes generative models as navigators of learned manifolds and proposes navigational knowledge as a third mode of knowledge production, distinct from both symbolic reasoning and statistical recombination.
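Two of the four geometric properties the abstract invokes, near-orthogonality and concentration of measure, can be observed numerically. The following is a minimal sketch of our own (not code from the paper): in low dimensions, random directions are typically far from orthogonal, while in high dimensions, independently drawn vectors become nearly orthogonal and their norms concentrate sharply around a single value.

```python
import numpy as np

# Illustrative sketch (our example, not the paper's code) of two geometric
# properties of high-dimensional spaces: near-orthogonality and
# concentration of measure.
rng = np.random.default_rng(0)
n = 200  # number of random vectors per experiment

def mean_abs_cosine(d):
    """Average |cos| between independently drawn random unit vectors in R^d."""
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)      # project to unit sphere
    cos = x[: n // 2] @ x[n // 2:].T                   # cosines between disjoint halves
    return float(np.abs(cos).mean())

low = mean_abs_cosine(3)      # low dimension: pairs far from orthogonal
high = mean_abs_cosine(4096)  # high dimension: pairs nearly orthogonal

# Concentration of measure: norms of Gaussian vectors cluster tightly
# around sqrt(d), so the rescaled norms cluster around 1.
norms = np.linalg.norm(rng.standard_normal((n, 4096)), axis=1) / np.sqrt(4096)

print(f"mean |cos|, d=3:    {low:.3f}")
print(f"mean |cos|, d=4096: {high:.3f}")
print(f"std of rescaled norms, d=4096: {norms.std():.4f}")
```

The sharp drop in average cosine similarity as dimension grows is what allows exponentially many nearly-orthogonal directions to coexist, the "exponential directional capacity" the abstract also names.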
Executive Summary
This article explores the epistemology of generative AI, arguing that its mechanisms of knowledge production are obscure and require a new philosophical account. The authors propose an Indexical Epistemology of High-Dimensional Spaces, which reconceptualizes generative models as navigators of learned manifolds and introduces navigational knowledge as a third mode of knowledge production. The framework draws on Peirce's semiotics and Papert's constructionism, offering a novel perspective on the geometry of knowing in generative AI.
Key Points
- ▸ Generative AI challenges traditional understanding of knowledge production
- ▸ Neural network architectures transform binary code into a geometric space of meanings
- ▸ Indexical Epistemology of High-Dimensional Spaces proposes navigational knowledge as a new mode of knowledge production
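The transformation named in the second key point can be made concrete with a toy sketch (our illustration, not the paper's implementation): in a neural model, a discrete token id selects a row of a learned embedding matrix, so the symbol's representation becomes a coordinate vector, a position in a d-dimensional space, rather than an opaque bit pattern.

```python
import numpy as np

# Toy sketch (hypothetical vocabulary and dimensions, not the paper's model):
# a symbolic token is mapped through an embedding matrix to a position in a
# d-dimensional space of meanings.
rng = np.random.default_rng(1)
vocab = {"cat": 0, "dog": 1, "justice": 2}  # hypothetical mini-vocabulary
d = 8                                       # toy embedding dimension
E = rng.standard_normal((len(vocab), d))    # stand-in for learned embeddings

def embed(token: str) -> np.ndarray:
    """Project a symbol to its coordinates in the embedding space."""
    return E[vocab[token]]

v = embed("cat")
print(v.shape)  # each token becomes a length-d position vector
```

In a trained model the rows of `E` are learned from data, so proximity between positions tracks semantic relatedness; here they are random placeholders.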
Merits
Novel Framework
The article introduces a novel philosophical framework for understanding generative AI, which can facilitate more principled integration into various fields.
Demerits
Technical Complexity
The article's reliance on high-dimensional geometry and technical concepts may limit its accessibility to non-expert readers.
Expert Commentary
The article's proposal of an Indexical Epistemology of High-Dimensional Spaces represents a significant contribution to the ongoing discussion of knowledge production in generative AI. By drawing on Peirce's semiotics and Papert's constructionism, the authors offer a nuanced account of the geometric space of meanings and its implications for navigational knowledge. However, further research is needed to explore the practical and policy implications of the framework, particularly with regard to ensuring transparency, accountability, and fairness in generative AI decision-making.
Recommendations
- ✓ Further research on the applications and limitations of the proposed Indexical Epistemology of High-Dimensional Spaces.
- ✓ Development of new methodologies for evaluating and mitigating biases in generative AI systems, taking into account the geometric space of meanings and navigational knowledge.