
A Mathematical Theory of Agency and Intelligence

arXiv:2602.22519v1 — Abstract: To operate reliably under changing conditions, complex systems require feedback on how effectively they use resources, not just whether objectives are met. Current AI systems process vast information to produce sophisticated predictions, yet predictions can appear successful while the underlying interaction with the environment degrades. What is missing is a principled measure of how much of the total information a system deploys is actually shared between its observations, actions, and outcomes. We prove this shared fraction, which we term bipredictability, P, is intrinsic to any interaction, derivable from first principles, and strictly bounded: P can reach unity in quantum systems, P ≤ 0.5 in classical systems, and lower still once agency (action selection) is introduced. We confirm these bounds in a physical system (double pendulum), reinforcement learning agents, and multi-turn LLM conversations. These results distinguish agency from intelligence: agency is the capacity to act on predictions, whereas intelligence additionally requires learning from interaction, self-monitoring of its learning effectiveness, and adapting the scope of observations, actions, and outcomes to restore effective learning. By this definition, current AI systems achieve agency but not intelligence. Inspired by thalamocortical regulation in biological systems, we demonstrate a feedback architecture that monitors P in real time, establishing a prerequisite for adaptive, resilient AI.
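To make the "shared fraction of information" idea concrete, the sketch below estimates a proxy for P from sampled data as the mutual information between observations and outcomes divided by their joint entropy. This is an illustrative reading only: the paper's exact definition of P (and its treatment of actions as a third variable) may differ, and the histogram-based estimators here are simple plug-in approximations.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from paired samples, via a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def joint_entropy(x, y, bins=16):
    """Plug-in estimate of the joint Shannon entropy H(X,Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    nz = p > 0
    return float(-(p[nz] * np.log2(p[nz])).sum())

def bipredictability(obs, out, bins=16):
    """Hypothetical proxy for P: the fraction of deployed information that is
    shared, I(obs; out) / H(obs, out). Bounded in [0, 1] by construction."""
    h = joint_entropy(obs, out, bins)
    return mutual_information(obs, out, bins) / h if h > 0 else 0.0
```

On this proxy, tightly coupled observation–outcome pairs yield P near 1, while independent streams yield P near 0, matching the intuition that P tracks how much of a system's information budget is actually shared across the interaction.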

Executive Summary

This article develops a novel mathematical theory of agency and intelligence by introducing the concept of bipredictability, P, which measures the shared fraction of information between observations, actions, and outcomes in complex systems. The authors prove that P is intrinsic to any interaction, derivable from first principles, and strictly bounded: it can reach 1 in quantum systems, is at most 0.5 in classical systems, and drops further once action selection is introduced. They argue that agency is the capacity to act on predictions, while intelligence additionally requires learning from interaction, self-monitoring of learning effectiveness, and adapting to restore effective learning. The authors propose a feedback architecture that monitors P in real time, paving the way for adaptive and resilient AI. This theory has significant implications for the development of AI systems and sharpens the distinction between agency and intelligence.
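The real-time monitoring architecture described above can be sketched as a sliding-window monitor that tracks a proxy for P and flags degradation. Everything here is illustrative: the class name, window/threshold parameters, and the histogram-based P estimate are assumptions for the sketch, not the paper's implementation of thalamocortical-style regulation.

```python
from collections import deque
import numpy as np

class PMonitor:
    """Hypothetical real-time monitor: tracks a proxy for bipredictability P
    over a sliding window of (observation, outcome) pairs and flags when the
    shared-information fraction falls below a threshold."""

    def __init__(self, window=500, bins=12, threshold=0.1):
        self.obs = deque(maxlen=window)
        self.out = deque(maxlen=window)
        self.bins = bins
        self.threshold = threshold

    def update(self, observation, outcome):
        """Record one interaction step."""
        self.obs.append(observation)
        self.out.append(outcome)

    def p_estimate(self):
        """Plug-in estimate of I(obs; out) / H(obs, out) over the window."""
        x, y = np.asarray(self.obs), np.asarray(self.out)
        joint, _, _ = np.histogram2d(x, y, bins=self.bins)
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        mi = (p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum()
        h = -(p[nz] * np.log2(p[nz])).sum()
        return float(mi / h) if h > 0 else 0.0

    def degraded(self):
        """True once the window is full and P has dropped below the threshold,
        i.e. the cue to adapt the scope of observations and actions."""
        return len(self.obs) == self.obs.maxlen and self.p_estimate() < self.threshold
```

In use, a controller would call `update` each step and treat `degraded()` as the trigger for the adaptive response the authors describe: re-scoping observations, actions, and outcomes until effective learning, as measured by P, is restored.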

Key Points

  • Introduction of bipredictability (P) as a measure of shared information in complex systems
  • Derivation of P from first principles and its strict bounds
  • Distinction between agency and intelligence, with agency as the capacity to act on predictions and intelligence requiring learning, self-monitoring, and adaptation

Merits

Strength in Theoretical Foundation

The article provides a rigorous mathematical framework for understanding agency and intelligence, which is a significant contribution to the field of AI research.

Demerits

Limitation in Scope

The article focuses primarily on the theoretical aspects of bipredictability and its bounds; its empirical validation is confined to three controlled settings (a double pendulum, reinforcement learning agents, and multi-turn LLM conversations), leaving application to messier real-world systems untested.

Expert Commentary

This article represents a significant advancement in the field of AI research, as it provides a novel theoretical framework for understanding agency and intelligence. The concept of bipredictability is a valuable addition to the field, as it provides a measure of how well AI systems understand the relationships between their observations, actions, and outcomes. However, the article's primary focus on theoretical aspects may limit its practical applications. Nevertheless, the implications of this research are far-reaching, and it has the potential to revolutionize the development of AI systems. As such, it is essential for policymakers, industry leaders, and researchers to engage with this work and explore its potential applications and limitations.

Recommendations

  • Further empirical validation of the concept of bipredictability and its bounds in various real-world scenarios
  • Development of more nuanced approaches to AI development and regulation, taking into account the distinction between agency and intelligence
