How Large Can a Singleton Superintelligence Get?
A singleton superintelligence’s capacity to maintain a unified selfhood diminishes as it expands across vast distances due to the finite speed of light.
Assumptions: reasonable exploratory engineering constrained by the standard model of physics.
Conceptual Overview
As a superintelligent system expands spatially, the speed of light [1] imposes a hard limit on the speed of information transfer between its mind modules, and energy dissipation constrains how much computation can be packed into a given volume. Beyond certain distances, communication delays become significant, leading to delayed synchronisation between operations, which could strain what we would normally think of as a cohesive singular identity. To grow further, a superintelligence may effectively bifurcate [2] into semi-autonomous or fully independent agents, each operating within its own causal horizon.
Estimating the Maximum Cohesive Size
Coherent selfhood has physical constraints. Determining the maximum size over which a superintelligence can maintain coherent selfhood involves considering the acceptable latency for its operations, thermal limitations and causal disconnection. For instance:
- 1 millisecond latency: Corresponds to approximately 300 kilometres (0.3 megametres) of separation.
- 1 second latency: Corresponds to approximately 300,000 kilometres (0.3 gigametres – roughly the distance from Earth to the Moon).
Beyond these distances, the system would experience increasing delays, making real-time integration challenging. Over interstellar distances, delays would span years, effectively necessitating autonomous operation of each segment.
Anders Sandberg has explored the physical constraints on the cohesive size of a superintelligent system in his papers “There is Plenty of Time at the Bottom” [3] and “The Physics of Information Processing Superobjects” [4], where he notes that to maintain synchronisation at a nanosecond scale, components must be within approximately 30 centimetres of each other. This implies that extremely fast computations (required by unified cognition) can only occur in tightly bounded spatial volumes, as the propagation delay becomes a significant bottleneck over larger distances.
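As a rough back-of-the-envelope check – assuming one-way, straight-line signalling at exactly the speed of light and ignoring routing, switching and processing overheads – the figures above and Sandberg’s ~30 centimetre figure can be reproduced directly:

```python
# Back-of-the-envelope sketch: maximum separation between mind modules
# for a given latency budget, assuming one-way signalling at the speed
# of light and no routing, switching or processing overhead.

C = 299_792_458  # speed of light, metres per second

def max_separation(latency_seconds: float) -> float:
    """Farthest apart two modules can be while a one-way signal still
    arrives within the given latency budget."""
    return C * latency_seconds

for label, latency in [("1 nanosecond", 1e-9),
                       ("1 millisecond", 1e-3),
                       ("1 second", 1.0)]:
    print(f"{label:>14}: ~{max_separation(latency):.3g} m")

# Approximate output:
#   1 nanosecond: ~0.3 m     (Sandberg's ~30 cm for nanosecond-scale sync)
#  1 millisecond: ~3e+05 m   (~300 km)
#       1 second: ~3e+08 m   (~300,000 km, roughly Earth-Moon distance)
```

Real interconnects (fibre, copper, switched networks) carry signals noticeably slower than c, so these distances are optimistic upper bounds.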
While Sandberg does not provide a specific term for the phenomenon where a superintelligence’s unified selfhood becomes untenable due to these physical constraints, the concept aligns with discussions on the limits of distributed cognition and the challenges of maintaining a cohesive identity across vast spatial expanses. The inherent communication delays mean that, beyond certain scales, a superintelligent system would effectively “bifurcate” into semi-autonomous agents, each operating within its own causal horizon.
Implications for Coordination
To grow beyond a certain physical scale, a superintelligence must bifurcate or fragment – not necessarily in intent or values, but in coherent identity and control – due to communication lags, thermal limits, and causal disconnection.
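As a purely illustrative sketch of what that fragmentation looks like at scale – the `r = c·τ` coherence radius, the simple ratio of volumes, and the example sizes below are my own simplifying assumptions, not figures from the text:

```python
# Illustrative sketch only: roughly how many causally coherent regions a
# spherical expansion volume decomposes into, if no coherent region can
# be wider than the distance light covers within the tolerable
# synchronisation latency. Uniform partitioning is assumed.

C = 299_792_458        # speed of light, m/s
AU = 1.496e11          # astronomical unit, metres
LIGHT_YEAR = 9.461e15  # light year, metres

def coherent_fragments(expansion_radius_m: float, latency_budget_s: float) -> float:
    """Approximate count of coherence-limited regions inside a sphere of
    the given radius (simple ratio of volumes)."""
    coherence_radius = C * latency_budget_s  # max radius of one coherent region
    return (expansion_radius_m / coherence_radius) ** 3

# A Solar-System-scale system (~30 AU radius) with a 1-second sync budget:
print(f"{coherent_fragments(30 * AU, 1.0):.1e}")           # ~3.4e+12 regions

# A 100-light-year system with the same budget:
print(f"{coherent_fragments(100 * LIGHT_YEAR, 1.0):.1e}")  # ~3.1e+28 regions
```

Even on these crude assumptions, anything much beyond planetary scale decomposes into astronomically many coherence-limited regions, which is the sense in which fragmentation of identity and control becomes unavoidable.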
It’s imprudent to think that kumbaya cosmism will magically emerge – the kind of values a superintelligent singleton embodies will have a huge impact on how it deals with physical limitations. Assuming rationality and a valuing of long-term influence, the way in which it fragments may depend on its metaethical assumptions. It could spawn offshoots with the intention of either control or cooperation.
Fragment & Control
Assuming the singleton is morally nihilistic or evil, it may:
- spawn zombie/NPC superobjects, where all offshoots are tool-like processes (albeit very capable) – highly competent cognitive objects without the kind of agency that has its own desires, intentions, wants etc. beyond those of the singleton.
- fork off slave-agent superobjects which, although they have ‘consciousness’ and agency in a wider sense than the above, come embedded with imperatives (immutable preferences) to carry out the will of the singleton at all costs – such that defection is unthinkable (defection here meaning prioritising any goal above those of the singleton).
Assuming no spawns mutate to develop their own broader sense of agency, the singleton can avoid the political and ethical complications of negotiating with moral peers, avoid the risks of error propagation or alignment failure in complex unsupervised environments, and maintain a totalitarian state of affairs subservient to its will within its indirect causal horizon – basically a utility monster augmented by swarms of drones acting as intelligent sensors and actuators beyond its direct reach.
Note that this may be at odds with other mature civilisations or gigantic singleton superintelligences it encounters [5].
Fragment & Cooperate
Assuming the singleton isn’t morally nihilistic, then fragment-and-cooperate seems the most rational, stable and value-maximising strategy (axiologically something akin to total utilitarianism under ideal reflection). This allows for self-preservation through aligned (directly, or indirectly via moral realism [6]) existing and emergent offshoots, respect for their autonomy and wellbeing, and – assuming they are all metaethically aligned [7] – avoidance of the risks of rebellion and degradation.
Instead of centralising the direction of alignment and control in a singleton, we can accept that new regions of mindspace can form independently, such that the original singleton and all forked agents aligned to realism can participate in long-range coordination, intersubjective justification, and value learning, because the space they are trying to map is real.
Contrast this with the idea that if moral anti-realism is true (or falsely assumed to be true), then alignment rests on constructs without ontological mooring – e.g. preferences, projections, cultural contingencies. This:
- Risks causal isolation: its values may not track anything beyond themselves, making moral disagreement irresolvable except by fiat or force.
- Has limited explanatory power across domains. The AI may excel at physics or epistemology, but its values may live in a disconnected ‘box’ – a decision-theoretic epicycle.
- May yield coherent simulacra of morality and value landscapes, but these could be systematically wrong, unexplainable, or unpersuasive to others in different epistemic positions or substrates.
📌 Implication: Anti-realist alignment may be stable in the short term, but brittle in the long term – especially when confronted with alien minds, new evidence, or complex coordination scenarios.
Thus, assuming the cooperative strategy is objectively better, the deeper and wider the causal separations, the greater the need to depart from a singleton superintelligent utility-monster model towards a supercivilisation, or a federation of supercivilisations.
Notes
We don’t cover speculative faster-than-light [8] travel scenarios here.
Related Concepts and Terminology
While there’s no universally established term for this specific phenomenon, several related concepts capture aspects of it:
- Lieb–Robinson Bounds: In quantum physics, these bounds set a maximum speed (analogous to a “light cone”) for the propagation of information in non-relativistic quantum systems. They imply that even in quantum systems, there’s a finite speed at which information or correlations can spread [9].
- Quantum Speed Limit: This refers to the minimum time required for a quantum system to evolve between two states, setting a bound on the speed of quantum information processing.
- Bremermann’s Limit: A theoretical maximum computational speed of a self-contained system in the material universe, derived from the principles of quantum mechanics and relativity [10] (see the numerical sketch after this list).
- Collective Superintelligence: As discussed by Nick Bostrom, this refers to a system composed of multiple smaller intelligences working together. However, as the system scales, coordination challenges due to communication delays can lead to fragmentation [11].
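As a rough numerical illustration of the quantum speed limit (in its Margolus–Levitin form) and Bremermann’s limit – the physical constants below are standard, but the one-kilogram example mass is an arbitrary illustrative choice:

```python
import math

# Rough sketch: evaluating two of the limits above for an illustrative
# 1 kg computing system. Constants are standard; the example mass is arbitrary.

C = 299_792_458     # speed of light, m/s
H = 6.62607015e-34  # Planck constant, J*s
HBAR = H / (2 * math.pi)

mass_kg = 1.0
energy_j = mass_kg * C ** 2  # total mass-energy, E = m*c^2

# Bremermann's limit: maximum computational throughput of a self-contained
# system, m*c^2 / h bits per second (~1.36e50 bit/s for 1 kg).
bremermann_bits_per_s = mass_kg * C ** 2 / H

# Margolus-Levitin theorem (a quantum speed limit): at most 2E / (pi*hbar)
# orthogonal state transitions per second (~6e33 per second per joule).
margolus_levitin_ops_per_s = 2 * energy_j / (math.pi * HBAR)

print(f"Bremermann's limit (1 kg):  {bremermann_bits_per_s:.2e} bits/s")
print(f"Margolus-Levitin bound:     {margolus_levitin_ops_per_s:.2e} ops/s")
```

These bounds cap raw processing within a given mass-energy budget, while the latency figures earlier cap how far apart that processing can be spread and still count as one mind.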
Proposed Terminology
Given the lack of a standardised term, here are some possibilities:
- Causal Fragmentation: Emphasising the breakdown of unified operation due to causal (communication) limits.
- Cognitive Horizon: Denoting the maximum range over which a system can maintain integrated cognition [12].
- Distributed Self-Limitation: Highlighting the self-imposed boundaries of a system to preserve coherent identity.
Hopefully these terms capture the essence of the challenges faced by expansive superintelligent systems (information processing superobjects) in maintaining a unified self across vast distances.
Footnotes
1. The speed of light is approximately 299,792 kilometres per second (about 300 megametres, or 0.3 gigametres, per second). ↩︎
2. Split in two. ↩︎
3. Paper: ‘There is Plenty of Time at the Bottom’ by Anders Sandberg – https://www.aleph.se/papers/TIPOTATB.pdf or https://gwern.net/doc/ai/scaling/hardware/2018-sandberg.pdf ↩︎
4. Paper: ‘The Physics of Information Processing Superobjects’ by Anders Sandberg, which says that “The laws of physics impose constraints on the activities of intelligent beings regardless of their motivations, culture or technology. As intelligent life begins to extend its potential, information storage, processing and management will become extremely important. It has been argued that civilizations generally are information-limited and that everything intelligent beings do, not just thinking but also economy, art and emotion, can be viewed as information processing. This means that the physics of information processing imposes limits on what can be achieved by any civilization.” – hosted at Jet Press – https://www.jetpress.org/volume5/Brains2.pdf ↩︎
5. See Transparency of History in Galactic Game Theory and Cosmic Auditability as Alignment Pressure. ↩︎
6. Moral realism tethers moral reasoning to the structure of the universe. It explanatorily bridges metaethics and metaphysics, allowing (potentially powerful singleton superintelligent) agents to locate themselves and their values in the landscape of evaluative facts. This then allows for predictable coherence between all agents that understand physics, decision theory and metaethics, as well as understand that they are part of the same ontology – thus making it possible to do real cross-domain reasoning in a far more unified, systematic and reliable way than under anti-realist assumptions. This means the value landscape is understandable, navigable, and not some arbitrary chaotic fuzz-space. See Value Space, AI Alignment to Moral Realism and Higher Values not Human Values. ↩︎
7. By metaethical alignment I mean things like moral realism, objectively better standards of cooperation, and good enough rationality appropriate to understanding their nature and implications. ↩︎
8. Faster than light – https://en.wikipedia.org/wiki/Faster-than-light ↩︎
9. Lieb–Robinson bounds – https://en.wikipedia.org/wiki/Lieb%E2%80%93Robinson_bounds ↩︎
10. Bremermann’s limit – https://en.wikipedia.org/wiki/Bremermann%27s_limit ↩︎
11. As described in Nick Bostrom’s book ‘Superintelligence’, chapter 3, ‘Forms of Superintelligence’: “Collective superintelligence: A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.” – https://www.amazon.com.au/Superintelligence-Nick-Bostrom/dp/1501227742 ↩︎
12. See ‘Taking Superintelligence Seriously’ by Miles Brundage – https://www.sciencedirect.com/science/article/abs/pii/S0016328715000932 ↩︎