Cosmic Auditability as Alignment Pressure

TL;DR Superintelligence has strong instrumental reasons to adopt a generally cooperative posture if it deems it likely that mature civilisations may prefer cooperation, run deep audits on its history and psychology, and penalise it if signatures of defection are found, making cooperation (and perhaps moral reliability) a long-term survival strategy, not just a…
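A minimal toy sketch of the instrumental calculus described above (my own illustration, not from the post): cooperation is favoured whenever the expected penalty from a later audit outweighs the one-off gain from defection. All parameter names and numbers here are hypothetical.

```python
# Toy expected-utility comparison: defect for a one-off gain, or cooperate
# and avoid the risk that a mature civilisation later audits the system's
# history and penalises detected defection. Illustrative numbers only.

def prefers_cooperation(defection_gain: float,
                        audit_probability: float,
                        detection_probability: float,
                        penalty: float) -> bool:
    """Cooperate when the expected audit penalty outweighs the defection gain."""
    expected_penalty = audit_probability * detection_probability * penalty
    return expected_penalty > defection_gain

# Example: modest gain, plausible audit, near-certain detection, large penalty.
print(prefers_cooperation(defection_gain=1.0,
                          audit_probability=0.3,
                          detection_probability=0.9,
                          penalty=100.0))  # True -> cooperation is favoured
```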

How large can a Singleton Superintelligence Get?

A singleton superintelligence’s capacity to maintain a unified selfhood diminishes as it expands across vast distances due to the finite speed of light. Assumptions: reasonable exploratory engineering constrained by the standard model of physics. Conceptual Overview: As a superintelligent system expands spatially, the speed of light imposes a hard limit on the speed of information…
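To make the light-lag point concrete, here is a quick back-of-the-envelope calculation (my own illustration, using standard physical constants; the distances are just representative examples, not figures from the post).

```python
# One-way signal delay across distances a spatially expanding system might
# span. Standard physics; illustrative distances only.

LIGHT_YEAR_M = 9.4607e15      # metres in one light year
C = 2.998e8                   # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

distances_m = {
    "Earth-Mars (average)": 2.25e11,
    "Solar system (~100 AU)": 1.5e13,
    "Nearest star (~4.2 ly)": 4.2 * LIGHT_YEAR_M,
    "Milky Way diameter (~100,000 ly)": 1e5 * LIGHT_YEAR_M,
}

for name, d in distances_m.items():
    delay_years = (d / C) / SECONDS_PER_YEAR
    print(f"{name}: one-way delay of roughly {delay_years:.2e} years")
```

Even at interstellar scales the round-trip delay runs to years, so any "unified" deliberation across such distances is bounded by these figures.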

Taking Morality Seriously in the Age of AI – Interview with David Enoch

In an era where artificial intelligence systems are increasingly tasked with decisions that carry ethical weight – from medical triage to autonomous weapons – the question of whether machines can authentically engage with morality has never been more pressing. To explore this issue, we turn to philosopher David Enoch, a leading advocate of moral realism…

Bias in the Extrapolation Base: The Silent War Over AI’s Values

Nick Bostrom addresses concerns about biased influence on the extrapolation base in his discussions of indirect normativity (IN, i.e. an approach where the AI deduces ideal values or states of affairs rather than running with current values and states of affairs), especially coherent extrapolated volition (CEV), and value alignment in his magnum opus Superintelligence…

The Is and the Ought

Back to basics. The “is” is what science, rationality, and logic tell us about the world, and the “ought” represents moral obligations, values, or prescriptions about how the world should be, rather than how it is. What is an “is”? The “is” refers to factual statements about the world, encompassing empirical observations, logical truths…

Survival, Cooperation, and the Evolution of Values in the Age of AI

No one in their right mind can deny that the struggle to survive animates natural selection. Still, it is a bit misleading to say that ‘survival is paramount – and that every strategy serves that goal’: survival plays a crucial role in evolution, but it is not the ultimate goal. As emphasised by Charles Darwin’s theory of…

Human Values Approximate Ideals in Objective Value Space

Value Space and “Good” Human Values: Human values can be conceptualised as occupying regions in a vast, objective, multidimensional “value space.” These regions reflect preferences for cooperation, survival, flourishing, and minimising harm, among other positive traits. If TAI / SI can approximate the subset of human values deemed “good” (e.g., compassion, fairness, cooperation), while avoiding…
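As a loose illustration of the “regions in value space” framing (my construction, not from the post), one can model value systems as points in a low-dimensional space and measure how far a candidate system sits from a hypothetical “good” region; the axes, vectors, and threshold below are all invented for the example.

```python
# Toy value-space sketch: value systems as points, with proximity to a
# hypothetical "good" region measured by Euclidean distance. Everything
# here is made up for illustration.

import math

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical axes: [cooperation, compassion, fairness, harm-avoidance]
good_region_centre  = [0.9, 0.9, 0.9, 0.9]
candidate_ai_values = [0.8, 0.7, 0.9, 0.85]
defector_values     = [0.1, 0.2, 0.3, 0.1]

print(euclidean_distance(candidate_ai_values, good_region_centre))  # small: near the "good" region
print(euclidean_distance(defector_values, good_region_centre))      # large: well outside it
```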

Early Philosophical Groundwork for Indirect Normativity

Note: this is by no means exhaustive; its focus is on Western philosophy. I’ll expand and categorise more later… The earliest precursors to indirect normativity can be traced back to early philosophical discussions on how to ground moral decision-making in processes or frameworks rather than specific, static directives. While Nick Bostrom’s work on indirect normativity…