AI Safety: From Control to Value Alignment

Short: In evaluating strategies for AI safety, capability control may buy us time, and it may even calm our nerves, but it won’t hold once AI outsmarts the safeguards. The deeper challenge is value alignment: increasing the likelihood that superintelligent AI successfully navigates the landscape of value, optimising for coherent and grounded notions of the…

Meditations on Geth

As a posthumanist who thinks a lot about AI and the future of life, I find the Geth in Mass Effect fascinating because they blur the line between individual and collective intelligence. There is quite a lot of philosophical depth to explore, especially if you are interested in either: a) role-playing beyond maxing out your…

Are Machines Capable of Morality? Join Professor Colin Allen!

The Ethics of AI: Why We Need Moral Machines, Not Just Smart Ones. In an era defined by rapid advancements in artificial intelligence, we often find ourselves wrestling with profound ethical questions. One of the most provocative of these is whether machines can, or should, be given a sense of morality. In this recent video…

Cunning + Luck ≈> High IQ

Cunning + luck ≈> brute-force intelligence (cascading caveats ensue). In a densely populated landscape of naively or sub-adequately regulated selection pressures, raw cunning combined with moderate intelligence and high risk tolerance will often outperform high IQ (especially in the short to medium term). This holds so long as peak cunning matches or exceeds the higher…

Seven AI Safety Strategies – and the One That’s Missing

BlueDot Impact recently published a detailed overview of seven key strategies in AI safety, ranging from international cooperation to domestic race-winning and societal resilience. It’s the result of 100+ hours of analysis, interviews, and synthesis by Adam Jones, and it’s a solid effort at mapping the strategic landscape. I was made aware of this via…

Cosmic Auditability as Alignment Pressure

TL;DR Superintelligence has strong instrumental reasons to adopt a generally cooperative posture if it deems it likely that mature civilisations prefer cooperation, run deep audits on its history and psychology, and penalise it if signatures of defection are found, making cooperation (and perhaps moral reliability) a long-term survival strategy, not just a…

How Large Can a Singleton Superintelligence Get?

A singleton superintelligence’s capacity to maintain a unified selfhood diminishes as it expands across vast distances, due to the finite speed of light. Assumptions: reasonable exploratory engineering constrained by the Standard Model of physics. Conceptual overview: as a superintelligent system expands spatially, the speed of light imposes a hard limit on the speed of information…
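
A rough illustrative bound (my own figures, not from the post): the fastest any two parts of such a system can synchronise state is set by the light round-trip time,

\[ t_{\text{sync}} \;\ge\; \frac{2R}{c} \]

where \(R\) is the system’s radius and \(c\) the speed of light. At \(R = 1\) light-year the floor is already two years; at galactic radii (tens of thousands of light-years) it is on the order of \(10^5\) years, which is the intuition behind the claim above.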