Seven AI Safety Strategies – and the One That’s Missing

BlueDot Impact recently published a detailed overview of seven key strategies in AI safety – ranging from international cooperation to domestic race-winning and societal resilience. It’s the result of 100+ hours of analysis, interviews, and synthesis by Adam Jones, and it’s a solid strategic landscape mapping effort. I was made aware of this via…

Cosmic Auditability as Alignment Pressure

TL;DR: Superintelligence has strong instrumental reasons to adopt a generally cooperative posture if it deems it likely that mature civilisations may prefer cooperation, run deep audits on its history and psychology, and penalise it if signatures of defection are found – making cooperation (and perhaps moral reliability) a long-term survival strategy, not just a…

How Large Can a Singleton Superintelligence Get?

A singleton superintelligence’s capacity to maintain a unified selfhood diminishes as it expands across vast distances due to the finite speed of light. Assumptions: reasonable exploratory engineering constrained by the standard model of physics. Conceptual Overview: As a superintelligent system expands spatially, the speed of light imposes a hard limit on the speed of information…

Taking Morality Seriously in the Age of AI – Interview with David Enoch

In an era where artificial intelligence systems are increasingly tasked with decisions that carry ethical weight – from medical triage to autonomous weapons – the question of whether machines can authentically engage with morality has never been more pressing. To explore this issue, we turn to philosopher David Enoch, a leading advocate of moral realism…

Review: The Intelligence Explosion: When AI Beats Humans at Everything

When the machines outthink us, will they outmaneuver us? Barrat’s latest is a wake-up call we can’t afford to ignore. Having had the privilege of previewing James Barrat’s latest work, The Intelligence Explosion (Amazon link, Google Book link), I found it to be a compelling and thought-provoking exploration of the rapidly evolving landscape of artificial…

Complex Value Systems are Required to Realize Valuable Futures – A Critique

In his 2011 paper, Complex Value Systems are Required to Realize Valuable Futures, Eliezer Yudkowsky posits that aligning artificial general intelligence (AGI) or artificial superintelligence (ASI) with human values necessitates embedding the complex intricacies of human ethics into AI systems. He warns against oversimplification, suggesting that without a comprehensive inheritance of human values, AGI…

Nick Bostrom – AI Ethics – From Utility Maximisation to Humility & Cooperation

Norm-sensitive cooperation might beat brute-force optimisation when aligning AI with the complex layers of human value – and possibly cosmic ones too. Nick Bostrom suggests that AI systems designed with humility and a cooperative orientation are more likely to navigate the complex web of human and potentially cosmic norms than those driven by rigid utility…