
Nick Bostrom – AI Ethics – From Utility Maximisation to Humility & Cooperation

Norm-sensitive cooperation might beat brute-force optimisation when aligning AI with the complex layers of human value – and possibly cosmic ones too. Nick Bostrom suggests that AI systems designed with humility and a cooperative orientation are more likely to navigate the complex web of human and potentially cosmic norms than those driven by rigid utility…

Nick Bostrom: Failure to Develop Superintelligence Would Be a Catastrophe

Nick Bostrom dropped a bombshell that cuts through some of the haze around AI doom-and-gloom: “I think ultimately this transition to the superintelligence era is one we should do.” That’s right—should. Not “might” or “could,” but a firm nudge toward a future where humanity doesn’t just survive AI but thrives with it. Even more striking?…

AI Values: Satiable vs Insatiable

In short: satiable values have diminishing returns as resources grow, while insatiable values always want more – a difference that can lead to risky AI behaviour. It seems likely that an AI with insatiable values could make existential trades, like gambling the world for a chance to double its resources. It may therefore be wise to design AI with satiable values to ensure stability,…
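The double-or-nothing trade can be made concrete with a toy expected-utility calculation. This is an illustrative sketch, not Bostrom's formalism – the utility functions and numbers below are arbitrary assumptions chosen only to show the contrast:

```python
import math

# Toy utility functions — arbitrary illustrative choices, not Bostrom's definitions.

def satiable_utility(resources):
    # Bounded above: returns diminish, so doubling a large stock adds little.
    return 1 - math.exp(-resources)

def insatiable_utility(resources):
    # Linear: twice the resources is always twice as good.
    return resources

def takes_the_gamble(utility, resources, p_win=0.51):
    # Would an expected-utility maximiser risk everything for a
    # slightly-better-than-fair chance to double its resources?
    keep = utility(resources)
    gamble = p_win * utility(2 * resources) + (1 - p_win) * utility(0)
    return gamble > keep

print(takes_the_gamble(insatiable_utility, 10))  # True  — bets the world
print(takes_the_gamble(satiable_utility, 10))    # False — already near-satiated
```

The linear agent accepts any better-than-fair gamble no matter how much it already has; the bounded agent refuses, because near satiation the downside (losing everything) dwarfs the upside (doubling up).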

AI Alignment to Moral Realism

The principles behind the development of powerful AI carry exceptionally high stakes. This post considers AI alignment, the difficulty of aligning AI to human values – especially if humans themselves aren't aligned (inconsistent, in conflict, or ethically questionable) – and to what degree alignment efforts should be informed by impartial perspectives or the ‘point of view of the…

Vulnerable World Hypothesis

Nick Bostrom’s “Vulnerable World Hypothesis” (VWH) explores the idea that technological development could expose vulnerabilities, making it extraordinarily easy for individuals or small groups to cause widespread harm. The hypothesis presents various scenarios, categorized into different “balls” drawn from a hypothetical “urn of invention”, symbolizing different types of technological developments. Bostrom suggests that as humanity pulls…

The Simulation Argument – How likely is it that we are living in a simulation?

The simulation hypothesis doesn’t seem to be a terse, parsimonious explanation for the universe we live in. If what matters most is simulating ancestors, what’s the motivation for all the hugely detailed rendering of space? Why not just simulate Earth, or our solar system, or our galaxy? People often jump to the conclusions…
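For reference, the quantitative core of Bostrom's original argument comes down to a single ratio. The sketch below is a hedged illustration – the parameter values are made-up inputs for demonstration, not estimates of the real quantities:

```python
def simulated_fraction(f_p, n_sims):
    # Bostrom's ratio: if a fraction f_p of civilisations reach a
    # simulation-capable stage and each runs n_sims ancestor simulations
    # on average, the share of human-like observers who are simulated is:
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Arbitrary illustrative inputs — not estimates.
print(simulated_fraction(0.01, 1_000_000))  # ~0.9999: simulated observers dominate
print(simulated_fraction(1e-12, 1))         # ~1e-12: almost everyone is unsimulated
```

The ratio shows why the argument is a trilemma: unless `f_p` or `n_sims` is driven towards zero (almost no civilisation becomes capable, or almost none chooses to simulate), the simulated observers vastly outnumber the unsimulated ones.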

Nick Bostrom: Why Focus on Existential Risk related to Machine Intelligence?

One can think of an existential risk as a subcategory of global catastrophic risk – while GCRs are really bad, civilization has the potential to recover from a global catastrophic disaster. An existential risk is one from which there is no possibility of recovery. An example of the sort of disaster that fits the…