Review: The Intelligence Explosion: When AI Beats Humans at Everything

When the machines outthink us, will they outmaneuver us? Barrat’s latest is a wake-up call we can’t afford to ignore. Having had the privilege of previewing James Barrat’s new book, The Intelligence Explosion (Amazon link, Google Books link), I found it to be a compelling and thought-provoking exploration of the rapidly evolving landscape of artificial…

Complex Value Systems are Required to Realize Valuable Futures – A Critique

This post will be updated over the coming weeks. In his 2011 paper, Complex Value Systems are Required to Realize Valuable Futures, Eliezer Yudkowsky posits that aligning artificial general intelligence (AGI) with human values necessitates embedding the complex intricacies of human ethics into AI systems. He warns against oversimplification, suggesting that without a comprehensive inheritance…

AI Values: Satiable vs Insatiable

In short: satiable values have diminishing returns with more resources, while insatiable values always want more, potentially leading to risky AI behaviour. It seems likely that an AI with insatiable values could make existential trades, such as gambling the world for a chance to double its resources. Designing AI with satiable values may therefore ensure stability,…


Taking AI Welfare Seriously – Future Day talk with Jeff Sebo

About: “I argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood, i.e. of AI systems with their own interests and moral significance, is no longer an issue only for sci-fi or the…

Bias in the Extrapolation Base: The Silent War Over AI’s Values

Nick Bostrom addresses concerns about biased influence on the extrapolation base in his discussions of indirect normativity (an approach where the AI deduces ideal values or states of affairs rather than running with current values and states of affairs), especially coherent extrapolated volition (CEV) and value alignment, in his magnum opus Superintelligence….

Our Big Oops: We Broke Humanity’s Superpower

About: Robin Hanson will give a talk at Future Day 2025 on cultural drift. Abstract: Humanity’s superpower is cultural evolution, which still works well for behaviors that can easily vary locally, like most tech and business practices. But modernity has plausibly broken our evolution of shared norms and values, as that needs natural selection of…

Crash Landing: Wages, Wealth, and the Case for Universal Basics

A panel with James Hughes, James Newton-Thomas, Lev Lafayette and Adam Ford (chair) at Future Day 2025. About: If AI drives human wages to the ground, should everything else crash too? Venture capitalist Marc Andreessen envisions a future where AI-induced wage collapse leads to abundance, but what about land prices, investor returns, corporate profits, and living…