Complex Value Systems are Required to Realize Valuable Futures – A Critique

This post will be updated over the coming weeks. In his 2011 paper, Complex Value Systems are Required to Realize Valuable Futures, Eliezer Yudkowsky posits that aligning artificial general intelligence (AGI) with human values necessitates embedding the complex intricacies of human ethics into AI systems. He warns against oversimplification, suggesting that without a comprehensive inheritance…


Nick Bostrom – AI Ethics – From Utility Maximisation to Humility & Cooperation

Norm-sensitive cooperation might beat brute-force optimisation when aligning AI with the complex layers of human value – and possibly cosmic ones too. Nick Bostrom suggests that AI systems designed with humility and a cooperative orientation are more likely to navigate the complex web of human, and potentially cosmic, norms than those driven by rigid utility…

AI Values: Satiable vs Insatiable

In short: satiable values have diminishing returns as resources grow, while insatiable values always want more, potentially leading to risky AI behaviour. It seems likely that an AI with insatiable values could make existential trades, like gambling the world for a chance to double its resources. A potential remedy, therefore, is to design AI with satiable values to ensure stability,…
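The gamble described above can be made concrete with a toy expected-utility calculation. This is a minimal sketch, not from the post itself: the function names, the 51% win probability, and the choice of a logarithmic curve to stand in for "satiable" (diminishing-returns) values are all illustrative assumptions.

```python
import math

def linear_utility(r):
    # Insatiable stand-in: utility scales linearly with resources,
    # so twice the resources is always twice as good.
    return r

def log_utility(r):
    # Satiable stand-in (assumption): log(1 + r) has sharply
    # diminishing returns, so doubling resources adds little utility.
    return math.log1p(r)

def expected_utility(u, resources, p_win=0.51):
    # Double-or-nothing gamble: with probability p_win resources
    # double; otherwise everything (the "world") is lost.
    return p_win * u(2 * resources) + (1 - p_win) * u(0)

resources = 100.0
for name, u in [("linear (insatiable)", linear_utility),
                ("log (satiable)", log_utility)]:
    keep = u(resources)
    gamble = expected_utility(u, resources)
    print(f"{name}: keep={keep:.2f}, gamble EU={gamble:.2f}, "
          f"takes gamble={gamble > keep}")
```

Under the linear (insatiable) utility the 51% double-or-nothing gamble has higher expected utility than standing pat, so the agent keeps taking existential bets; under the diminishing-returns utility the same gamble is declined, which is the stability the post gestures at.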

AI Mural by Clarote & AI4Media

Taking AI Welfare Seriously – Future Day talk with Jeff Sebo

About: “I argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood – i.e., of AI systems with their own interests and moral significance – is no longer an issue only for sci-fi or the…

Bias in the Extrapolation Base: The Silent War Over AI’s Values

Nick Bostrom addresses concerns about biased influence on the extrapolation base in his discussions of indirect normativity (IN – i.e., an approach where the AI deduces ideal values or states of affairs rather than running with current values and states of affairs), especially coherent extrapolated volition (CEV), and value alignment in his magnum opus Superintelligence…

The Is and the Ought


Back to basics. The “is” is what science and rationality tell us about the world and logic, while the “ought” represents moral obligations, values, or prescriptions about how the world should be, rather than how it is. What is an “is”? The “is” refers to factual statements about the world, encompassing empirical observations, logical truths…