Review: The Intelligence Explosion: When AI Beats Humans at Everything

When the machines outthink us, will they outmaneuver us? Barrat’s latest is a wake-up call we can’t afford to ignore. Having had the privilege of previewing James Barrat’s latest work, The Intelligence Explosion (Amazon link, Google Books link), I found it to be a compelling and thought-provoking exploration of the rapidly evolving landscape of artificial…

Complex Value Systems are Required to Realize Valuable Futures – A Critique

This post will be updated over the coming weeks. In his 2011 paper, Complex Value Systems are Required to Realize Valuable Futures, Eliezer Yudkowsky posits that aligning artificial general intelligence (AGI) with human values necessitates embedding the complex intricacies of human ethics into AI systems. He warns against oversimplification, suggesting that without a comprehensive inheritance…

AI Values: Satiable vs Insatiable

In short: Satiable values have diminishing returns with more resources, while insatiable values always want more, potentially leading to risky AI behaviour – it seems likely that AI with insatiable values could make existential trades, like gambling the world for a chance to double its resources. It may therefore be wise to design AI with satiable values to ensure stability,…

Bias in the Extrapolation Base: The Silent War Over AI’s Values

Nick Bostrom addresses concerns about biased influence on the extrapolation base in his discussions of indirect normativity (IN – i.e., an approach where the AI deduces ideal values or states of affairs rather than running with current values and states of affairs), especially coherent extrapolated volition (CEV) and value alignment, in his magnum opus Superintelligence….

On the Emergence of Biased Coherent Value Systems in AI as Value Risk

Firstly, values are important because they are the fundamental principles that guide decisions and actions in humans. They shape our understanding of right and wrong, and they influence how we interact with the world around us. In the context of AI, values are particularly important because they determine how AI systems will behave, what goals…

Survival, Cooperation, and the Evolution of Values in the Age of AI

No one in their right mind can deny that the struggle to survive animates natural selection. Though it’s a bit misleading to say that ‘survival is paramount – and that every strategy serves that goal’: survival plays a crucial role in evolution, but it is not the ultimate goal. As emphasised by Charles Darwin’s theory of…

Early Philosophical Groundwork for Indirect Normativity

Note: this is by no means exhaustive; its focus is on Western philosophy. I’ll expand and categorise more later… The earliest precursors to indirect normativity can be traced back to early philosophical discussions on how to ground moral decision-making in processes or frameworks rather than specific, static directives. While Nick Bostrom’s work on indirect normativity…