Bias in the Extrapolation Base: The Silent War Over AI’s Values

Nick Bostrom addresses concerns about biased influence on the extrapolation base in his discussions of indirect normativity (IN), an approach in which the AI derives ideal values or states of affairs rather than running with current values and states of affairs, especially coherent extrapolated volition (CEV), and value alignment in his magnum opus Superintelligence…

Early Philosophical Groundwork for Indirect Normativity

Note: this is by no means exhaustive; its focus is on Western philosophy. I’ll expand and categorise more later… The earliest precursors to indirect normativity can be traced back to early philosophical discussions on how to ground moral decision-making in processes or frameworks rather than in specific, static directives. While Nick Bostrom’s work on indirect normativity…

Indirect Normativity

Alignment Challenges Hardly a week goes by without some new breakthrough in AI; its capabilities are vaulting to new heights all the time, and it is increasingly hard to deny how critical a role ethics has to play in shaping AI safety. It is therefore deeply concerning to see such significant disagreement among…

AI Alignment to Moral Realism

The principles behind the development of powerful AI carry exceptionally high stakes. This post considers AI alignment, the difficulties of aligning AI to human values, especially when humans themselves aren’t aligned (inconsistent, in conflict, or ethically questionable), and to what degree we should inform alignment efforts from impartial perspectives or the ‘point of view of the…