Nick Bostrom: Failure to Develop Superintelligence Would Be a Catastrophe

Nick Bostrom dropped a bombshell that cuts through some of the haze around AI doom-and-gloom: “I think ultimately this transition to the superintelligence era is one we should do.” That’s right—should. Not “might” or “could,” but a firm nudge toward a future where humanity doesn’t just survive AI but thrives with it. Even more striking?…

Bias in the Extrapolation Base: The Silent War Over AI’s Values

Nick Bostrom addresses concerns about biased influence on the extrapolation base in his discussions of indirect normativity (an approach where the AI deduces ideal values or states of affairs rather than running with current ones), especially coherent extrapolated volition (CEV), and value alignment in his magnum opus Superintelligence…

Human Values Approximate Ideals in Objective Value Space

Value Space and “Good” Human Values: Human values can be conceptualised as occupying regions in a vast, objective, multidimensional “value space.” These regions reflect preferences for cooperation, survival, flourishing, and minimising harm, among other positive traits. If transformative AI or superintelligence (TAI / SI) can approximate the subset of human values deemed “good” (e.g., compassion, fairness, cooperation), while avoiding…
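To make the geometric intuition in that excerpt concrete, here is a minimal toy sketch in Python (assuming NumPy; the dimensions, samples, and “good region” threshold are all hypothetical illustrations, not anything from the post or from Bostrom): value systems become points in an n-dimensional space, the “good” subset is modelled as a ball around a centroid of positively regarded values, and “approximating good values” is checked as membership in that region.

```python
import numpy as np

# Toy sketch of the "value space" framing above. Everything here is a
# hypothetical illustration: the dimensions, samples, and threshold are
# invented for the example, not taken from the post or from Bostrom.

rng = np.random.default_rng(0)
DIMS = 8  # imagined axes, e.g. cooperation, fairness, harm-avoidance, ...

# Pretend these are sampled human value systems already judged "good".
good_values = rng.normal(loc=1.0, scale=0.3, size=(100, DIMS))

# Model the "good" region as a ball around the centroid of those samples,
# with a radius that contains ~95% of them.
centroid = good_values.mean(axis=0)
radius = np.percentile(np.linalg.norm(good_values - centroid, axis=1), 95)

def approximates_good_values(values: np.ndarray) -> bool:
    """True if a value vector falls inside the modelled 'good' region."""
    return float(np.linalg.norm(values - centroid)) <= radius

aligned = centroid + rng.normal(scale=0.1, size=DIMS)  # near the region
misaligned = -centroid                                  # far outside it
print(approximates_good_values(aligned))     # very likely True
print(approximates_good_values(misaligned))  # False
```

The ball-around-a-centroid region is only one crude way to model the “good” subset; the point is just that, under this framing, alignment becomes a question of where an AI’s values land in the space.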

Open-Ended vs. Closed-Minded Conceptions of Superintelligence

This talk is part of the ‘Stepping Into the Future’ conference. Abstract: Superintelligence, the next phase beyond today’s narrow AI and tomorrow’s AGI, almost intrinsically evades our attempts at detailed comprehension. Yet very different perspectives on superintelligence exist today and have concrete influence on thinking about matters ranging from AGI architectures to technology regulation. One paradigm…

Nick Bostrom: Why Focus on Existential Risk related to Machine Intelligence?

One can think of existential risk as a subcategory of global catastrophic risk: while GCRs are really bad, civilization has the potential to recover from such a global catastrophic disaster. An existential risk is one from which there is no chance of recovery. An example of the sort of disaster that fits the…

Vernor Vinge on the Turing Test, Artificial Intelligence

On the coat-tails of the blockbuster film “The Imitation Game” I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to the idea that the Turing Test may someday show that machines would ostensibly be (at least in controlled circumstances) indistinguishable…