Can we build AI without losing control over it? – Sam Harris

Sam Harris (author of The Moral Landscape and host of the Waking Up podcast) discusses the need for AI safety – while fun to think about, we are unable to “marshal an appropriate emotional response” to improvements in AI and automation and the prospect of dangerous AI – it’s a failure of intuition to respond…

Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion. Points covered in the tutorial: the mathematical singularity; the technological singularity as a horizon of predictability; confusion…

Sam Harris on AI Implications – The Rubin Report

A transcription of Sam Harris’ discussion of the implications of strong AI during a recent appearance on The Rubin Report. Sam contrasts narrow AI with strong AI, discusses AI safety and the possibility of rapid AI self-improvement, notes that AI superintelligence may seem alien to us, and also brings up the idea that it is important…

Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I had a discussion with John Danaher on the Hedonistic Imperative & Superintelligence – a concern he has with HI is that it denies or de-emphasises some kind of moral agency – in moral theory there is a distinction between moral agents (being a responsible actor able to make moral…

The Singularity & Prediction – Can there be an Intelligence Explosion? – Interview with Marcus Hutter

Can intelligence explode? The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. What could it mean for intelligence to explode? We need to provide a more careful treatment of…

Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course,…

Vernor Vinge on the Technological Singularity

What is the Singularity? Vernor Vinge speaks about technological change, offloading cognition from minds into the environment, and the potential of Strong Artificial Intelligence. Vernor Vinge popularised and coined the term “Technological Singularity” in his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the…

Vernor Vinge on the Turing Test, Artificial Intelligence

On the coat-tails of the blockbuster film “The Imitation Game”, I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to the idea that the Turing Test may someday show that machines would ostensibly be (at least in controlled circumstances) indistinguishable…