Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 – The Future of Intelligence
Should science and society welcome ‘the singularity’ – the hypothetical moment when artificial intelligence surpasses human intelligence?
The discussion has been growing for decades, institutes dedicated to the problem of AI friendliness have sprung up, and more recently the ideas have found popular advocates. Superintelligent machines could certainly help solve classes of problems that humans struggle with, but if not designed well they may cause more problems than they solve.
Is the question of fear or hope in AI a false dichotomy?
While Kurzweil agrees that AI risks are real, he argues that we already face comparable risks from biotechnology – I think Kurzweil believes we can solve the biotech threat and other risks through building superintelligence.
Stuart Russell believes that a) we should be exactly sure what we want before we let the AI genie out of the bottle, and b) it’s a technological problem in much the same way as the containment of nuclear fusion is a technological problem.
Max Tegmark says we should both welcome and fear the Technological Singularity – we shouldn’t just bumble into it unprepared. All technologies have been double-edged swords; in the past we learned from our mistakes (e.g. with out-of-control fires), but with AI we may only get one chance.
Harry Shum says we should be focusing on what we believe we can develop with AI in the next few decades. We still find it difficult to talk concretely about AGI, and most of the public’s fears are about killer robots.
Maggie Boden relays an audience question: how will AI cope with our own lack of development in ethical and moral norms?
Stuart Russell answers that machines have to come to understand what human values are. If the first pseudo-general-purpose AIs don’t grasp human values well enough, one might end up cooking its owner’s cat – a mistake that could irreparably tarnish the AI and home-robot industry.
Kurzweil adds that human society is becoming more ethical – statistically, it seems we are making moral progress.
Max Tegmark points out that intelligence can be defined as the degree of ability to achieve goals, so if we are building highly intelligent AI we can’t ignore the question of what goals to give the system. We need to make AI systems understand what humans really want, not just what they say they want.
Harry Shum says that an important ethical question for AI systems is how to handle data and user privacy.
Panelists: Harry Shum (EVP of Technology and Research, Microsoft), Max Tegmark (Cosmologist, MIT), Stuart Russell (Prof. of Computer Science, UC Berkeley) and Ray Kurzweil (Futurist, Google Director of Engineering). Moderator: Margaret Boden (Prof. of Cognitive Science, Uni. of Sussex).
This debate is from the 2015 edition of the Nobel Week Dialogue, held in Gothenburg, Sweden on 9 December.