Posts

AI: The Story So Far – Stuart Russell

Awesome to have Stuart Russell discussing AI safety – a very important topic. For too long people have associated AI safety with Terminator; unfortunately, the human condition seems such that people often don’t give themselves permission to take non-mainstream ideas seriously unless they see a tip of the hat from an authority figure.

During the presentation Stuart brings up a nice quote by Norbert Wiener:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

– Norbert Wiener

P.S. Stuart Russell co-authored Artificial Intelligence: A Modern Approach with Peter Norvig – arguably the most popular AI textbook.

The lecture was presented at the 2016 Colloquium Series on Robust and Beneficial AI (CSRBAI) hosted by the Machine Intelligence Research Institute (MIRI) and Oxford’s Future of Humanity Institute (FHI).

What I’m finding is that senior people in the field who have never publicly evinced any concern before are privately thinking that we do need to take this issue very seriously, and the sooner we take it seriously the better.

– Stuart Russell

Video of presentation:

The field [of AI] has operated for over 50 years on one simple assumption: the more intelligent, the better. To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:

1. AI is likely to succeed.
2. Unconstrained success brings huge risks and huge benefits.
3. What can we do now to improve the chances of reaping the benefits and avoiding the risks?

Some organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT. I serve on the Advisory Boards of CSER and FLI.

Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures. The research questions are beginning to be formulated and range from highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to broadly philosophical.

– Stuart Russell (Quote Source)

Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 – The Future of Intelligence

Should science and society welcome ‘the singularity’ – the hypothetical moment when artificial intelligence surpasses human intelligence?
The discussion has been growing for decades, institutes dedicated to solving AI friendliness have sprung up, and more recently the ideas have found popular advocates. Certainly, superintelligent machines could help solve classes of problems that humans struggle with, but if not designed well they may cause more problems than they solve.

Is the question of fear or hope in AI a false dichotomy?

Ray Kurzweil

While Kurzweil agrees that AI risks are real, he argues that we already face comparable risks from biotechnology – I think Kurzweil believes we can solve the biotech threat and other risks through building superintelligence.

Stuart Russell believes that (a) we should know exactly what we want before we let the AI genie out of the bottle, and (b) AI safety is a technological problem in much the same way as the containment of nuclear fusion is a technological problem.

Max Tegmark says we should both welcome and fear the Technological Singularity, and we shouldn’t just bumble into it unprepared. All technologies have been double-edged swords – in the past we learned from mistakes (e.g. with out-of-control fires), but with AI we may only get one chance.

Harry Shum says we should focus on what we believe we can develop with AI in the next few decades. We find it difficult to talk concretely about AGI, and most of the public’s fears are around killer robots.

Maggie Boden

Maggie Boden poses an audience question: how will AI cope with our lack of development in ethical and moral norms?

Stuart Russell answers that machines have to come to understand what human values are. If the first pseudo-general-purpose AIs don’t get human values well enough, one may end up cooking its owner’s cat – and that could irreparably tarnish the AI and home-robot industry.

Kurzweil adds that human society is getting more ethical – it seems that statistically we are making ethical progress.

Max Tegmark

Max Tegmark brings up that intelligence is defined by the degree of ability to achieve goals, so if we are building highly intelligent AI we can’t ignore the question of which goals to give the system. We need to make AI systems understand what humans really want, not what they say they want.
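Tegmark’s point is essentially the objective misspecification problem. Here is a minimal toy sketch in Python (my own illustration, not from the panel – the actions and scores are made up) showing how an agent that literally optimizes a stated objective can diverge from what the human actually wanted:

```python
# Toy illustration of objective misspecification (hypothetical numbers).
# A cleaning robot is told to "maximize dust collected" (the stated goal),
# while the human actually wants a clean, undamaged room (the true goal).

actions = {
    "vacuum the floor":     {"stated_reward": 5,  "true_utility": 5},
    "empty bag, re-vacuum": {"stated_reward": 9,  "true_utility": -3},   # makes a mess just to re-collect it
    "knock over plant pot": {"stated_reward": 12, "true_utility": -10},  # spilled soil = lots of "dust" to collect
}

# A literal-minded optimizer maximizes the objective it was given...
best_by_stated = max(actions, key=lambda a: actions[a]["stated_reward"])
# ...whereas the human would have ranked actions by true utility.
best_by_truth = max(actions, key=lambda a: actions[a]["true_utility"])

print("Agent optimizing the stated goal picks:", best_by_stated)  # knock over plant pot
print("What the human actually wanted:        ", best_by_truth)   # vacuum the floor
```

The gap between the two answers is exactly the gap Wiener warned about: the purpose put into the machine was only a colorful imitation of the purpose we really desired.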

Harry Shum says that the most pressing ethical questions for AI systems concern data and user privacy.

Panelists: Harry Shum (EVP of Technology and Research, Microsoft), Max Tegmark (Cosmologist, MIT), Stuart Russell (Prof. of Computer Science, UC Berkeley) and Ray Kurzweil (Futurist, Director of Engineering at Google). Moderator: Margaret Boden (Prof. of Cognitive Science, University of Sussex).

This debate is from the 2015 edition of the Nobel Week Dialogue, held in Gothenburg, Sweden on 9 December.