AI: The Story So Far – Stuart Russell

Awesome to have Stuart Russell discussing AI safety – a very important topic. For too long people have associated AI safety with Terminator scenarios – unfortunately, human nature seems such that people often don’t give themselves permission to take non-mainstream ideas seriously unless they see a tip of the hat from an authority figure.

During the presentation Stuart brings up a nice quote by Norbert Wiener:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

– Norbert Wiener

P.S. Stuart Russell co-authored Artificial Intelligence: A Modern Approach with Peter Norvig – arguably the most popular textbook on AI.

The lecture was presented at the 2016 Colloquium Series on Robust and Beneficial AI (CSRBAI) hosted by the Machine Intelligence Research Institute (MIRI) and Oxford’s Future of Humanity Institute (FHI).

What I’m finding is that senior people in the field who have never publicly evinced any concern before are privately thinking that we do need to take this issue very seriously, and the sooner we take it seriously the better.

– Stuart Russell

Video of presentation:


The field [of AI] has operated for over 50 years on one simple assumption: the more intelligent, the better. To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:

1. AI is likely to succeed.
2. Unconstrained success brings huge risks and huge benefits.
3. What can we do now to improve the chances of reaping the benefits and avoiding the risks?

Some organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT. I serve on the Advisory Boards of CSER and FLI.

Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures. The research questions are beginning to be formulated and range from highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to broadly philosophical.

– Stuart Russell (Quote Source)
