AI: The Story So Far – Stuart Russell
Awesome to have Stuart Russell discussing AI safety – a very important topic. For too long people have associated AI safety with the Terminator – and unfortunately, the human condition seems to be such that people often don’t give themselves permission to take non-mainstream ideas seriously unless they see a tip of the hat from an authority figure.
During the presentation Stuart brings up a nice quote by Norbert Wiener.
P.s. Stuart Russell co-authored Artificial Intelligence: A Modern Approach with Peter Norvig – arguably the most popular textbook on AI.
The lecture was presented at the 2016 Colloquium Series on Robust and Beneficial AI (CSRBAI) hosted by the Machine Intelligence Research Institute (MIRI) and Oxford’s Future of Humanity Institute (FHI).
The field [of AI] has operated for over 50 years on one simple assumption: the more intelligent, the better. To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:
1. AI is likely to succeed.
2. Unconstrained success brings huge risks and huge benefits.
3. What can we do now to improve the chances of reaping the benefits and avoiding the risks?

Some organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT. I serve on the Advisory Boards of CSER and FLI.
Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures. The research questions are beginning to be formulated and range from highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to broadly philosophical.
– Stuart Russell (Quote Source)
UPDATE – Interview
I got to meet Stuart Russell at IJCAI in 2017, and he agreed to do an interview, which turned out very nicely. Here are the results: