Posts

Can we build AI without losing control over it? – Sam Harris

Sam Harris (author of The Moral Landscape and host of the Waking Up podcast) discusses the need for AI safety – while fun to think about, we are unable to "marshal an appropriate emotional response" to improvements in AI and automation and to the prospect of dangerous AI – it is a failure of intuition to respond to it as one would to a far-off sci-fi doom scenario.

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Elon Musk on the Future of AI

Elon Musk discusses what he sees as the best of the available alternative AI futures – one in which advanced AI technology is democratized, so that no single company has complete control over it – the situation could become very unstable if powerful AI were concentrated in the hands of a few.
Elon also discusses improving the neural link between humans and AI – because human bandwidth is so slow – and believes that effectively merging with AI would solve the AI control problem.
OpenAI seems to have a good team – and, as a 501(c)(3) non-profit, it (unlike many non-profits) does have a sense of urgency about increasing the odds of a friendly AI outcome.

Transcript of the section of the interview where Elon Musk discusses Artificial Intelligence:
Interviewer: Speaking of really important problems, AI. You have been outspoken about AI. Could you talk about what you think the positive future for AI looks like and how we get there?
Elon: Okay, I mean I do want to emphasize that this is not really something that I advocate or this is not prescriptive. This is simply, hopefully, predictive. Because you will hear some say, well, like this is something that I want to occur instead of this is something I think that probably is the best of the available alternatives. The best of the available alternatives that I can come up with, and maybe someone else can come up with a better approach or better outcome, is that we achieve democratization of AI technology. Meaning that no one company or small set of individuals has control over advanced AI technology. I think that’s very dangerous. It could also get stolen by somebody bad, like some evil dictator or country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation, I think, if you’ve got any incredibly powerful AI. You just don’t know who’s going to control that. So it’s not that I think that the risk is that the AI would develop a will of its own right off the bat. I think the concern is that someone may use it in a way that is bad. Or even if they weren’t going to use it in a way that’s bad but somebody could take it from them and use it in a way that’s bad, that, I think, is quite a big danger. So I think we must have democratization of AI technology to make it widely available. And that’s the reason that obviously you, me, and the rest of the team created OpenAI was to help spread out AI technology so it doesn’t get concentrated in the hands of a few. But then, of course, that needs to be combined with solving the high-bandwidth interface to the cortex.
Interviewer: Humans are so slow.
Elon: Humans are so slow. Yes, exactly. But we already have a situation in our brain where we’ve got the cortex and the limbic system… The limbic system is kind of a…I mean, that’s the primitive brain. That’s kind of like your instincts and whatnot. And the cortex is the thinking upper part of the brain. Those two seem to work together quite well. Occasionally, your cortex and limbic system will disagree, but they…
Interviewer: It generally works pretty well.
Elon: Generally works pretty well, and it's like rare to find someone who…I've not found someone who wishes to either get rid of the cortex or get rid of the limbic system.
Interviewer: Very true.
Elon: Yeah, that’s unusual. So I think if we can effectively merge with AI by improving the neural link between your cortex and your digital extension of yourself, which already, like I said, already exists, just has a bandwidth issue. And then effectively you become an AI-human symbiote. And if that then is widespread, with anyone who wants it can have it, then we solve the control problem as well, we don’t have to worry about some evil dictator AI because we are the AI collectively. That seems like the best outcome I can think of.
Interviewer: So, you’ve seen other companies in their early days that start small and get really successful. I hope I never get this asked on camera, but how do you think OpenAI is going as a six-month-old company?
Elon: I think it’s going pretty well. I think we’ve got a really talented group at OpenAI.
Interviewer: Seems like it.
Elon: Yeah, a really talented team and they’re working hard. OpenAI is structured as a 501(c)(3) non-profit. But many non-profits do not have a sense of urgency. It’s fine, they don’t have to have a sense of urgency, but OpenAI does because I think people really believe in the mission. I think it’s important. And it’s about minimizing the risk of existential harm in the future. And so I think it’s going well. I’m pretty impressed with what people are doing and the talent level. And obviously, we’re always looking for great people to join in the mission.

The full interview is available in video/audio and text format at Y Combinator as part of the How to Build the Future series: https://www.ycombinator.com/future/elon/


Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders' paper 'An overview of models of technological singularity'.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (such as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge's seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term 'singularity' has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical.
– Anders Sandberg

A list of models described in the paper (a brief numerical sketch contrasting some of these growth regimes follows the list):

A. Accelerating change

Exponential or superexponential technological growth (with linked economic growth and social change) (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute) 1

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge, (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind, such as humanity being succeeded by posthuman or artificial intelligences, a punctuated equilibrium transition, or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different. (Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible 2 )
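To make the contrast between several of these growth regimes concrete, here is a minimal numerical sketch. It is not from Sandberg's paper; the growth laws, parameter values, and function names are illustrative assumptions. It integrates dx/dt = r·x^a (a = 1 gives the exponential growth of model A; a > 1 diverges in finite time, the limit described in model I) alongside a logistic curve dx/dt = r·x·(1 − x/K), whose inflexion point corresponds to model H.

```python
# Minimal, illustrative sketch (not from Sandberg's paper): the growth laws and
# parameter values below are assumptions chosen only to contrast the regimes.

def integrate(deriv, x0, dt=0.001, t_max=10.0, cap=1e12):
    """Forward-Euler integration; stops early if x exceeds `cap` (finite-time blow-up)."""
    xs, x, t = [], x0, 0.0
    while t <= t_max and x < cap:
        xs.append((t, x))
        x += deriv(x) * dt
        t += dt
    return xs

r, K = 1.0, 100.0
curves = {
    "sub-exponential (a=0.5)":      integrate(lambda x: r * x ** 0.5, x0=1.0),
    "exponential (a=1.0, model A)": integrate(lambda x: r * x, x0=1.0),
    "hyperbolic (a=1.5, model I)":  integrate(lambda x: r * x ** 1.5, x0=1.0),
    "logistic (model H)":           integrate(lambda x: r * x * (1 - x / K), x0=1.0),
}

for name, xs in curves.items():
    t_end, x_end = xs[-1]
    print(f"{name:30s} reached x = {x_end:10.3g} at t = {t_end:5.2f}")
```

Run as a plain script, this prints where each curve ends up: the logistic curve saturates near K, the exponential curve is still growing steadily at t_max, and the hyperbolic curve hits the cap well before t_max – the finite-time divergence that the 'mathematical singularity' reading of the term refers to.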


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Sam Harris on AI Implications – The Rubin Report

A transcription of Sam Harris' discussion of the implications of Strong AI during a recent appearance on the Rubin Report. Sam contrasts narrow AI with strong AI, discusses AI safety and the possibility of rapid AI self-improvement, notes that AI superintelligence may seem alien to us, and raises the idea that it is important to solve consciousness before superintelligence arrives (especially if superintelligence could wipe us out), in the hope of a future inclusive of the value that conscious experience entails – instead of a mechanized future with no consciousness to experience it.
I explored the idea of consciousness in artificial intelligence in 'The Knowledge Argument Applied to Ethics' – which deals with whether an AI will act differently if it can experience 'raw feels' – and this seems to me to be important to AI safety and (if we are ethically serious, and assume there is value in 'raw feels') to preserving a future of value.

Dave Rubin asks the question: “If we get to a certain point with Artificial Intelligence and robots become aware and all that stuff… this can only end horribly right? …it will be pretty good for a while, but then at some point, by their own self-preservation basically, they will have to turn on their masters… I want the answer right now…”

Sam Harris responds: "..I worry about it [AI] to that degree but not quite in those terms. The concern for me is not that we will build superintelligent AI or superintelligent robots which initially seem to work really well and then by some process we don't understand will become malevolent and kill us – you know – the terminator movies. That's not the concern…. Most people who are really worried about this – that's not really what they are worried about. Although that's not inconceivable – it's almost worse than that. What's more reasonable is that… as we're building right now… we're building machines that embody intelligence to an increasing degree.. But it's narrow AI.. so the best chess player on earth is a computer but it can't play tic-tac-toe – it's narrowly focused on a specific kind of goal – and that's broadening more and more as we get machines that can play many different kinds of games, for instance, well. So we're creeping up on what is now called 'general intelligence' – the ability to think flexibly in multiple domains – where your learning in one domain doesn't cancel your learning in another – and so it's something more like how human beings can acquire many different skills and engage in many different modes of cognition and not have everything fall apart – that's the Holy Grail of artificial intelligence – we want 'general intelligence' and something that's robust – it's not brittle…it's something that if parts of it fail it's not catastrophic to the whole enterprise… and I think there is no question that we will get there, but there are many false assumptions about the path ahead. One is that what we have now is not nearly as powerful as the human mind – and we're just going to incrementally get to something that is essentially a human equivalent. Now I don't see that as the path forward at all… all of our narrow intelligence … much of our narrow intelligence insomuch as we find it interesting is already superhuman, right, so like we have your calculator on your phone and it's superhuman for arithmetic – and the chess playing program is superhuman – it's not almost as good as a human – it's better than any human on earth and will always be better than any human on earth right? Um, and more and more we will get that piecemeal effort of superhuman narrow AIs and when this is ever brought together in a general intelligence what you're going to have is not just another ordinary human level intelligence – you're going to have something that in some ways may be radically foreign – in some ways it's not going to be everything about us emulated in the system – but whatever is intelligent there is going to be superhuman almost by definition and if it isn't at t=0 it's going to be the next day – it's just going to improve so quickly and when you talk about a system that can improve itself – if we ever build intelligent AI that then becomes the best source of its own improvement – so something that can improve its source code better than any human could improve its source code – once we start that process running, and the temptation to do that will be huge, then we have – what has been worried about now for 75 years – the prospect of an intelligence explosion – where the birth of this intelligence could get away from us – it's now improving itself in a way that is unconstrained.
So people talk about 'the Singularity' now which is what happens when that takes off – it's a horizon line in technological innovation that we can't see beyond – and we can't predict beyond because it's now just escaping – you're getting 1000s of years of progress in minutes – right, if in fact this process gets initiated – and so it's not that we have superhuman robots that are just well behaved and it goes on for decades and then all of a sudden they get quirky and they take their interests to heart more than they take ours to heart and … you know the game is over. I think what is more likely is we'll build intelligent systems that are so much more competent than we are – that even the tiniest misalignment between their goals and our own will ultimately make them completely hostile to our well-being and our survival."

The video of the conversation is here, more of the transcription below the video

Dave Rubin: “That’s scarier, pretty much, than what I laid out right? I laid out sort of a futuristic .. ahh there going to turn on us and start shooting us one day maybe because of an error or something – but you’re laying out really that they would… almost at some point that they would, if they could become aware enough, that they simply wouldn’t need us – because they would become ‘super-humans’ in effect – and what use would we serve for them at some point right? (maybe not because of consciousness…)”

Sam Harris: “I would put consciousness and awareness aside because – I mean it might be that consciousness comes along for the ride – it may be the case that you can’t be as intelligent as a human and not be conscious – but I don’t know if that’s right…”

Dave Rubin: “That’s horizon mind stuff right?”

Sam Harris: “Well I just don’t know if that’s actually true – it’s quite possible that we could build something as intelligent as we are – in a sense that it can meet any kind of cognitive or perceptual challenge or logical challenge we would pose it better than we can – but there is nothing that is like to be that thing – if the lights aren’t on it doesn’t experience happiness, though it might say it experiences happiness right? I think what will happen is that we will definitely – you know the notion of a Turing test?”

Dave Rubin: “This is like, if you type – it seems like it’s responding to you but it’s not actually really…”

Sam Harris: “Well, Allan Turing, the person who is more responsible than anyone else for giving us computers once thought about what it would mean to have intelligent machines – and he proposed what has been come to be known as the ‘Turing Test’.”

Dave Rubin: “It’s like the chat right?”

Sam Harris: "Yeah but .. when you can't tell whether you're interacting with a person or a computer – that computer in that case is passing the Turing Test – and as a measure of intelligence – that's certainly a good proxy for a more detailed analysis of what it would mean to have machine intelligence… if I'm talking to something at length about anything that I want – and I can't tell it's not a person, and it turns out it's somebody's laptop – that laptop is passing the Turing Test. It may be that you can pass the Turing Test without even the subtlest glimmer of consciousness arising. Right, so that laptop is no more conscious than that glass of water is – right? That may in fact be the case, it may not be though – so I just don't know there. If that's the case, for me that's just the scariest possibility – because what's happening is .. I even heard at least one computer scientist and it was kind of alarming but I don't have a deep argument against it – if you assume that consciousness comes along for the ride, if you assume that anything more intelligent than us that we give rise to – either intentionally or by happenstance – is more conscious than we are, experiences a greater range of creative states – in well-being and can suffer more – by definition, in my view ethically, it becomes more important… if we're more important than Cocker Spaniels or ants or anything below us – then if we create something that's obviously above us in every conceivable way – and it's conscious – right?"

Dave Rubin: "It would view us in the same way as we view anything that [???] us"

Sam Harris: “It’s more important than us right? And I’d have to grant that even though I’d not be happy about it deciding to annihilate us… I don’t have a deep ethical argument against why… I can’t say from a god’s eye view that it’s bad that we gave birth to super beings that then trampled on us – but then went on to become super in any ways we can’t possibly imagine – just as, you know, bacteria can’t imagine what we’re up to – right. So there are some computer scientists who kind of solve the fears, or silence the fears with this idea – that say just listen, if we build something that’s god like in that respect – we will have given birth to – our descendants will not be apes, they will be gods, and that’s a good thing – it’s the most beautiful thing – I mean what could be more beautiful than us creating the next generation of intelligent systems – that are infinitely profound and wise and knowledgeable from our point of view and are just improving themselves endlessly up to the limit of the resources available in the galaxy – what could be more rewarding than that?”

Dave Rubin: "Sounds pretty good"

Sam Harris: "And the fact that we all destroyed ourselves in the process because we were the bugs that hit their windshield when they were driving off – that's just the price you pay. Well ok that's possible but it's also conceivable that all that could happen without consciousness right? That we could build mere mechanism that is competent in all the ways so as to plow us under – but that there is no huge benefit on the side of deep experience and well being and beauty and all that – it's all just blind mechanism, which is intelligent mechanism .. in the same way as the best chess playing program – which is highly intelligent with respect to chess but nobody thinks is conscious. So that's the theory … but on the way there – there are many weird moments where I think we will build machines that will pass the Turing Test – which is to say that they will seem conscious to us, they will seem to be able to detect our emotions and respond to our emotions, you know will say 'you know what – you look tired, and maybe you should take a nap' – and it will be right you know, it will be a better judge of your emotions than your friends are – right? And yet at a certain point certainly if you emulate this in a system whether it's an avatar online or an actual robot that has a face right? That can display its own emotion and we get out of the uncanny valley where it just looks creepy and begins to look actually beautiful and rewarding and natural – then our intuitions that we are in dialog with a conscious other will be played upon perfectly right? .. and I think we will lose sight of it being an interesting problem – it will no longer be interesting to wonder whether our computers are conscious because they will be demonstrating as much as any person has ever demonstrated it – and in fact even more right? And unless we understand exactly how consciousness emerges in physical systems, at some point along the way of developing that technology – I don't think we will actually know that they're conscious – and that will be interesting – because we will successfully fool ourselves into just assuming – it will seem totally unethical to kill your robot off – it will be a murder worse than you killing a person because at a certain point it will be the most competent person – you know, the wisest person."

Dave Rubin: "Sam, I don't know if you're writing a book about this – but you clearly should write a book about this – I'll write one of the intros or something – there you go. Well listen we did two hours here – so I'm not going to give you the full Rogan treatment"

Sam Harris: "We did a half Rogan"

Dave Rubin: "We did a half Rogan – but you know you helped me launch the first season – you're launching the second season – legally you have to now launch every season.."

* Some breaks in conversation (sentences, words, ums and ahs) have been omitted to make it easier to read

Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I had a discussion with John Danaher on the Hedonistic Imperative & Superintelligence. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their futures, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford Encyclopedia of Philosophy on the Grounds for Moral Status).

As time goes on, the notion of strong artificial intelligence leading to superintelligence (which may herald something like an intelligence explosion) and ideas like the Hedonistic Imperative become less like sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints brings me a paradoxical feeling of triumph and disempowerment.

John's concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act – and that there is a danger that the outcome of HI or an intelligence explosion may be sentient life that is made very happy forever but is unable to make choices, with a focus on a future based entirely on bliss whilst ignoring other aspects of what makes for a valuable or worthwhile existence.

So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.

If the argument for moral agency is completely toppled by the argument against free will then I can see why there would be no reason for it – and that bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.

Also, the idea that moral agency and novelty should be ranked as auxiliary aspects to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that they (agency/novelty) could easily be swapped out for most non-optimal moral agents in the quest for less suffering and more bliss.
The idea that, upon evaluating the grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, should we, when the time comes, be committed to resigning all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have a meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?

There probably should be some more discussion on trade-offs between moral agency, peak experience and novelty.

Discussion in this video here starts at 24:02

Below is the whole interview with John Danaher:

The Singularity & Prediction – Can there be an Intelligence Explosion? – Interview with Marcus Hutter

Can there be an Intelligence Explosion?  Can Intelligence Explode?
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. What could it mean for intelligence to explode?
We need to provide more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. – Irving John Good, 'Speculations Concerning the First Ultraintelligent Machine' (1965)

Paper: M.Hutter, Can Intelligence Explode, Journal of Consciousness Studies, Vol.19, Nr 1-2 (2012) pages 143–166.
http://www.hutter1.net/publ/singularity.pdf
http://arxiv.org/abs/1202.6177

See also:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/

Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

 


 

Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M.Hutter, Can Intelligence Explode, Journal of Consciousness Studies, Vol.19, Nr 1-2 (2012) pages 143–166.
http://www.hutter1.net/publ/singularity.pdf

Also see:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/
http://2012.singularitysummit.com.au/2012/08/panel-intelligence-substrates-computation-and-the-future/
http://2012.singularitysummit.com.au/2012/01/marcus-hutter-to-speak-at-the-singularity-summit-au-2012/
http://2012.singularitysummit.com.au/agenda

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.
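As a rough illustration of that "maximize expected future reward up to a fixed horizon" idea – and only that: Hutter's AIXI takes the expectation over a Solomonoff mixture of all computable environments and is incomputable – here is a toy sketch. The tiny environment model, reward function, and horizon below are invented for the example; it simply brute-forces an expectimax over a short horizon against a single known environment distribution.

```python
# Toy sketch of the general reinforcement-learning setting described above.
# NOTE: this is not AIXI. Hutter's agent mixes over all computable environments
# (a Solomonoff prior) and is incomputable; here a single small, invented
# environment model stands in for that mixture, purely to illustrate
# "choose the action that maximizes expected reward up to a fixed horizon".

ACTIONS = [0, 1]
PERCEPTS = [0, 1]   # each percept carries a reward (see below)
HORIZON = 3         # fixed future horizon

def env_prob(percept, history, action):
    """Assumed environment model: P(percept | history, action).
    This toy environment tends to reward repeating the previous action."""
    if history and action == history[-1][0]:
        return 0.8 if percept == 1 else 0.2
    return 0.5

def reward(percept):
    return float(percept)  # percept 1 is worth 1, percept 0 is worth 0

def value(history, depth):
    """Expectimax value: best expected sum of rewards over the remaining depth."""
    if depth == 0:
        return 0.0
    return max(
        sum(env_prob(p, history, a) * (reward(p) + value(history + [(a, p)], depth - 1))
            for p in PERCEPTS)
        for a in ACTIONS)

def best_action(history, depth=HORIZON):
    """The action whose expected reward-to-go is largest."""
    return max(ACTIONS, key=lambda a: sum(
        env_prob(p, history, a) * (reward(p) + value(history + [(a, p)], depth - 1))
        for p in PERCEPTS))

print("Best first action:", best_action([]))
```

Replacing the hand-written env_prob with a universal prior over all computable environments is exactly the step that makes Hutter's formulation fully general – and also what makes it uncomputable, so practical work centres on approximations.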


Professor Marcus Hutter

Research interests:

Artificial intelligence, Bayesian statistics, theoretical computer science, machine learning, sequential decision theory, universal forecasting, algorithmic information theory, adaptive control, MDL, image processing, particle physics, philosophy of science.

Bio:

Marcus Hutter is Professor in the RSCS at the Australian National University in Canberra, Australia. He received his PhD and BSc in physics from the LMU in Munich and a Habilitation, MSc, and BSc in informatics from the TU Munich. Since 2000, his research at IDSIA and now ANU is centered around the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in 100+ publications and several awards. His book “Universal Artificial Intelligence” (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (50’000€ H-prize).

Vernor Vinge on the Technological Singularity

What is the Singularity? Vernor Vinge speaks about technological change, offloading cognition from minds into the environment, and the potential of Strong Artificial Intelligence.

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." – "The Coming Technological Singularity", Vernor Vinge, 1993

Vernor Vinge popularised and coined the term “Technological Singularity” in his 1993 essay “The Coming Technological Singularity“, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended,” such that no current models of reality are sufficient to predict beyond it.

courtesy of the Imaginary Foundation

Vinge published his first short story, “Bookworm, Run!”, in the March 1966 issue of Analog Science Fiction, then edited by John W. Campbell. The story explores the theme of artificially augmented intelligence by connecting the brain directly to computerised data sources. He became a moderately prolific contributor to SF magazines in the 1960s and early 1970s. In 1969, he expanded two related stories, (“The Barbarian Princess”, Analog, 1966 and “Grimm’s Story”, Orbit 4, 1968) into his first novel, Grimm’s World. His second novel, The Witling, was published in 1975.

Vinge came to prominence in 1981 with his novella True Names, perhaps the first story to present a fully fleshed-out concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others.

 

Vernor Vinge

Image Courtesy – Long Now Foundation

Vernor Vinge on the Turing Test, Artificial Intelligence

Preface

On the coat-tails of the blockbuster film "The Imitation Game" I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to the idea that the Turing Test may someday show that machines would ostensibly be (at least in controlled circumstances) indistinguishable from humans.
Vernor Vinge is a mathematician and science fiction author who is well known for many Hugo Award-winning novels and novellas*   and his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.

 

Alan Turing and the Computability of Intelligence

Adam Ford: Alan Turing is considered the "Father of Theoretical Computer Science and Artificial Intelligence" – his view about the potential of AI contrasts with much of the skepticism that has subsequently arisen. What is at the root of this skepticism?

Vernor Vinge: The emotional source of the skepticism is the ineffable feeling that many (most?) people have against the possibility that self-awareness could arise from simple, constructed devices.

 

AF: Many theorists feel that the combined talents of pure machines and humans will always produce more creative and therefore useful output – what are your thoughts?

VV: When it comes to intelligence, biology just doesn’t have legs. _However_ in the near term, teams of people plus machines can be much smarter than either — and this is one of the strongest reasons for being optimistic that we can manage the new era safely, and project that safety into the farther future.

 

AF: Is the human brain essentially a computer?

VV: Probably yes, but if not the lack can very likely be made up for with machine improvements that we humans can devise.

 

AF: Even AI critics John Searle and Hubert Dreyfus (author of "What Computers (Still) Can't Do") agree that a brain simulation is possible in theory, though they argue that merely mimicking the functioning brain would in itself be an admission of ignorance (concerning intelligence) – what are your thoughts?

VV: The question of whether there is self-awareness behind a mimic may be the most profound issue, but for almost all practical purposes it isn't relevant: in a few years, I think we will be able to make machines that can run circles around any human mind by all externally measured criteria. So what if no one is really home inside that machine?

Offhand, I can think of only one practical import to the answer, but that _is_ something important: If such minds are self-aware in the human sense, then uploads suddenly become very important to us mortality-challenged beings.

For reductionists interested in _that_ issue, some confidence might be achieved with superintelligence architectures that model those structures in our minds that reductionists come to associate with self-awareness. (I can imagine this argument being carried on by the uploaded supermind children of Searle and Moravec — a trillion years from now when there might not be any biological minds around whatsoever.)

 

AF: Do you think Alan Turing’s reasons for believing in the potential of AI are different from your own and other modern day theorists?  If so in what ways?

VV: My guess is there is not much difference.

 

AF: Has Alan Turing and his work influenced your writing? If it has, how so?

VV: I'm not aware of direct influence. As a child, what chiefly influenced me was the science-fiction I was reading! Of course, those folks were often influenced by what was going on in science and math and engineering of the time.

Alan Turing has had a multitude of incarnations in science fiction…   I think that besides being a broadly based math and science genius, Turing created accessible connections between classic philosophical questions and current issues.

 

AF: How do you think Alan Turing would respond to the specific concept of the Technological Singularity as described by you in your paper “The Coming Technological Singularity: How to Survive in the Post-Human Era“?

VV: I’d bet that Turing (and many AI pioneers) had extreme ideas about the consequences of superhuman machine intelligence. I’m not sure if Turing and I would agree about the potential for Intelligence Amplification and human/machine group minds.

I’d be _very_ interested in his reaction to modern analysis such as surveyed in Bostrom’s recent _Superintelligence_ book.

 

AF: In True Names, agents seek to protect their true identity. The guardian of the Coven’s castle is named ‘Alan Turing’ – what was the reason behind this?

VV: It was a tip of the hat in Turing's direction. By the time I wrote this story I had become quite aware of Alan Turing (contrasting with my childhood ignorance that I mentioned earlier).

 

AF: Your first story Bookworm, Run! was themed around brute-forcing simpler-than-human intelligence up to super-intelligence (in it a chimpanzee's intelligence is amplified). You also explore the area of intelligence amplification in Marooned in Realtime.
Do you think it is possible for a Singularity to bootstrap from brute forcing simple cognitive models? If so do you think Super-Intelligence will be achieved through brute-forcing simple algorithms?

VV: I view “Intelligence Amplification” (IA) as a finessing of the hardest questions by building on top of what already exists. Thus even UI design lies on the path to the Singularity. One could argue that Intelligence Amplification is the surest way of insuring humanity in the super-intelligence (though some find that a very scary possibility in itself).

 

The Turing Test and Beyond

AF: Is the Turing Test important? If so, why, and how does its importance match up to tracking progress in Strong AI?

VV: In its general form, I regard the Turing Test as a marvelous, zen-like, bridge between reductionism and the inner feelings most people have about their own self-awareness.  Bravo Dr. Turing!

 

AF: Is a text conversation ever a valid test for intelligence? Is blackbox testing enough for a valid test for intelligence?

VV: “Passing the Turing Test” depends very much on the setup:
a) The examining human (child? adult? fixated or afflicted adult? –see Sherry Turkle’s examples of college students who passed a chatbot).
b) The duration of the test.
c) The number of human examiners participating.
d) Restrictions on the examination domain.

In _The Emperor’s New Mind_, Penrose has a (mostly negative) critique of the Turing Test. But at the end he says that if the test was very broad, lasting years, and convincing to him (Penrose), then it might be meaningful to talk about a “pass grade”.

 

AF: The essence of Roger Penrose’s argument (in the Emperor’s New Mind)
–  It is impossible for a Turing machine to enumerate all possible Gödel sentences. Such a program will always have a Gödel sentence derivable from its program which it can never discover
–  Humans have no problem discovering these sentences and seeing the truth of them
And he concludes that humans are not reducible to Turing machines. Do you agree with Roger's assessment – Are humans not reducible to Turing machines?

VV: This argument depends on comparing a mathematical object (the Turing Machine) with whatever kind of object the speaker considers a “human mind” to be.  As a logical argument, it leaves me dubious.

 

AF: Are there any existing interpretations of the Turing Test that you favour?

VV: I think Penrose’s version (described above) is the most important.

In conversation, the most important thing is that all sides know which flavor of the test they are talking about 🙂

 

AF: You mentioned it has been fun tracking Turing Test contests, what are your thoughts on attempts at passing the Turing Test so far?

VV: So far, it seems to me that the philosophically important thresholds are still far away. Fooling certain people, or fooling people for short periods of time seems to have been accomplished.

 

AF: Is there any specific type of intelligence we should be testing machines for?

VV: There are intelligence tests that would be very interesting to me, but I'd rather not call them versions of the Turing Test. For instance, I think we're already in the territory where more and more sorts of superhuman creativity and "intuition" are possible.

I think there will also be performance tests for IA and group mind projects.

 

AF: Some argue that testing for ‘machine consciousness’ is more interesting – what are your thoughts?

VV: Again, I’d keep this possibility separate from Turing Test issues, though I do think that a being that could swiftly duplicate itself and ramp intellect up or down per resource and latency constraints would have a vastly different view of reality compared to the severe and static time/space/mortality restrictions that we humans live with.

 

AF: The Turing Test seems like a competitive sport, though some interpretations of the Turing Test set conditions which seem quite low. The competitive nature of how the Turing Test is staged seems to me to select for the cheapest and least sophisticated methods of fooling judges on a Turing Test panel.

VV: Yes.

 

AF: Should we be focusing on developing more complex and adaptive Turing style tests (more complex measurement criteria? more complex assessment)? What alternatives to a Turing Test competition (if any) would you suggest to motivate regular testing for machine intelligence?

VV: The answers to these questions may grow out of hard engineering necessity more than from the sport metaphor. Going forward, I imagine that different engineering requirements will acquire various tests, but they may look more like classical benchmark tests.

 

Tracking Progress in Artificial Intelligence

AF: Why is tracking progress towards AI important?

VV: Up to a point it could be important for the sort of safety reasons Bostrom discusses in _Superintelligence_. Such tracking could also provide some guidance for machine/human/society teams that might have the power to guide events along safe paths.

 

AF: What do you see as the most useful mechanisms for tracking progress towards a) human equivalence in AI, b) a Technological Singularity?

VV: The approach to human equivalence might be tracked with a broad range of tests. Such would also apply to the Singularity, but for a soft takeoff, I imagine there would be a lot of economic effects that could be tracked. For example:
–  trends in employment of classic humans, augmented humans, and computer/human teams;
–  trends in what particular jobs still have good employment;
–  changes in the personal characteristics of the most successful CEOs.

Direct tests of automation performance (such as we’ve discussed above) are also important, but as we approach the Singularity, the center of gravity shifts from the programmers to the programs and how the programs are gaming the systems.

 

AF: If you had a tardis and you could bring Alan Turing forward into the 21st century, would he be surprised at progress in AI?  What kinds of progress do you think he would be most interested in?

VV: I don’t have any special knowledge of Turing, but my guess is he would be pleased — and he would want to _understand_ by becoming a super himself.

 

AF: If and when the Singularity becomes imminent – is it likely that the majority of people will be surprised?

VV: A hard takeoff would probably be a surprise to most people. I suspect that a soft takeoff would be widely recognized.

 

Implications

AF: What opportunities could we miss if we are not well prepared (This includes opportunities for risk mitigation)?

VV: Really, the risk mitigation is the serious issue. Other categories of missed opportunities will probably be quickly fixed by the improving tech.  For pure AI, some risk mitigation is the sort of thing MIRI is researching.

For pure AI, IA, and group minds, I think risk mitigation involves making use of the human equivalent minds that already exist in great numbers (namely, the human race). If these teams and early enhancements recognized the issues, they can form a bridge across to the more powerful beings to come.

 

AF: You spoke about an AI Hard Takeoff as being potentially very bad – can you elaborate here?

VV: A hard takeoff is too fast for normal humans to react and accommodate to.  To me, a Hard Takeoff would be more like an explosion than like technological progress. Any failure in mitigation planning is suddenly beyond the ability of normal humans to fix.

 

AF: What stood out for you after reading Nick Bostrom’s book ‘Superintelligence – paths, dangers, strategies’?

VV: Yes. I think it’s an excellent discussion especially of the pure AI path to superintelligence. Even people who have no intense interest in these issues would find the first few chapters interesting, as they sketch out the problematic issues of pure AI superintelligence — including some points that may have been missed back in the twentieth century. The book then proceeds to a fascinating analysis of how to cope with these issues.

My only difference with the analysis presented is that while pure AI is likely the long term important issue, there could well be a period (especially in the case of a Soft Takeoff) where the IA and groupmind trajectories are crucial.


Vernor Vinge at Los Con 2012

Notes:
* Hugo award winning novels & novellas include: A Fire Upon the Deep (1992), A Deepness in the Sky (1999), Rainbows End (2006), Fast Times at Fairmont High (2002), The Cookie Monster (2004), and The Peace War (1984).

Also see video interview with Vernor Vinge on the Technological Singularity.