Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (such as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
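The claim that even small increasing returns produce radical growth can be illustrated numerically. The sketch below is an illustration, not a model from the paper: it integrates a toy growth law dT/dt = a·T^(1+ε) with forward Euler and shows that any ε > 0 makes successive doublings arrive faster and faster, unlike the constant doubling time of plain exponential growth (ε = 0).

```python
# Toy numerical sketch (an illustration, not from the paper): growth with
# increasing returns, dT/dt = a * T**(1 + eps). With eps = 0 growth is plain
# exponential (constant doubling time); any eps > 0 accelerates each doubling.
def doubling_times(eps, a=1.0, T0=1.0, dt=1e-4, doublings=5):
    """Times at which T first reaches 2*T0, 4*T0, ... under forward Euler."""
    T, t, times, target = T0, 0.0, [], 2 * T0
    while len(times) < doublings:
        T += a * T ** (1 + eps) * dt   # one Euler step of dT/dt = a*T^(1+eps)
        t += dt
        if T >= target:
            times.append(t)
            target *= 2
    return times

exp_times = doubling_times(0.0)   # exponential: evenly spaced doublings
sup_times = doubling_times(0.2)   # increasing returns: doublings accelerate
sup_gaps = [b - a for a, b in zip(sup_times, sup_times[1:])]
print(all(later < earlier for earlier, later in zip(sup_gaps, sup_gaps[1:])))  # True
```

Even a modest exponent of 1.2 is enough to shrink each successive doubling interval, which is the "radical growth" behaviour the abstract describes.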

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as a subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical. – Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economic growth and social change). (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge, (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind such as humanity being succeeded by posthuman or artificial intelligences,
a punctuated equilibrium transition or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different.
(Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
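As a hedged illustration (not from the paper): the textbook example of such a mathematical singularity is the differential equation dx/dt = x², whose exact solution blows up at a finite time.

```python
# "Infinite progress in finite time": dx/dt = x**2 has the exact solution
# x(t) = x0 / (1 - x0*t), which diverges as t approaches 1/x0.
# This is the literal mathematical singularity that, as noted above,
# can be dismissed as unphysical.
def x(t, x0=1.0):
    return x0 / (1 - x0 * t)

print(round(x(0.9), 6))    # 10.0: tenfold growth at 90% of the way there
print(round(x(0.999), 6))  # 1000.0: unbounded as t -> 1/x0 = 1
```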


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel
b) Donating via Patreon, and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future:

Vernor Vinge on the Turing Test, Artificial Intelligence


On the coat-tails of the blockbuster film “The Imitation Game” I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test.  The title of the movie refers to the idea that the Turing Test may someday show machines to be (at least in controlled circumstances) indistinguishable from humans.
Vernor Vinge is a mathematician and science fiction author who is well known for many Hugo Award-winning novels and novellas*   and his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.


Alan Turing and the Computability of Intelligence

Adam Ford: Alan Turing is considered the “Father of Theoretical Computer Science and Artificial Intelligence” – his view about the potential of AI contrasts with much of the skepticism that has subsequently arisen.  What is at the root of this skepticism?

Vernor Vinge: The emotional source of the skepticism is the ineffable feeling that many (most?) people have against the possibility that self-awareness could arise from simple, constructed devices.


AF: Many theorists feel that the combined talents of pure machines and humans will always produce more creative and therefore useful output – what are your thoughts?

VV: When it comes to intelligence, biology just doesn’t have legs. _However_ in the near term, teams of people plus machines can be much smarter than either — and this is one of the strongest reasons for being optimistic that we can manage the new era safely, and project that safety into the farther future.


AF: Is the human brain essentially a computer?

VV: Probably yes, but if not the lack can very likely be made up for with machine improvements that we humans can devise.


AF: Even AI critics John Searle and Hubert Dreyfus (i.e. “What Computers (Still) Can’t Do”) agree that a brain simulation is possible in theory, though they argue that merely mimicking the functioning brain would in itself be an admission of ignorance (concerning intelligence) – what are your thoughts?

VV: The question of whether there is self-awareness behind a mimick may be the most profound issue, but for almost all practical purposes it isn’t relevant: in a few years, I think we will be able to make machines that can run circles around any human mind by all externally measured criteria. So what if no one is really home inside that machine?

Offhand, I can think of only one practical import to the answer, but that _is_ something important: If such minds are self-aware in the human sense, then uploads suddenly become very important to us mortality-challenged beings.

For reductionists interested in _that_ issue, some confidence might be achieved with superintelligence architectures that model those structures in our minds that reductionists come to associate with self-awareness. (I can imagine this argument being carried on by the uploaded supermind children of Searle and Moravec — a trillion years from now when there might not be any biological minds around whatsoever.)


AF: Do you think Alan Turing’s reasons for believing in the potential of AI are different from your own and other modern day theorists?  If so in what ways?

VV: My guess is there is not much difference.


AF: Has Alan Turing and his work influenced your writing? If it has, how so?

VV: I’m not aware of direct influence. As a child, what chiefly influenced me was the science-fiction I was reading! Of course, those folks were often influenced by what was going in science and math and engineering of the time.

Alan Turing has had a multitude of incarnations in science fiction…   I think that besides being a broadly based math and science genius, Turing created accessible connections between classic philosophical questions and current issues.


AF: How do you think Alan Turing would respond to the specific concept of the Technological Singularity as described by you in your paper “The Coming Technological Singularity: How to Survive in the Post-Human Era“?

VV: I’d bet that Turing (and many AI pioneers) had extreme ideas about the consequences of superhuman machine intelligence. I’m not sure if Turing and I would agree about the potential for Intelligence Amplification and human/machine group minds.

I’d be _very_ interested in his reaction to modern analysis such as surveyed in Bostrom’s recent _Superintelligence_ book.


AF: In True Names, agents seek to protect their true identity. The guardian of the Coven’s castle is named ‘Alan Turing’ – what was the reason behind this?

VV: It was a tip of the hat in Turing’s direction. By the time I wrote this story I had become quite aware of Alan Turing (contrasting with my childhood ignorance that I mentioned earlier).


AF: Your first novella Bookworm Run! was themed around brute forcing simpler-than-human-intelligence to super-intelligence (in it a chimpanzee’s intelligence is amplified).  You also explore the area of intelligence amplification in Marooned in Realtime.
Do you think it is possible for a Singularity to bootstrap from brute forcing simple cognitive models? If so do you think Super-Intelligence will be achieved through brute-forcing simple algorithms?

VV: I view “Intelligence Amplification” (IA) as a finessing of the hardest questions by building on top of what already exists. Thus even UI design lies on the path to the Singularity. One could argue that Intelligence Amplification is the surest way of ensuring humanity in the super-intelligence (though some find that a very scary possibility in itself).


The Turing Test and Beyond

AF: Is the Turing Test important? If so, why, and how does its importance match up to tracking progress in Strong AI?

VV: In its general form, I regard the Turing Test as a marvelous, zen-like, bridge between reductionism and the inner feelings most people have about their own self-awareness.  Bravo Dr. Turing!


AF: Is a text conversation ever a valid test for intelligence? Is blackbox testing enough for a valid test for intelligence?

VV: “Passing the Turing Test” depends very much on the setup:
a) The examining human (child? adult? fixated or afflicted adult? –see Sherry Turkle’s examples of college students who passed a chatbot).
b) The duration of the test.
c) The number of human examiners participating.
d) Restrictions on the examination domain.

In _The Emperor’s New Mind_, Penrose has a (mostly negative) critique of the Turing Test. But at the end he says that if the test was very broad, lasting years, and convincing to him (Penrose), then it might be meaningful to talk about a “pass grade”.


AF: The essence of Roger Penrose’s argument (in The Emperor’s New Mind):
–  It is impossible for a Turing machine to enumerate all possible Gödel sentences. Such a program will always have a Gödel sentence derivable from its program which it can never discover.
–  Humans have no problem discovering these sentences and seeing the truth of them.
And he concludes that humans are not reducible to Turing machines.  Do you agree with Roger’s assessment – are humans not reducible to Turing machines?

VV: This argument depends on comparing a mathematical object (the Turing Machine) with whatever kind of object the speaker considers a “human mind” to be.  As a logical argument, it leaves me dubious.


AF: Are there any existing interpretations of the Turing Test that you favour?

VV: I think Penrose’s version (described above) is the most important.

In conversation, the most important thing is that all sides know which flavor of the test they are talking about 🙂


AF: You mentioned it has been fun tracking Turing Test contests, what are your thoughts on attempts at passing the Turing Test so far?

VV: So far, it seems to me that the philosophically important thresholds are still far away. Fooling certain people, or fooling people for short periods of time seems to have been accomplished.


AF: Is there any specific type of intelligence we should be testing machines for?

VV: There are intelligence tests that would be very interesting to me, but I’d rather not call them versions of the Turing Test. For instance, I think we’re already in the territory where more and more sorts of superhuman creativity and “intuition” are possible.

I think there will also be performance tests for IA and group mind projects.


AF: Some argue that testing for ‘machine consciousness’ is more interesting – what are your thoughts?

VV: Again, I’d keep this possibility separate from Turing Test issues, though I do think that a being that could swiftly duplicate itself and ramp intellect up or down per resource and latency constraints would have a vastly different view of reality compared to the severe and static time/space/mortality restrictions that we humans live with.


AF: The Turing Test seems like a competitive sport, though some interpretations of it set conditions that seem quite low.  The competitive nature of how the Turing Test is staged seems to me to select for the cheapest and least sophisticated methods to fool judges on a Turing Test panel.

VV: Yes.


AF: Should we be focusing on developing more complex and adaptive Turing style tests (more complex measurement criteria? more complex assessment)? What alternatives to a Turing Test competition (if any) would you suggest to motivate regular testing for machine intelligence?

VV: The answers to these questions may grow out of hard engineering necessity more than from the sport metaphor. Going forward, I imagine that different engineering requirements will acquire various tests, but they may look more like classical benchmark tests.


Tracking Progress in Artificial Intelligence

AF: Why is tracking progress towards AI important?

VV: Up to a point it could be important for the sort of safety reasons Bostrom discusses in _Superintelligence_. Such tracking could also provide some guidance for machine/human/society teams that might have the power to guide events along safe paths.


AF: What do you see as the most useful mechanisms for tracking progress towards a) human equivalence in AI, b) a Technological Singularity?

VV: The approach to human equivalence might be tracked with a broad range of tests. Such would also apply to the Singularity, but for a soft takeoff, I imagine there would be a lot of economic effects that could be tracked. For example:
–  trends in employment of classic humans, augmented humans, and computer/human teams;
–  trends in what particular jobs still have good employment;
–  changes in the personal characteristics of the most successful CEOs.

Direct tests of automation performance (such as we’ve discussed above) are also important, but as we approach the Singularity, the center of gravity shifts from the programmers to the programs and how the programs are gaming the systems.


AF: If you had a tardis and you could bring Alan Turing forward into the 21st century, would he be surprised at progress in AI?  What kinds of progress do you think he would be most interested in?

VV: I don’t have any special knowledge of Turing, but my guess is he would be pleased — and he would want to _understand_ by becoming a super himself.


AF: If and when the Singularity becomes imminent – is it likely that the majority of people will be surprised?

VV: A hard takeoff would probably be a surprise to most people. I suspect that a soft takeoff would be widely recognized.



AF: What opportunities could we miss if we are not well prepared (This includes opportunities for risk mitigation)?

VV: Really, the risk mitigation is the serious issue. Other categories of missed opportunities will probably be quickly fixed by the improving tech.  For pure AI, some risk mitigation is the sort of thing MIRI is researching.

For pure AI, IA, and group minds, I think risk mitigation involves making use of the human equivalent minds that already exist in great numbers (namely, the human race). If these teams and early enhancements recognized the issues, they can form a bridge across to the more powerful beings to come.


AF: You spoke about an AI Hard Takeoff as being potentially very bad – can you elaborate here?

VV: A hard takeoff is too fast for normal humans to react and accommodate to.  To me, a Hard Takeoff would be more like an explosion than like technological progress. Any failure in mitigation planning is suddenly beyond the ability of normal humans to fix.


AF: What stood out for you after reading Nick Bostrom’s book ‘Superintelligence – paths, dangers, strategies’?

VV: Yes. I think it’s an excellent discussion especially of the pure AI path to superintelligence. Even people who have no intense interest in these issues would find the first few chapters interesting, as they sketch out the problematic issues of pure AI superintelligence — including some points that may have been missed back in the twentieth century. The book then proceeds to a fascinating analysis of how to cope with these issues.

My only difference with the analysis presented is that while pure AI is likely the long term important issue, there could well be a period (especially in the case of a Soft Takeoff) where the IA and groupmind trajectories are crucial.


Vernor Vinge at Los Con 2012

* Hugo award winning novels & novellas include: A Fire Upon the Deep (1992), A Deepness in the Sky (1999), Rainbows End (2006), Fast Times at Fairmont High (2002), The Cookie Monster (2004), and The Peace War (1984).

Also see video interview with Vernor Vinge on the Technological Singularity.

Into the Wild Blue Yonder with Tim van Gelder

Into the Wild Blue Yonder – Tim van Gelder (who is speaking at the conference this year) – originally posted at H+ Magazine.
I recently did a series of interviews with Tim van Gelder on Intelligence Amplification, Artificial Intelligence, Argument Mapping and Douglas Engelbart’s contributions to computing and user interface design and collective wisdom.
Below the video interview is the article ‘Into the Deep Blue Yonder’.

Tim van Gelder was a founder of Austhink Software, an Australian software development company, and is the Managing Director of Austhink Consulting. He was born in Australia, educated at the University of Melbourne (BA, 1984), the University of Pittsburgh (PhD, 1989), and held academic positions at Indiana University and the Australian National University before returning to Melbourne as an Australian Research Council QEII Research Fellow. In 1998, he transitioned to part-time academic work allowing him to pursue private training and consulting, and in 2005 began working full-time at Austhink Software. In 2009 he transitioned to Managing Director of Austhink Consulting.

Here is one section of the series of interviews:

Into the Deep Blue Yonder

The original article appeared in the late 90s – but it reads very well – and reflects much of Tim van Gelder’s current thinking on AI. A slightly revised version appeared in Quadrant, Xmas 1997. The video interview above covers some similar topics to the article below.



Thousands of times every day, humans pit their wits against The Machine. On almost every occasion, they lose. Arcade games, bridge programs, pocket chess machines: the phenomenon is so familiar we no longer notice it. We have grown quite accustomed to being outclassed by electronic gadgets in many activities we find intellectually demanding.

In New York earlier this year, a 34-year-old Azerbaijani man sat down to a six-game match against a chess machine. This event, however, galvanised world attention. Chess enthusiasts followed every move by satellite TV or Internet. Newspaper headlines announced the score to millions more. Pundits the world over pontificated on the significance of the occasion.

Why the interest in this match? The Azerbaijani was Garry Kasparov, the reigning world chess champion, widely regarded as the greatest player in the history of the game. Kasparov is so good that very few players in the world today can even give him a serious game. To keep his form up, he likes to take on entire national teams in “clock simultaneous” matches. In these matches, every player, including Kasparov, has at most 2.5 hours “thinking time.”

On the other side of the board was the latest version of Deep Blue, IBM’s chess-playing computer. Deep Blue is the most powerful chess-playing device ever constructed. The match was billed as the ultimate confrontation of Mankind against The Machine. At stake was more than just Kasparov’s personal pride or IBM’s reputation in computer technology. At stake was more than just the title of best chess player in the known universe. At stake, apparently, was humanity’s self-image as uniquely or supremely intelligent, and hence as entitled to a central or at least special place in the cosmos. At stake also was humanity’s place on the ladder of power and authority. Machines with superhuman intelligence might eventually be able to enslave humans in relentless and efficient pursuit of their alien designs. We remain safe only as long as there are at least some white knights like Kasparov, humans still smarter than any machine.

Of course, the score is now a matter of historical record. Deep Blue won the match narrowly, 3.5 points to 2.5. Fortunately, humanity’s spin-doctors had already prepared a face-saving interpretation of the entire episode. Deep Blue, they countered, is a mechanistic idiot savant. Kasparov can shrug off his defeat, for the match was no more an interesting contest than pitting a pole-vaulter against a helicopter. Humanity can also breathe a collective sigh of relief and reassurance: we are still the smartest beings in the universe; we can still respect our unique intellectual capacities; we are not about to be subjugated by a new generation of ruthless machines.

These, then, are the two main interpretations of the Kasparov-Deep Blue clash. On one hand there are the alarmists, who see Deep Blue as the vanguard of an approaching army of superhuman intellects. On the other hand are the deflationists, who see Deep Blue as an overgrown and overhyped cash register. Both interpretations read the confrontation in the context of a world-historical competition between Mankind and The Machine. Alarmists see the match as a pivotal moment, one future historians will designate as the occasion upon which both pride of place and the balance of power were ceded to The Machine. Deflationists insist that The Machine is still stupid and Mankind is still safe.

In fact, both these interpretations are mistaken, or rather, misguided. Any interpretation of what may well be an epochal event is built on a foundation of factual and philosophical assumptions; if these are rotten, the edifice is inherently unstable. The situation is even worse when key structural members are fears and fantasies rather than logical implications.

The real significance of the Kasparov defeat is at once more strange and more comforting than either of these simple stories. We are not being superseded by The Machine, but not because The Machine is still a long way behind. Rather, the very distinction between Mankind and The Machine is under pressure. Long before The Machine could be regarded as having overwhelmed us, it will have become us. Ultimately, the loser in this confrontation is not Mankind or The Machine; it is our conception of ourselves as essentially homo sapiens.



Early in Stanley Kubrick’s famous movie 2001: A Space Odyssey, the astronaut Dave plays and loses a game of chess against HAL, the spaceship’s intelligent onboard computer. This event, more than the fact that it can control the ship or converse in normal English, demonstrates HAL’s intellectual superiority. As the plot develops it becomes apparent that HAL is out of control, to the point where it has been killing off human astronauts. Its superior intelligence now makes it a highly dangerous opponent.

HAL is a fictional embodiment of the alarmist interpretation of the Kasparov-Deep Blue confrontation. HAL instantiates what alarmists fear Deep Blue might become: a superhuman, general purpose intelligence, self-interested and pitiless. Standing behind this nightmarish vision is a collection of traditional philosophical ideas. Intelligence is regarded as the operation and outcome of Reason, the ability to make inferences in accordance with the principles of Logic. Reason is a specifically human trait, in the sense that members of Homo Sapiens are uniquely or at least supremely rational. It is Reason, more than anything else, which grants humans a special place in the cosmos; it gives them not only the ability, but also the right and duty to organise the world to their own advantage. Chess is the definitive test of intelligence; the winner is always the one with the greatest ability to apply reason in pursuit of its goals. The best chess player is the most intelligent, and therefore the most rational, powerful and privileged, of all beings.

The letters “HAL” immediately precede the letters “IBM” in the alphabet. Some people believe this is no accident; Kubrick chose those letters in order to highlight the danger IBM and corporations like it pose to humanity. This, however, is a myth. “HAL” is derived from “Heuristically programmed ALgorithmic computer.” When Kubrick, who had assistance from IBM in making the movie, found out about the coincidence, he wanted to change the name and was only prevented from doing so by production costs.

Just as IBM would not wish to be linked with the homicidal HAL, so it has tried to dispel the alarmist interpretation of Deep Blue’s victory. If Mankind had just been humiliated by The Machine, IBM would have to bear responsibility. Being cast as Dr Frankenstein in the public imagination would hardly benefit their corporate image. For this reason IBM is at the forefront of deflationist counter-reactions to the Deep Blue victory. Despite having invested millions of dollars and dozens of expert-years in the project, they are quick to advertise Deep Blue’s limitations. Kasparov, they said, plays with insight, intuition, finesse, imagination. Deep Blue just cranks out billions of possibilities. According to the IBM counter-hype, the real winners in the Kasparov-Deep Blue confrontation are people like you and me. The RS/6000 SP computer driving Deep Blue will be used in traffic control systems, internet applications, and a host of other mundane conveniences.

Chess has usually been regarded as the most intellectually challenging game known to man. It would be surprising indeed if a machine could beat the greatest player in history, and yet be fundamentally stupid. That, one is tempted to say, does not compute. That, however, is the position IBM is taking, and one that was echoed recently by none other than Bill Gates.

Two main lines of thought are used to underpin the interpretation of Deep Blue as harmless idiot-savant. The first is the idea that Deep Blue’s move selection is carried out in an utterly mindless fashion. Whereas Kasparov actually thinks about his options, Deep Blue follows pre-ordained rules specifying vast quantities of simple calculations, none of which require the least bit of understanding. This difference is manifested in the number of possible move sequences the players consider before making their moves. Kasparov, like all human chess players, considers only a few dozen or at most a few hundred sequences. Deep Blue considers literally billions of alternatives in a few seconds.

But if good chess is a matter of selecting the best move, and Deep Blue can examine so many more possibilities, how is it that Kasparov is even in the running? According to this line of thought, intelligence is precisely what makes the difference. Intelligence is the magic ingredient which enables Kasparov to recognize the overall board situation, to zero in on relevant features, to attend only to the most plausible lines of play, to look far ahead in the game, to be creative and daring in his play, and to learn from his opponent’s responses. With none of these abilities, Deep Blue is condemned to witless search of all possibilities, no matter how promising. The fact that Deep Blue can beat Kasparov just shows that brute force can sometimes achieve what would otherwise require real thought.

The second line of support considers Deep Blue’s performance in domains other than chess. This argument can be traced all the way back to René Descartes. In his Discourse on Method, Descartes considered how one might distinguish a real person from a sophisticated automaton imitating a person. He proposed two tests. The first is that one should attempt to engage the candidate in conversation. A machine, he argued, would never be able to “arrange words differently to reply to the sense of all that is said in its presence, as even the most moronic man can do.”

The second test is to explore the range of skills the putative person exhibits. Machines can do certain human-like things exceedingly well; witness the animatronic marvels at a place like Disneyland. However, they can only do those things because they were specifically designed and constructed for the job. Their design precludes them from doing anything else. For example, we now have machines which are better than humans at shearing sheep, but don’t expect them to knit a woolly jumper or even make a cup of tea. Humans, by contrast, can do a very wide range of things at least tolerably well. That’s because they don’t rely on dedicated machinery; rather, they control general purpose hardware (hands etc.) by means of thought processes.

Descartes believed that the “universal instrument” of Reason is necessary in order to pass both these tests. It is because we can think about the meanings of words that we can hold conversations, and it is because we can think about our actions that we can do so many different kinds of things.

Deep Blue, of course, immediately fails Descartes’ tests. It cannot even play checkers, let alone walk the dog or hold a conversation. Deflationists conclude that Deep Blue has exactly zero genuine intelligence, even though it plays the best chess in the world. Indeed, the two lines of thought come together: it is because Deep Blue plays chess without really thinking that it can do nothing other than checkmate its opponents.



These deflationary arguments certainly undermine the simple alarmist view that Deep Blue is the first of a new generation of superhuman intellects poised to enslave the human race. They do not, however, establish that Deep Blue is a witless moron. More careful consideration of the nature of chess, and the machines which play it, supports the commonsense view that Deep Blue does indeed have at least some measure of intelligence.

Chess is what is known as a formal system. Every board position and every move is well-defined and unambiguous, as are the starting and finishing positions. Further, chess is completely self-contained; nothing outside the board has any relevance to the game. Playing good chess means making a sequence of moves ending in checkmate for the opponent. The hard part, of course, is picking the right move at any given time. The typical number of moves available from any given position is about 35. Whether a move is a good one depends on what the next move of the opponent might be, your response, and so forth. A good player can tell which of these possible sequences of moves and countermoves is advantageous, and hence which of the 35 moves to select.

All a chess machine needs to do, then, is to examine all the available move-countermove sequences, and select one ending in checkmate for the opponent. Unfortunately, this simple strategy is completely out of the question (at least, for any technology currently imaginable). The fundamental problem is that of combinatorial explosion. It is illustrated by the following puzzle. Imagine folding a normal sheet of paper in half. The remaining “pile” is twice as thick as the original sheet. Continue until you have folded it 100 times. How thick is the pile now? Most people estimate a few yards. In fact, the pile would stretch eight hundred thousand billion times the distance from the earth to the sun (give or take a few trillion miles).
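The arithmetic behind the folding puzzle is easy to check. A short script (assuming a sheet thickness of roughly 0.1 mm, a figure not given in the text) confirms that 100 doublings dwarf the earth-sun distance by the stated factor:

```python
# Combinatorial explosion via repeated doubling: fold a sheet 100 times.
SHEET_THICKNESS_M = 1e-4   # assumed: ~0.1 mm per sheet of paper
EARTH_SUN_M = 1.496e11     # mean earth-sun distance (1 AU) in metres

pile_m = SHEET_THICKNESS_M * 2**100          # each fold doubles the pile
ratio = pile_m / EARTH_SUN_M
print(f"pile height: {pile_m:.2e} m")
print(f"= {ratio:.2e} earth-sun distances")  # ~8.5e14: "eight hundred thousand billion"
```

Exponential growth is the whole story here: the thickness of the sheet barely matters next to the factor of 2 applied 100 times over.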

Combinatorial explosion affects chess just as dramatically. The number of possible move sequences increases exponentially with each “ply” (move), and before long exceeds such familiar measures of enormity as the number of particles in the universe or the number of seconds since the beginning of time. This prevents any conceivable machine from playing good chess simply by mindlessly searching the branching tree of move sequences.

The real secret to good chess is not being able to consider vast quantities of move sequences (though that helps). Rather, the secret is being able to ignore the overwhelming majority of sequences, and focus attention on those relatively few which have some real promise. But how do you tell in advance which sequences to ignore? How do you prune from the search tree branches you haven’t even looked at?

The answer, basically, is that you use what computer scientists call "heuristics": rules of thumb providing reliable, though not infallible, guides. For example, a handy rule in finding checkmates is to examine first those moves that permit the opponent the fewest replies. Heuristics are distillations of considerable experience with the domain. At one level, a computer must always be programmed to "blindly" follow algorithms telling it exactly what to do and how to do it. At another level, however, those algorithms can embody heuristics guiding the computer in producing sophisticated, even "thoughtful", behaviour.

Deep Blue, like all chess computers, operates by means of heuristically-guided search. Its power results from two factors. On one hand, it is an enormously fast search engine. Its 256 specially-designed processors can consider almost a quarter of a billion moves every second; in a game it will examine trillions of possibilities before making a move. On the other hand, and even more importantly, its software embodies a vast amount of real chess knowledge encoded in the form of heuristics. The team of experts who spent years refining Deep Blue’s understanding of chess included an international grandmaster. Almost every match Kasparov has played in the last twenty years has been recorded; Deep Blue is intimately familiar with Kasparov’s game.

Therefore, the image of Deep Blue as a prodigiously powerful but essentially stupid “number cruncher” is seriously deficient. Deep Blue embodies a great deal of human-derived chess knowledge, and puts that knowledge to good use in choosing intelligently. Indeed, Deep Blue has to be that way; the problem of combinatorial explosion prevents any simple brute-force machine from playing good chess, at least for the foreseeable future.

An interesting consequence is that, as computers have reached the very top levels, their style of play has become more "human." For example, "trappy" moves (ones that gently coax an opponent into an apparent position of strength, but hold a sting many plies down the road) were once a human specialty. These days, with real chess knowledge guiding their search patterns, computers not only avoid traps, they set them. Kasparov himself is no longer able to say, reliably, whether an opponent is human or machine just by looking at the moves. (HAL, by the way, played chess that was quite "human" in style. This was no coincidence; the game in the movie was transcribed from an obscure match played in Hamburg in 1913.)

Deep Blue, then, does have intelligence. It plays a mean game of chess, and does so by thinking about its moves. There are still, to be sure, some significant differences between Kasparov’s thought processes and those of Deep Blue. Both, however, are thinking, and the outcome is the same.



If this is right, Descartes’ tests cannot be regarded as decisive. There can be genuine intelligence even in the absence of conversation or a wide range of skills. However, Descartes was clearly onto something important. If Deep Blue is so smart, why is it restricted to chess? Why can’t it talk about the football?

The deep reason, one of the most important discoveries of cognitive science, is that there are in fact many kinds of intelligence: diverse domains in which intelligence can be achieved, and various ways to achieve it. Some theorists have distinguished as many as seven different categories of intelligence, but the most important distinction for current purposes is that between what we can call formal intelligence, on one hand, and common sense on the other.

Formal intelligence is that required for domains which, like chess, are formal systems. Such domains might be hugely complex, but they are fundamentally well-defined and self-contained. Common sense is intelligence in domains not satisfying these conditions. Here there is no simple way to specify what the options are, and no way to draw boundaries around what might be relevant. Conversing is the classic example. What do you say when someone says "How are you doing?" Well, that depends: on who said it, in what tone of voice, where they were, what time it was. Try writing a complete set of rules for just the second line of a perfectly ordinary conversation and you'll find out just how much common sense ordinary people actually exhibit.

The difference between formal intelligence and common sense is illustrated by the contrast between formal logic and its informal counterpart. Formal logic is manipulation of symbolic structures in accordance with strict rules. At elementary levels it is a dull, even “mindless” activity (though still a difficult skill for many people to pick up); at advanced levels, it is quite creative. It has been relatively easy to program computers to perform in this domain, though the best logicians are currently still humans.

Informal logic, on the other hand, is a matter of determining when somebody is justified in making some assertion. Would further reductions in tariff barriers lead to further unemployment? A great deal of evidence can be brought to bear, but there are no algorithmic procedures for determining whether the conclusion follows. For centuries, philosophers harboured the misconception that formal and informal logic are, deep down, the same thing: that all informal reasoning is just a complicated version of predicate calculus. More recently it has become apparent that informal logic requires a great deal of "nous," and there is no easy way to translate that into rule-governed symbol manipulation.

Formal intelligence and common sense are both varieties of intelligence; they are both a matter of figuring out what you should do to achieve your goals within a certain domain. However, they are very different, and they do not easily adapt to each other's roles. On one hand, ordinary people have buckets of common sense (well, most of them, most of the time), but they are inept at chess, mathematics, formal logic, etc. On the other hand, formal intelligence doesn't automatically provide common sense. There is, of course, the stereotype of the absent-minded physics professor. More seriously, Deep Blue can't do the weekly shopping, and there is no simple way to adapt its prodigious formal intelligence to that apparently elementary task.

Traditional artificial intelligence, the science and engineering of smart computers, has grappled with both kinds of intelligence. Its successes in formal domains have been matched by a notable lack of success at reproducing common sense. The standard approach has been to attempt to translate the informal domain into an approximately commensurate formal system. Unfortunately, this enterprise is at least extraordinarily difficult, and perhaps impossible. There are some research projects around the world grappling with the problem, but don't hold your breath.

From this perspective, Deep Blue’s victory does signify something important about artificial intelligence: namely that, as one expert put it, the easy (formal) part is now almost over, and the real work is just beginning. Computers are reaching superiority in a kind of intelligence which is rather difficult for humans to achieve. However, they are barely at first base with regard to the kind of intelligence humans find entirely natural-negotiating their way around the everyday world.



Thus far, I have argued that neither the simple alarmist interpretation, nor the simple deflationist reaction, can be sustained. Deep Blue is not a superhuman intellect, but neither is it just a cash-register on steroids. It is an enormously sophisticated machine exhibiting a significant measure of intelligence in one formal domain, and none in all others. Until computer scientists can solve the far more difficult problem of common sense intelligence, machines will remain our intellectual inferiors and subject to our dominion.

Is this likely to happen, and if so, when? Some philosophers have claimed that it will always be impossible for digital computers to exhibit any significant degree of common sense. Hubert Dreyfus of the University of California at Berkeley is the most important of this group. He has provided powerful arguments that common sense depends upon vast quantities of everyday knowledge and know-how which can never be fully articulated in a form usable by digital computers.

Such predictions, however, are inherently risky, for they depend on our current levels of understanding of the nature of the problem and the limits of technology. Meanwhile, many researchers are tackling various aspects of the problem and making what counts as, at the very least, piecemeal progress on the fringes. The most famous of these efforts is the “CYC” project pioneered by Doug Lenat. The goal here is to “upload” the entirety of human commonsense knowledge into a vast electronic encyclopedia ready for use by other programs. The CYC people claim to already have commercial applications up and running.

My own opinion is that researchers in artificial intelligence will, most likely, eventually succeed in solving the problem of commonsense intelligence. It will not be anytime soon. Cracking the chess nut took about four decades longer than originally predicted, and in the meantime we've come to understand that chess was the easy problem. Common sense may well take centuries. Alan Turing, the father of artificial intelligence, predicted in 1950 that by the end of the century (that is, by around now) we would have machines able to converse at pretty convincing levels. No such luck. You can, if you like, interact over the internet with the best "conversation" machines in the world today. The experience is sure to impress upon you the difficulty of programming a computer with common sense. Nevertheless, progress is being made. The goal, genuine intelligence on tap, is so valuable that vast resources and ingenuity will be thrown at it over the next few hundred years. My money, for what it is worth, is on the side of the computer engineers.

In the case of chess, truly excellent levels of play were only achieved once scientists had developed sufficient understanding of how humans manage to play the game so well, and figured out how to transfer some of that understanding into the computer's design. Deep Blue's intelligence was thus largely a matter of human intelligence, abstracted out and reimplemented in digital hardware. The same will be true in the case of common sense. Constructing computers which hold conversations will only be possible once we understand much better what it is that an ordinary person knows, and how that knowledge is organised, accessed and updated. Once these problems in cognitive science have been solved, the computer scientists will face the challenge of building electronic instantiations of the same principles.

In other words, artificial intelligence succeeds in part through mimicry. It produces silicon simulacra of the basic principles underlying human intelligence. This is because the fundamental requirements of intelligent performance are universal; what varies are their implementations in different kinds of hardware. Evolution developed in humans a neurobiological implementation of the solution to the problem of common sense intelligence. Artificial intelligence will develop an alternative implementation of what is, at the relevant abstract level, the same solution.



Suppose this is correct. Suppose that in fifty years or so computer scientists have succeeded in producing, say, an automatic personal banker. You dial the bank on your videophone and are connected to a virtual “talking head,” a kind of supersmooth version of Max Headroom. You interact with this artificial persona just as you would with an ordinary human being. The conversation is quite intimate; your banker has a name, a personality, and knows quite a bit about you from the bank’s files and your previous interactions. As long as you don’t stray too far from the world of deposits, balances, and mortgages, the illusion that you are interacting with a flesh-and-blood human will be overwhelming.

Now for the critical question: is this personal banker human or machine? More generally, will artificial intelligence be producing artificial humans, or just machine intelligence? At one extreme there is the hard-line view that nothing can really be human unless it is Homo sapiens, i.e., shares our own evolutionary ancestry and our biological incarnation. According to this position, no matter how sophisticated these systems become, they will always be mere machines, imitating but never instantiating human nature. At the other extreme there is the ultra-liberal view that membership of Homo sapiens is at best an accident of history, and has no essential connection to one's social and ethical status as human. It took many centuries, but in the West at least we finally arrived at the enlightened view that the borders of humankind have nothing to do with those of gender and skin colour. Some people now argue that we should extend these borders even further to include dolphins and other putative intelligentsia. The point is that recognition as "one of us," with attendant rights and responsibilities, should depend not on arbitrary details of one's history or embodiment but on one's capacity to participate in human forms of life. Taken to its logical limit, this view would extend the privilege of human status even to programmed computers.

The philosophical choice between hard-line biologism and a more catholic liberalism is not an easy one, and I don't intend to adjudicate the matter here. The point of interest is that artificially intelligent machines participating in human forms of life are the kind of case which puts pressure on the seemingly simple distinction between Mankind and The Machine. For most of the industrial age, the distinction was obvious enough: people were flesh and blood, born of woman, rational and emotional, social and spiritual; machines were metal and electricity, born of the workshop, cold and insensitive. The utterly alien character of traditional machines made it easy to see the relationship between Man and Machine as one of opposition and perhaps competition. This attitude of "them against us" is still with us even in the age of information technology. Thus the Kasparov-Deep Blue match is cast as a critical episode in a kind of cosmic struggle to the death between humanity and the emerging machine.

By the time computers have been programmed with common sense, the contrast between Mankind and Machine will have become blurred, if not entirely overthrown. Computers which match our everyday forms of intelligence, and achieve this precisely because they recapitulate the basic principles underlying our own intelligent behaviour, will have become very much like us. It will not be easy, either psychologically or philosophically, to draw a rigid distinction between people and PCs. Of course, it will always be possible to doggedly maintain that human nature is essentially a matter of lineage or embodiment, and to distribute rights and privileges accordingly. As philosopher Robert Brandom remarked, "'We' is said in many ways." There is an unavoidable element of arbitrariness in deciding that "we" will stop at the boundary of our species. Many will choose to draw the boundaries somewhat differently, and in the process revise the very concept of humanity.

I am suggesting that machines will never outperform humans in an intelligence contest. By the time any such confrontation could conceivably come about, the conceptual contrast between human and machine, upon which the apparent interest of the contest depends, will have been drastically revised. Computers with common sense will not be humans, in the ordinary sense of today. Neither, however, will they be just machines, in the ordinary sense of today. They will be a wholly new entrant onto the ontological stage, displacing forever the current constellation of concepts in terms of which we contemplate our place in the world. The irresistible onwards march of information technology will not produce machines superior to humans. Rather, it will overhaul our understanding of what we are and what machines are. It will replace a binary opposition with a rich spectrum of manifestations of intelligence, and a correspondingly rich range of ways of determining who or what counts as one of “us.”

Deep Blue’s victory over Kasparov was the first major public triumph of artificial, programmed intelligence over evolved biological intelligence. It was indeed an event of world-historical significance. Not, as the alarmist fears, because it signifies the arrival of intelligent machines as potential competitors to humanity. Rather, it is significant because it is the first major milestone in a long process of transformation of human self-understanding, and hence of human being. If we see history in Hegelian terms, as a series of stages in the evolution of the spirit or self-consciousness, Deep Blue’s victory lies at the cusp of a new era. Our own mastery of technology, and our level of scientific self-understanding, are reaching the stage where we can recreate aspects of ourselves in non-biological form, and in the process dramatically transform our understanding of what we essentially are. As Kasparov himself put it:

For more video interviews please Subscribe to Adam Ford’s YouTube Channel