7th Annual Conference of the Australasian Bayesian Network Modelling Society (ABNMS2015)

November 23 – 24, 2015: Pre-Conference Workshop
November 25 – 26, 2015: Conference


Location: Monash University, Caulfield, Melbourne (Australia)

Keynote Speakers: The conference organisers are pleased to announce that Dr Bruce Marcot of the US Forest Service, Dan Ababei of Lighttwist Software (Netherlands), and Assoc Prof Jonathan Keith of Monash University will deliver the keynote addresses.

You will be able to register for the tutorials and the conference separately or together.

Bayesian Intelligence blog post about the conference

– Dr. Kevin B. Korb is a Director and co-founder of Bayesian Intelligence, and a reader at Monash University. He specializes in the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method and informal logic. Email: kevin.korb (at)

– Prof. Ann E. Nicholson is a Director and co-founder of Bayesian Intelligence and a professor at Monash University who specializes in Bayesian network modelling. She is an expert in dynamic Bayesian networks (BNs), planning under uncertainty, user modelling, Bayesian inference methods and knowledge engineering BNs. Email: ann (dot) nicholson (at) bayesian-intelligence (dot) com

Many thanks for watching!
– Support me via Patreon:
– Please Subscribe to this Channel:
– Science, Technology & the Future website:

Vernor Vinge on the Technological Singularity

What is the Singularity? Vernor Vinge speaks about technological change, offloading cognition from minds into the environment, and the potential of Strong Artificial Intelligence.

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” – Vernor Vinge, “The Coming Technological Singularity”, 1993

Vernor Vinge coined and popularised the term “Technological Singularity” in his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.

courtesy of the Imaginary Foundation


Vinge published his first short story, “Bookworm, Run!”, in the March 1966 issue of Analog Science Fiction, then edited by John W. Campbell. The story explores the theme of artificially augmented intelligence by connecting the brain directly to computerised data sources. He became a moderately prolific contributor to SF magazines in the 1960s and early 1970s. In 1969, he expanded two related stories (“The Barbarian Princess”, Analog, 1966, and “Grimm’s Story”, Orbit 4, 1968) into his first novel, Grimm’s World. His second novel, The Witling, was published in 1975.

Vinge came to prominence in 1981 with his novella True Names, perhaps the first story to present a fully fleshed-out concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others.


Vernor Vinge

Image Courtesy – Long Now Foundation

Automating Science: Panel – Stephen Ames, John Wilkins, Greg Restall, Kevin Korb

A discussion among philosophers, mathematicians and AI experts on whether science can be automated, what it means to automate science, and the implications of automating science – including discussion on the technological singularity.

– implementing science in a computer – Bayesian methods are the most promising normative standard for doing inductive inference
– vehicle: causal Bayesian networks – probability distributions over random variables showing causal relationships
– probabilifying relationships – tests whose evidence can raise the probability
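The causal-Bayesian-network "vehicle" mentioned above can be made concrete in a few lines of Python. This is a minimal hand-rolled sketch using the standard textbook rain/sprinkler/wet-grass network (an illustrative example, not one used in the panel), showing how evidence about an effect raises the probability of a cause:

```python
# Minimal causal Bayesian network sketch: Rain -> Wet <- Sprinkler.
# (Standard textbook example; probabilities are illustrative.)
from itertools import product

P_RAIN = 0.2
P_SPRINKLER = 0.3
# Conditional probability table: P(Wet=True | Rain, Sprinkler)
P_WET = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.80, (False, False): 0.01}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment, factored along the causal graph."""
    p = (P_RAIN if rain else 1 - P_RAIN)
    p *= (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
    pw = P_WET[(rain, sprinkler)]
    return p * (pw if wet else 1 - pw)

def prob_rain_given_wet():
    """P(Rain | Wet) by brute-force enumeration and Bayes' rule."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(round(prob_rain_given_wet(), 3))  # 0.484 – seeing wet grass raises P(rain) from 0.2
```

Real BN tools use far more efficient inference algorithms; exhaustive enumeration like this is exponential in the number of variables, but it makes the semantics explicit.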

05:23 does Bayesianism misrepresent the majority of what people do in science?

07:05 How to automate the generation of new hypotheses?
– Is there a clean dividing line between discovery and justification? (Popper’s view on the difference between the context of discovery and the context of justification) Sure, we discuss the difference between the concepts – but what is the difference in implementation?

08:42 Automation of Science from beginning to end: concept formation, discovery of hypotheses, developing experiments, testing hypotheses, making inferences … hypothesis testing has been done – though concept formation is an interestingly difficult problem

9:38 does everyone on the panel agree that automation of science is possible? Stephen Ames: not yet, but the goal is imminent, until it’s done it’s an open question – Kevin/John: logically possible, question is will we do it – Greg Restall: Don’t know, can there be one formal system that can generate anything classed as science? A degree of open-endedness may be required, the system will need to represent itself etc (Gödel!=mysticism, automation!=representing something in a formal deductive theory)

13:04 There is a Godel theorem that applies to a formal representation for automating science – that means that the formal representation can’t do everything – therefore what’s the scope of a formal system that can automate science? What will the formal representation and automated science implementation look like?

14:20 Going beyond formal representations to automate science (John Searle objects to AI on the basis of formal representations not being universal problem solvers)

15:45 Abductive inference (inference to the best explanation) – & Popper’s pessimism about a logic of discovery has no foundation – where does it come from? Calling it logic (if logic means deduction) is misleading perhaps – abduction is not deductive, but it can be formalised.

17:10 Some classification systems fall out of neural networks or clustering programs – Google’s concept of a cat is not deductive (AFAIK)

19:29 Map & territory – Turing Test – ‘if you can’t tell the difference between the model and the real system – then in practice there is no difference’ – the behavioural test is probably a pretty good one for intelligence

22:03 Discussion on IBM Watson on Jeopardy – a lot of natural language processing but not natural language generation

24:09 Bayesianism – in mathematics and in human probabilistic reasoning – introduced the concept of not seeing everything in black and white. People often get statistical problems wrong when asked to answer intuitively. Is the technology likely to have a broad impact?
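A standard illustration of people getting such problems wrong intuitively is base-rate neglect (the numbers below are illustrative, not from the discussion). Bayes' rule gives a posterior far smaller than the test's headline accuracy suggests:

```python
# Base-rate neglect sketch (illustrative numbers): a test that is 99% sensitive
# and 99% specific, for a condition with prevalence 1 in 1000.
prevalence = 0.001
sensitivity = 0.99   # P(test positive | condition)
specificity = 0.99   # P(test negative | no condition)

# Total probability of a positive test: true positives + false positives
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' rule: P(condition | positive test)
posterior = sensitivity * prevalence / p_positive
print(round(posterior, 3))  # ~0.09 – far below the intuitive answer of 0.99
```

Most of the positive tests come from the much larger healthy population, which is exactly the base-rate information intuition tends to discard.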

26:26 Human thinking, subjective statistical reasoning – and the mismatch between the public communicative act often sounding like Boolean logic – a mismatch between our internal representation and the tools we have for externally representing likelihoods
29:08 Low-hanging fruit in human communication of probabilistic reasoning – Bayesian nets and argument maps (Bayesian nets give strengths between premises and conclusions)

29:41 Human inquiry, wondering and asking questions – how do we automate asking questions (as distinct from making statements)? Scientific abduction is connected to asking questions – there is no reason why asking questions can’t be automated – there are contrastive explanations and conceptual space theory where you can characterise a question – causal explanation using causal Bayesian networks (and when proposing an explanation it must be supported by some explanatory context)

32:29 Automating Philosophy – if you can automate science you can automate philosophy –

34:02 Stanford Computational Metaphysics project (colleagues of Greg Restall) – formalization of representations of relationships between concepts – going back to Leibniz – complex notions can be boiled down to simpler primitive notions and grinding out these primitive notions computationally – they are making genuine discoveries
Weak Reading: can some philosophy be automated – yes
Strong Reading: can all of philosophy be automated? – there seem to be some things that count as philosophy that don’t look like they will be automated in the next 10 years

35:41 If what we’re interested in is to represent and automate the production of reasoning formally (not only to evaluate it), as long as the domain is such that we are making claims and we are interested in the inferential connections between the claims, then a lot of the properties of reasoning are subject-matter agnostic.

36:46 (Rohan McLeod) Regarding Creationism is it better to think of it as a poor hypothesis or non-science? – not an exclusive disjunct, can start as a poor hypothesis and later become not-science or science – it depends on the stage at the time – science rules things out of contention – and at some point creationism had not been ruled out

38:16 (Rohan McLeod) Is economics a science or does it have the potential to be (or is it intrinsically not possible for it to be a science) and why?
Are there value judgements in science? And if there are, how do you falsify a hypothesis that conveys a value judgement? Physicists make value judgements on hypotheses – “h1 is good, h2 is bad” – economics may have irreducible normative components but physics doesn’t (electrons aren’t the kinds of things that economies are) – Michael ??? paper on value judgements – “there is no such thing as a factual judgement that does not involve value” – while there are normative components to economics, it is studied from at least one remove – the problem is economists try to make normative judgements like “a good economy/market/corporation will do X”

42:22 Problems with economics – incredibly complex, it’s hard to model, and without a model there exists a vacuum that gets filled with ideology – (are ideologies normative?)

42:56 One of the problems with economics is it gets treated like a natural system (in physics or chemistry) which hides all the values which are getting smuggled in – commitments and values which are operative and contribute to the configuration of the system – a contention is whether economics should be a science (Kevin: Yes, Stephen: No) – perhaps economics could be called a nascent science (in the process of being born)

44:28 (James Fodor) Well-known scientists have thought that their theories were implicit in nature before they found them – what’s the role of intuition in automating science & philosophy? – need intuitions to drive things forward – intuition in the abduction area – to drive inspiration for generating hypotheses – though a lot of what gets called intuition is really the unconscious processing of a trained mind (an experienced driver doesn’t have to process how to drive a car) – Louis Pasteur’s prepared mind – trained prior probabilities

46:55 The Singularity – disagreement? John Wilkins suspects it’s not physically possible – Where does Moore’s Law (or its equivalents in other hardware paradigms) peter out? The software problem could be solved near or far. Kevin agrees with I.J. Good – recursively improving abilities without (obvious) end (within thermodynamic limits). Kevin Korb explains the intelligence explosion.

50:31 Stephen Ames discusses his view of the singularity – but disagrees with uploading on the grounds of needing to commit to philosophical naturalism

51:52 Greg Restall mistrusts IT corporations to get uploading right – Kevin expresses concerns about using Star Trek transporters – the lack of physical continuity. Greg discusses theories of intelligence – planes fly as do birds, but planes are not birds – they differ

54:07 John Wilkins – way too much emphasis is put on propositional knowledge and communication in describing intelligence – each human has roughly the same amount of processing power – too much rests on academic pretense and conceit.

54:57 The Harvard Rule – under conditions of consistent lighting, feeding etc – the organism will do as it damn well pleases. But biology will defeat simple models. Also Hull’s rule – no matter what the law in biology is, there is an exception (including Hull’s law) – so simulated biology may be difficult. We won’t simulate an entire organism – we can’t simulate a cell. Kevin objects

58:30 Greg R. says simulations and models do give us useful information – even if we isolate certain properties in simulation that are not isolated in the real world – John Wilkins suggests that there will be a point where it works until it doesn’t

1:00:08 One of the biggest differences between humans and mice is 40 million years of evolution in both directions – the problem is in evo biol is your inductive projectability – we’ve observed it in these cases, therefore we expect it in this – it fades out relatively rapidly in direct disproportion to the degree of relatedness

1:01:35 Colin Kline – PSYCHE – and other AI programs making discoveries – David Chalmers has proposed the Hard Problem of Consciousness – pZombies – but we are all pZombies, so we will develop systems that are conscious because there is no such thing as consciousness. Kevin is with Dennett – information-processing functioning is what consciousness supervenes upon
Greg – concept formation in systems like PSYCHE – but this milestone might be very early in the development of what we think of as agency – if the machine is worried about being turned off or complains about getting bored, then we are onto something

On Artificial Intelligence – Tim Josling

Tim Josling discusses AI, the Singularity, the way the public might react, whether they would be prepared, John Searle’s Chinese Room thought experiment, and consciousness.

Filmed in the majestic Blue Mountains a couple of hours out of Sydney in Australia. Here are some photos I took while I was there.

Also see Tim’s talk at H+ @Melbourne 2012

Tim’s Bio

Tim Josling studied Law, Anthropology, Philosophy and Mathematics before switching to Computer Science at the dawn of the computer era. He worked on implementing some of the first transactional systems in Australia, later worked on the first ATM networks, was the chief architect for one of the first Internet banking applications in Australia, and designed an early message-switching (“middleware”) application in the USA. During his career he specialised in making large-scale applications reliable and fast, saving several major projects from being cancelled due to poor performance and excessive running costs. This led to an interest in the progress of computer hardware and in Moore’s Law, which states that the power of computers grows roughly 10-fold every 5 years. In his spare time he contributed to various open-source projects such as the GNU Compiler Collection. After attending the first Singularity Summit in Australia, he decided to retire so he could devote himself full-time to researching Artificial Intelligence, the Technological Singularity and Transhumanism. He is currently working on applying AI techniques to financial and investment applications.
Talk: The Surprising Rate of Progress in Artificial Intelligence Research


Understanding the New Statistics

Geoff discusses statistics, confidence intervals, Bayesian approaches, meta-analysis, and problems with the use of ‘P’ values in significance testing.

Discussion points:
– Describe your background and involvement in statistics.
– How have orthodox statistics helped psychology (& science)? How has it harmed the science?
– What methods, models and tools do you commonly use in data analysis and why do you choose them?
– What is the dance of the p values? How do you cope with dancing p’s?
– What is meta-analysis & how is it done? How have meta-analysts coped with the bias in publishing data and results? What has the profession done about it?
– Confidence intervals help compared to p’s, by providing info about variation. Do they help enough? Why not credible intervals? Do you see a role for Bayesian statistics in day-to-day science?
– Where is statistical inference heading? Is there a next big thing and, if so, what is it?
– Does every student need to learn computer programming (“coding”) nowadays?
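Cumming's "dance of the p values" is easy to reproduce with a short simulation: run the very same two-group experiment repeatedly and the p values jump all over the place. This sketch uses only the Python standard library and a normal approximation to the t statistic, not Cumming's own ESCI software:

```python
# "Dance of the p values" sketch (not ESCI): replicate the same
# two-group experiment 25 times and watch the p value jump around.
import math
import random
import statistics

random.seed(1)  # fixed seed for reproducibility

def one_experiment(n=32, effect=0.5):
    """One two-group study with a true standardized effect size of 0.5."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(b) - statistics.mean(a)) / se
    # Two-sided p value from the normal approximation
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

ps = [one_experiment() for _ in range(25)]
print([round(p, 3) for p in ps])  # identical replications, wildly different p values
```

With 32 per group and a true effect of 0.5, statistical power is only about 50%, so roughly half of identical replications reach p < .05 – exactly the instability that reporting effect sizes and confidence intervals is meant to make visible.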

Interviewed by Kevin Korb and Adam Ford at Monash University Clayton.

Geoff’s YouTube Channel can be found here.
About the book:
Cumming, G. (2012). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York: Routledge

–    Explains estimation, with many examples.
–    Designed for any discipline that uses statistical significance testing.
–    For advanced undergraduate and graduate students, and researchers.
–    Comes with free ESCI software.
–    May be the first evidence-based statistics textbook.
–    Assumes only prior completion of any intro statistics course.
–    See the dance of the confidence intervals, and many other intriguing things.

The main message of the book is summarised in two short magazine articles, in The Conversation, and InPsych.
Here is an interview on ABC Radio.

Buy ‘Understanding the New Statistics’ from Amazon

This is the first book to introduce the new statistics – effect sizes, confidence intervals, and meta-analysis – in an accessible way. It is chock-full of practical examples and tips on how to analyze and report research results using these techniques. The book is invaluable to readers interested in meeting the new APA Publication Manual guidelines by adopting the new statistics – which are more informative than null hypothesis significance testing, and becoming widely used in many disciplines.

Accompanying the book is the Exploratory Software for Confidence Intervals (ESCI) package, free software that runs under Excel. The book’s exercises use ESCI’s simulations, which are highly visual and interactive, to engage users and encourage exploration. Working with the simulations strengthens understanding of key statistical ideas. There are also many examples, and detailed guidance to show readers how to analyze their own data using the new statistics, and practical strategies for interpreting the results. A particular strength of the book is its explanation of meta-analysis, using simple diagrams and examples. Understanding meta-analysis is increasingly important, even at undergraduate levels, because medicine, psychology and many other disciplines now use meta-analysis to assemble the evidence needed for evidence-based practice.

The book’s pedagogical program, built on cognitive science principles, reinforces learning:

  • Boxes provide “evidence-based” advice on the most effective statistical techniques.
  • Numerous examples reinforce learning, and show that many disciplines are using the new statistics.
  • Graphs are tied in with ESCI to make important concepts vividly clear and memorable.
  • Opening overviews and end of chapter take-home messages summarize key points.
  • Exercises encourage exploration, deep understanding, and practical applications.

This highly accessible book is intended as the core text for any course that emphasizes the new statistics, or as a supplementary text for graduate and/or advanced undergraduate courses in statistics and research methods in departments of psychology, education, human development, nursing, and the natural, social, and life sciences. Researchers and practitioners interested in understanding the new statistics, and future published research, will also appreciate this book. A basic familiarity with introductory statistics is assumed.


Vernor Vinge on the Turing Test, Artificial Intelligence


On the coat-tails of the blockbuster film “The Imitation Game” I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to the idea that the Turing Test may someday show that machines would ostensibly be (at least in controlled circumstances) indistinguishable from humans.
Vernor Vinge is a mathematician and science fiction author who is well known for many Hugo Award-winning novels and novellas*   and his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.


Alan Turing and the Computability of Intelligence

Adam Ford: Alan Turing is considered the “Father of Theoretical Computer Science and Artificial Intelligence” – his view about the potential of AI contrasts with much of the skepticism that has subsequently arisen. What is at the root of this skepticism?

Vernor Vinge: The emotional source of the skepticism is the ineffable feeling that many (most?) people have against the possibility that self-awareness could arise from simple, constructed devices.


AF: Many theorists feel that the combined talents of pure machines and humans will always produce more creative and therefore useful output – what are your thoughts?

VV: When it comes to intelligence, biology just doesn’t have legs. _However_ in the near term, teams of people plus machines can be much smarter than either — and this is one of the strongest reasons for being optimistic that we can manage the new era safely, and project that safety into the farther future.


AF: Is the human brain essentially a computer?

VV: Probably yes, but if not the lack can very likely be made up for with machine improvements that we humans can devise.


AF: Even AI critics John Searle and Hubert Dreyfus (i.e. “What Computers (Still) Can’t Do”) agree that a brain simulation is possible in theory, though they argue that merely mimicking the functioning brain would in itself be an admission of ignorance (concerning intelligence) – what are your thoughts?

VV: The question of whether there is self-awareness behind a mimic may be the most profound issue, but for almost all practical purposes it isn’t relevant: in a few years, I think we will be able to make machines that can run circles around any human mind by all externally measured criteria. So what if no one is really home inside that machine?

Offhand, I can think of only one practical import to the answer, but that _is_ something important: If such minds are self-aware in the human sense, then uploads suddenly become very important to us mortality-challenged beings.

For reductionists interested in _that_ issue, some confidence might be achieved with superintelligence architectures that model those structures in our minds that reductionists come to associate with self-awareness. (I can imagine this argument being carried on by the uploaded supermind children of Searle and Moravec — a trillion years from now when there might not be any biological minds around whatsoever.)


AF: Do you think Alan Turing’s reasons for believing in the potential of AI are different from your own and other modern day theorists?  If so in what ways?

VV: My guess is there is not much difference.


AF: Has Alan Turing and his work influenced your writing? If it has, how so?

VV: I’m not aware of direct influence. As a child, what chiefly influenced me was the science fiction I was reading! Of course, those folks were often influenced by what was going on in science and math and engineering of the time.

Alan Turing has had a multitude of incarnations in science fiction…   I think that besides being a broadly based math and science genius, Turing created accessible connections between classic philosophical questions and current issues.


AF: How do you think Alan Turing would respond to the specific concept of the Technological Singularity as described by you in your paper “The Coming Technological Singularity: How to Survive in the Post-Human Era“?

VV: I’d bet that Turing (and many AI pioneers) had extreme ideas about the consequences of superhuman machine intelligence. I’m not sure if Turing and I would agree about the potential for Intelligence Amplification and human/machine group minds.

I’d be _very_ interested in his reaction to modern analysis such as surveyed in Bostrom’s recent _Superintelligence_ book.


AF: In True Names, agents seek to protect their true identity. The guardian of the Coven’s castle is named ‘Alan Turing’ – what was the reason behind this?

VV: It was a tip of the hat in Turing’s direction. By the time I wrote this story I had become quite aware of Alan Turing (contrasting with my childhood ignorance that I mentioned earlier).


AF: Your first short story “Bookworm, Run!” was themed around brute-forcing simpler-than-human intelligence to super-intelligence (in it a chimpanzee’s intelligence is amplified). You also explore the area of intelligence amplification in Marooned in Realtime.
Do you think it is possible for a Singularity to bootstrap from brute forcing simple cognitive models? If so do you think Super-Intelligence will be achieved through brute-forcing simple algorithms?

VV: I view “Intelligence Amplification” (IA) as a finessing of the hardest questions by building on top of what already exists. Thus even UI design lies on the path to the Singularity. One could argue that Intelligence Amplification is the surest way of ensuring humanity in the super-intelligence (though some find that a very scary possibility in itself).


The Turing Test and Beyond

AF: Is the Turing Test important? If so, why, and how does its importance match up to tracking progress in Strong AI?

VV: In its general form, I regard the Turing Test as a marvelous, zen-like, bridge between reductionism and the inner feelings most people have about their own self-awareness.  Bravo Dr. Turing!


AF: Is a text conversation ever a valid test for intelligence? Is blackbox testing enough for a valid test of intelligence?

VV: “Passing the Turing Test” depends very much on the setup:
a) The examining human (child? adult? fixated or afflicted adult? –see Sherry Turkle’s examples of college students who passed a chatbot).
b) The duration of the test.
c) The number of human examiners participating.
d) Restrictions on the examination domain.

In _The Emperor’s New Mind_, Penrose has a (mostly negative) critique of the Turing Test. But at the end he says that if the test was very broad, lasting years, and convincing to him (Penrose), then it might be meaningful to talk about a “pass grade”.


AF: The essence of Roger Penrose’s argument (in The Emperor’s New Mind):
–  It is impossible for a Turing machine to enumerate all possible Gödel sentences. Such a program will always have a Gödel sentence derivable from its program which it can never discover.
–  Humans have no problem discovering these sentences and seeing the truth of them.
And he concludes that humans are not reducible to Turing machines. Do you agree with Roger’s assessment – are humans not reducible to Turing machines?

VV: This argument depends on comparing a mathematical object (the Turing Machine) with whatever kind of object the speaker considers a “human mind” to be.  As a logical argument, it leaves me dubious.


AF: Are there any existing interpretations of the Turing Test that you favour?

VV: I think Penrose’s version (described above) is the most important.

In conversation, the most important thing is that all sides know which flavor of the test they are talking about 🙂


AF: You mentioned it has been fun tracking Turing Test contests, what are your thoughts on attempts at passing the Turing Test so far?

VV: So far, it seems to me that the philosophically important thresholds are still far away. Fooling certain people, or fooling people for short periods of time seems to have been accomplished.


AF: Is there any specific type of intelligence we should be testing machines for?

VV: There are intelligence tests that would be very interesting to me, but I’d rather not call them versions of the Turing Test. For instance, I think we’re already in the territory where more and more sorts of superhuman creativity and “intuition” are possible.

I think there will also be performance tests for IA and group mind projects.


AF: Some argue that testing for ‘machine consciousness’ is more interesting – what are your thoughts?

VV: Again, I’d keep this possibility separate from Turing Test issues, though I do think that a being that could swiftly duplicate itself and ramp intellect up or down per resource and latency constraints would have a vastly different view of reality compared to the severe and static time/space/mortality restrictions that we humans live with.


AF: The Turing Test seems like a competitive sport, though some interpretations of the Turing Test have conditions which set the bar quite low. The competitive nature of how the Turing Test is staged seems to me to select for the cheapest and least sophisticated methods of fooling judges on a Turing Test panel.

VV: Yes.


AF: Should we be focusing on developing more complex and adaptive Turing style tests (more complex measurement criteria? more complex assessment)? What alternatives to a Turing Test competition (if any) would you suggest to motivate regular testing for machine intelligence?

VV: The answers to these questions may grow out of hard engineering necessity more than from the sport metaphor. Going forward, I imagine that different engineering requirements will call for various tests, but they may look more like classical benchmark tests.


Tracking Progress in Artificial Intelligence

AF: Why is tracking progress towards AI important?

VV: Up to a point it could be important for the sort of safety reasons Bostrom discusses in _Superintelligence_. Such tracking could also provide some guidance for machine/human/society teams that might have the power to guide events along safe paths.


AF: What do you see as the most useful mechanisms for tracking progress towards a) human equivalence in AI, b) a Technological Singularity?

VV: The approach to human equivalence might be tracked with a broad range of tests. Such would also apply to the Singularity, but for a soft takeoff, I imagine there would be a lot of economic effects that could be tracked. For example:
–  trends in employment of classic humans, augmented humans, and computer/human teams;
–  trends in what particular jobs still have good employment;
–  changes in the personal characteristics of the most successful CEOs.

Direct tests of automation performance (such as we’ve discussed above) are also important, but as we approach the Singularity, the center of gravity shifts from the programmers to the programs and how the programs are gaming the systems.


AF: If you had a TARDIS and could bring Alan Turing forward into the 21st century, would he be surprised at progress in AI? What kinds of progress do you think he would be most interested in?

VV: I don’t have any special knowledge of Turing, but my guess is he would be pleased — and he would want to _understand_ by becoming a super himself.


AF: If and when the Singularity becomes imminent – is it likely that the majority of people will be surprised?

VV: A hard takeoff would probably be a surprise to most people. I suspect that a soft takeoff would be widely recognized.



AF: What opportunities could we miss if we are not well prepared (This includes opportunities for risk mitigation)?

VV: Really, the risk mitigation is the serious issue. Other categories of missed opportunities will probably be quickly fixed by the improving tech.  For pure AI, some risk mitigation is the sort of thing MIRI is researching.

For pure AI, IA, and group minds, I think risk mitigation involves making use of the human-equivalent minds that already exist in great numbers (namely, the human race). If these teams and early enhancements recognize the issues, they can form a bridge across to the more powerful beings to come.


AF: You spoke about an AI Hard Takeoff as being potentially very bad – can you elaborate here?

VV: A hard takeoff is too fast for normal humans to react and accommodate to.  To me, a Hard Takeoff would be more like an explosion than like technological progress. Any failure in mitigation planning is suddenly beyond the ability of normal humans to fix.


AF: What stood out for you after reading Nick Bostrom’s book ‘Superintelligence – paths, dangers, strategies’?

VV: Yes. I think it’s an excellent discussion especially of the pure AI path to superintelligence. Even people who have no intense interest in these issues would find the first few chapters interesting, as they sketch out the problematic issues of pure AI superintelligence — including some points that may have been missed back in the twentieth century. The book then proceeds to a fascinating analysis of how to cope with these issues.

My only difference with the analysis presented is that while pure AI is likely the long term important issue, there could well be a period (especially in the case of a Soft Takeoff) where the IA and groupmind trajectories are crucial.


Vernor Vinge at Los Con 2012

* Hugo award-winning novels & novellas include: A Fire Upon the Deep (1992), A Deepness in the Sky (1999), Rainbows End (2006), Fast Times at Fairmont High (2002), The Cookie Monster (2004), and The Peace War (1984).

Also see video interview with Vernor Vinge on the Technological Singularity.

Simulating for Computational Biology – Arun Konagurthu

Arun Konagurthu is a Senior Lecturer at the Clayton School of Computer Science and Information Technology, Faculty of Information Technology, Monash University. Between 2011 and 2013, Arun was additionally a Larkins Fellow at the faculty.

Arun leads a small research group working mainly in computational biology and bioinformatics. His other research interests include data structures and algorithms, computational modeling and simulation, combinatorial optimization, and, since joining Monash in 2011, statistical learning using Minimum Message Length inference.

Points of discussion:
– What’s your overall research problem? If you solved it, how would things change?
– What is ‘stringology’ and how is it relevant to your research problem?
– Describe your use of simulation methods in bioinformatics. What problems do they overcome and how?
– Why do you prefer Bayesian statistics? What difference does it make?
– How do simulation and scoring work together? What kind of scores do you use?
– What’s been the impact of simulation on bioinformatics generally?
– What’s the future of sampling in data science? What’s coming around the corner?

#bayesian #artificialintelligence #datascience



Many thanks for watching!
Support me via Patreon
Please Subscribe to this YouTube Channel
Science, Technology & the Future

Arun Konagurthu Simulating for Computational Biology v1

Tim Josling – Progress in AI – Humanity+ @Melbourne 2012

Filmed at Humanity+ @Melbourne 2012. Abstract here.

The Surprising Rate of Progress in Artificial Intelligence Research

Artificial Intelligence is one of the foundations of Transhumanism, along with nanotechnology, biotechnology, and robotics. This talk surveys the rapidly accelerating progress in building machine intelligence, particularly over the last 10 years, and the prospects for the next one to three decades. We cover advances in hardware, such as the single-molecule transistor and the first computer with processing power comparable to the human brain, as well as the continuing exponential growth in processing power courtesy of Moore’s Law. Accessible descriptions of breakthroughs in software and algorithms, such as self-learning machines, reinforcement learning, Support Vector Machines, and hierarchical learning networks, illustrate how the “software bottleneck” is being overcome. The talk includes video footage of applications of Artificial Intelligence technology.

Tim Josling studied Law, Anthropology, Philosophy and Mathematics before switching to Computer Science at the dawn of the computer era. He worked on implementing some of the first transactional systems in Australia, later worked on the first ATM networks, was the chief architect for one of the first Internet banking applications in Australia, and designed an early message-switching (“middleware”) application in the USA. During his career he specialised in making large-scale applications reliable and fast, saving several major projects from being cancelled due to poor performance and excessive running costs. This led to an interest in the progress of computer hardware and in Moore’s Law, which states that the power of computers grows roughly 10-fold every 5 years. In his spare time he contributed to various open-source projects such as the GNU Compiler Collection. After attending the first Singularity Summit in Australia, he decided to retire so he could devote himself full-time to researching Artificial Intelligence, the Technological Singularity and Transhumanism. He is currently working on applying AI techniques to financial and investment applications.



Into the Wild Blue Yonder with Tim van Gelder

Into the Wild Blue Yonder – Tim van Gelder (who is speaking at the conference this year) – originally posted at H+ Magazine.
I recently did a series of interviews with Tim van Gelder on Intelligence Amplification, Artificial Intelligence, Argument Mapping, and Douglas Engelbart’s contributions to computing, user interface design and collective wisdom.
Below the video interview is the article Into the Deep Blue Yonder.

Tim van Gelder was a founder of Austhink Software, an Australian software development company, and is the Managing Director of Austhink Consulting. He was born in Australia, educated at the University of Melbourne (BA, 1984), the University of Pittsburgh (PhD, 1989), and held academic positions at Indiana University and the Australian National University before returning to Melbourne as an Australian Research Council QEII Research Fellow. In 1998, he transitioned to part-time academic work allowing him to pursue private training and consulting, and in 2005 began working full-time at Austhink Software. In 2009 he transitioned to Managing Director of Austhink Consulting.

Here is one section of the series of interviews:

Into the Deep Blue Yonder

The original article appeared in the late 1990s, but it reads very well and reflects much of Tim van Gelder’s current thinking on AI. A slightly revised version appeared in Quadrant, Xmas 1997. The video interview above covers some similar topics to the article below.



Thousands of times every day, humans pit their wits against The Machine. On almost every occasion, they lose. Arcade games, bridge programs, pocket chess machines: the phenomenon is so familiar we no longer notice it. We have grown quite accustomed to being outclassed by electronic gadgets in many activities we find intellectually demanding.

In New York earlier this year, a 34-year-old Azerbaijani man sat down to a six-game match against a chess machine. This event, however, galvanised world attention. Chess enthusiasts followed every move by satellite TV or Internet. Newspaper headlines announced the score to millions more. Pundits the world over pontificated on the significance of the occasion.

Why the interest in this match? The Azerbaijani was Garry Kasparov, the reigning world chess champion, widely regarded as the greatest player in the history of the game. Kasparov is so good that very few players in the world today can even give him a serious game. To keep his form up, he likes to take on entire national teams in “clock simultaneous” matches. In these matches, every player, including Kasparov, has at most 2.5 hours of “thinking time.”

On the other side of the board was the latest version of Deep Blue, IBM’s chess-playing computer. Deep Blue is the most powerful chess-playing device ever constructed. The match was billed as the ultimate confrontation of Mankind against The Machine. At stake was more than just Kasparov’s personal pride or IBM’s reputation in computer technology. At stake was more than just the title of best chess player in the known universe. At stake, apparently, was humanity’s self-image as uniquely or supremely intelligent, and hence as entitled to a central or at least special place in the cosmos. At stake also was humanity’s place on the ladder of power and authority. Machines with superhuman intelligence might eventually be able to enslave humans in relentless and efficient pursuit of their alien designs. We remain safe only as long as there are at least some white knights like Kasparov, humans still smarter than any machine.

Of course, the score is now a matter of historical record. Deep Blue won the match narrowly, 3.5 points to 2.5. Fortunately, humanity’s spin-doctors had already prepared a face-saving interpretation of the entire episode. Deep Blue, they countered, is a mechanistic idiot savant. Kasparov can shrug off his defeat, for the match was no more an interesting contest than pitting a pole-vaulter against a helicopter. Humanity can also breathe a collective sigh of relief and reassurance: we are still the smartest beings in the universe; we can still respect our unique intellectual capacities; we are not about to be subjugated by a new generation of ruthless machines.

These, then, are the two main interpretations of the Kasparov-Deep Blue clash. On one hand there are the alarmists, who see Deep Blue as the vanguard of an approaching army of superhuman intellects. On the other hand are the deflationists, who see Deep Blue as an overgrown and overhyped cash register. Both interpretations read the confrontation in the context of a world-historical competition between Mankind and The Machine. Alarmists see the match as a pivotal moment, one future historians will designate as the occasion upon which both pride of place and the balance of power were ceded to The Machine. Deflationists insist that The Machine is still stupid and Mankind is still safe.

In fact, both these interpretations are mistaken, or rather, misguided. Any interpretation of what may well be an epochal event is built on a foundation of factual and philosophical assumptions; if these are rotten, the edifice is inherently unstable. The situation is even worse when key structural members are fears and fantasies rather than logical implications.

The real significance of the Kasparov defeat is at once more strange and more comforting than either of these simple stories. We are not being superseded by The Machine, but not because The Machine is still a long way behind. Rather, the very distinction between Mankind and The Machine is under pressure. Long before The Machine could be regarded as having overwhelmed us, it will have become us. Ultimately, the loser in this confrontation is not Mankind or The Machine; it is our conception of ourselves as essentially homo sapiens.



Early in Stanley Kubrick’s famous movie 2001: A Space Odyssey, the astronaut Dave plays and loses a game of chess against HAL, the spaceship’s intelligent onboard computer. This event, more than the fact that it can control the ship or converse in normal English, demonstrates HAL’s intellectual superiority. As the plot develops it becomes apparent that HAL is out of control, to the point where it has been killing off human astronauts. Its superior intelligence now makes it a highly dangerous opponent.

HAL is a fictional embodiment of the alarmist interpretation of the Kasparov-Deep Blue confrontation. HAL instantiates what alarmists fear Deep Blue might become: a superhuman, general purpose intelligence, self-interested and pitiless. Standing behind this nightmarish vision is a collection of traditional philosophical ideas. Intelligence is regarded as the operation and outcome of Reason, the ability to make inferences in accordance with the principles of Logic. Reason is a specifically human trait, in the sense that members of Homo Sapiens are uniquely or at least supremely rational. It is Reason, more than anything else, which grants humans a special place in the cosmos; it gives them not only the ability, but also the right and duty to organise the world to their own advantage. Chess is the definitive test of intelligence; the winner is always the one with the greatest ability to apply reason in pursuit of its goals. The best chess player is the most intelligent, and therefore the most rational, powerful and privileged, of all beings.

The letters “HAL” immediately precede the letters “IBM” in the alphabet. Some people believe this is no accident; Kubrick chose those letters in order to highlight the danger IBM and corporations like it pose to humanity. This, however, is a myth. “HAL” is derived from “Heuristically programmed ALgorithmic computer.” When Kubrick, who had assistance from IBM in making the movie, found out about the coincidence, he wanted to change the name and was only prevented from doing so by production costs.

Just as IBM would not wish to be linked with the homicidal HAL, so it has tried to dispel the alarmist interpretation of Deep Blue’s victory. If Mankind had just been humiliated by the Machine, IBM would have to bear responsibility. Being cast as Dr Frankenstein in the public imagination would hardly benefit their corporate image. For this reason IBM is at the forefront of deflationist counter-reactions to the Deep Blue victory. Despite having invested millions of dollars and dozens of expert-years in the project, they are quick to advertise Deep Blue’s limitations. Kasparov, they said, plays with insight, intuition, finesse, imagination. Deep Blue just cranks out billions of possibilities. According to the IBM counter-hype, the real winners in the Kasparov-Deep Blue confrontation are people like you and me. The RS/6000 SP computer driving Deep Blue will be used in traffic control systems, internet applications, and a host of other mundane conveniences.

Chess has usually been regarded as the most intellectually challenging game known to man. It would be surprising indeed if a machine could beat the greatest player in history, and yet be fundamentally stupid. That, one is tempted to say, does not compute. That, however, is the position IBM is taking, and one that was echoed recently by none other than Bill Gates.

Two main lines of thought are used to underpin the interpretation of Deep Blue as harmless idiot-savant. The first is the idea that Deep Blue’s move selection is carried out in an utterly mindless fashion. Whereas Kasparov actually thinks about his options, Deep Blue follows pre-ordained rules specifying vast quantities of simple calculations, none of which require the least bit of understanding. This difference is manifested in the number of possible move sequences the players consider before making their moves. Kasparov, like all human chess players, considers only a few dozen or at most a few hundred sequences. Deep Blue considers literally billions of alternatives in a few seconds.

But if good chess is a matter of selecting the best move, and Deep Blue can examine so many more possibilities, how is it that Kasparov is even in the running? According to this line of thought, intelligence is precisely what makes the difference. Intelligence is the magic ingredient which enables Kasparov to recognize the overall board situation, to zero in on relevant features, to attend only to the most plausible lines of play, to look far ahead in the game, to be creative and daring in his play, and to learn from his opponent’s responses. With none of these abilities, Deep Blue is condemned to witless search of all possibilities, no matter how promising. The fact that Deep Blue can beat Kasparov just shows that brute force can sometimes achieve what would otherwise require real thought.

The second line of support considers Deep Blue’s performance in domains other than chess. This argument can be traced all the way back to René Descartes. In his Discourse on Method, Descartes considered how one might distinguish a real person from a sophisticated automaton imitating a person. He proposed two tests. The first is that one should attempt to engage the candidate in conversation. A machine, he argued, would never be able to “arrange words differently to reply to the sense of all that is said in its presence, as even the most moronic man can do.”

The second test is to explore the range of skills the putative person exhibits. Machines can do certain human-like things exceedingly well; witness the animatronic marvels at a place like Disneyland. However, they can only do those things because they were specifically designed and constructed for the job. Their design precludes them from doing anything else. For example, we now have machines which are better than humans at shearing sheep, but don’t expect them to knit a woolly jumper or even make a cup of tea. Humans, by contrast, can do a very wide range of things at least tolerably well. That’s because they don’t rely on dedicated machinery; rather, they control general-purpose hardware (hands etc.) by means of thought processes.

Descartes believed that the “universal instrument” of Reason is necessary in order to pass both these tests. It is because we can think about the meanings of words that we can hold conversations, and it is because we can think about our actions that we can do so many different kinds of things.

Deep Blue, of course, immediately fails Descartes’ tests. It cannot even play checkers, let alone walk the dog or hold a conversation. Deflationists conclude that Deep Blue has exactly zero genuine intelligence, even though it plays the best chess in the world. Indeed, the two lines of thought come together: it is because Deep Blue plays chess without really thinking that it can do nothing other than checkmate its opponents.



These deflationary arguments certainly undermine the simple alarmist view that Deep Blue is the first of a new generation of superhuman intellects poised to enslave the human race. They do not, however, establish that Deep Blue is a witless moron. More careful consideration of the nature of chess, and the machines which play it, supports the commonsense view that Deep Blue does indeed have at least some measure of intelligence.

Chess is what is known as a formal system. Every board position and every move is well-defined and unambiguous, as are the starting and finishing positions. Further, chess is completely self-contained; nothing outside the board has any relevance to the game. Playing good chess means making a sequence of moves ending in checkmate for the opponent. The hard part, of course, is picking the right move at any given time. The typical number of moves available from any given position is about 35. Whether a move is a good one depends on what the next move of the opponent might be, your response, and so forth. A good player can tell which of these possible sequences of moves and countermoves is advantageous, and hence which of the 35 moves to select.

All a chess machine needs to do, then, is to examine all the available move-countermove sequences, and select one ending in checkmate for the opponent. Unfortunately, this simple strategy is completely out of the question (at least, for any technology currently imaginable). The fundamental problem is that of combinatorial explosion. It is illustrated by the following puzzle. Imagine folding a normal sheet of paper in half. The remaining “pile” is twice as thick as the original sheet. Continue until you have folded it 100 times. How thick is the pile now? Most people estimate a few yards. In fact, the pile would stretch eight hundred thousand billion times the distance from the earth to the sun (give or take a few trillion miles).

Combinatorial explosion affects chess just as dramatically. The number of possible move sequences increases exponentially with each “ply” (move), and before long exceeds such familiar measures of enormity as the number of particles in the universe or the number of seconds since the beginning of time. This prevents any conceivable machine from playing good chess simply by mindlessly searching the branching tree of move sequences.
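The arithmetic behind both examples is easy to check. A minimal sketch (the ~0.1 mm sheet thickness and the Earth-Sun distance are assumed constants; the ~35-move branching factor is the essay's own figure):

```python
# Back-of-envelope illustration of combinatorial explosion.
# Assumed constants: paper ~0.1 mm thick, Earth-Sun distance ~1.496e11 m.

PAPER_THICKNESS_M = 0.1e-3   # one sheet of paper, in metres
EARTH_SUN_M = 1.496e11       # one astronomical unit, in metres

def folded_thickness(folds: int) -> float:
    """Thickness of a sheet folded in half `folds` times (doubles each fold)."""
    return PAPER_THICKNESS_M * 2 ** folds

def move_sequences(plies: int, branching: int = 35) -> int:
    """Rough count of chess move sequences to a given depth."""
    return branching ** plies

pile = folded_thickness(100)
print(f"100 folds: {pile / EARTH_SUN_M:.2e} Earth-Sun distances")  # ~8.5e14
print(f"10 plies:  {move_sequences(10):.2e} move sequences")       # ~2.8e15
```

Run as written, the 100-fold pile comes out at roughly eight hundred thousand billion Earth-Sun distances, matching the puzzle's answer, and a mere ten plies of chess already yields quadrillions of sequences.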

The real secret to good chess is not being able to consider vast quantities of move sequences (though that helps). Rather, the secret is being able to ignore the overwhelming majority of sequences, and focus attention on those relatively few which have some real promise. But how do you tell in advance which sequences to ignore? How do you prune from the search tree branches you haven’t even looked at?

The answer, basically, is that you use what computer scientists call “heuristics”: rules of thumb providing reliable, though not infallible, guides. For example, a handy rule in finding checkmates is to examine first those moves that permit the opponent the fewest replies. Heuristics are distillations of considerable experience with the domain. At one level, a computer must always be programmed to “blindly” follow algorithms telling it exactly what to do and how to do it. At another level, however, those algorithms can embody heuristics guiding the computer in producing sophisticated, even “thoughtful”, behaviour.
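As a toy illustration of this kind of pruning (a sketch, not Deep Blue's actual algorithm, with hypothetical leaf scores), here is plain minimax next to alpha-beta pruning on the same small game tree. Examining the most promising branch first, which is exactly what move-ordering heuristics buy you, lets alpha-beta skip lines the opponent would never permit:

```python
# Minimax vs. alpha-beta pruning on a toy depth-2 game tree.
# Integers are hypothetical position evaluations; lists are internal nodes.

def minimax(node, maximizing, counter):
    counter[0] += 1                     # count every node examined
    if isinstance(node, int):           # leaf: heuristic evaluation
        return node
    values = [minimax(c, not maximizing, counter) for c in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, alpha, beta, maximizing, counter):
    counter[0] += 1
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:           # opponent will never allow this line
                break                   # prune the remaining branches
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, counter))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

tree = [[6, 9], [3, 5], [1, 2]]         # best branch first: max pruning
full, pruned = [0], [0]
best = minimax(tree, True, full)
assert best == alphabeta(tree, float("-inf"), float("inf"), True, pruned)
print(best, full[0], pruned[0])         # → 6 10 8: same answer, fewer nodes
```

The saving looks modest on six leaves, but it compounds exponentially with depth, which is why knowledge-guided move ordering, rather than raw speed alone, is what makes deep search feasible.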

Deep Blue, like all chess computers, operates by means of heuristically-guided search. Its power results from two factors. On one hand, it is an enormously fast search engine. Its 256 specially-designed processors can consider almost a quarter of a billion moves every second; in a game it will examine trillions of possibilities before making a move. On the other hand, and even more importantly, its software embodies a vast amount of real chess knowledge encoded in the form of heuristics. The team of experts who spent years refining Deep Blue’s understanding of chess included an international grandmaster. Almost every match Kasparov has played in the last twenty years has been recorded; Deep Blue is intimately familiar with Kasparov’s game.

Therefore, the image of Deep Blue as a prodigiously powerful but essentially stupid “number cruncher” is seriously deficient. Deep Blue embodies a great deal of human-derived chess knowledge, and puts that knowledge to good use in choosing intelligently. Indeed, Deep Blue has to be that way; the problem of combinatorial explosion prevents any simple brute-force machine from playing good chess, at least for the foreseeable future.

An interesting consequence is that, as computers have reached the very top levels, their style of play has become more “human.” For example, “trappy” moves, ones that gently coax an opponent into an apparent position of strength but hold a sting many plies down the road, were once a human specialty. These days, with real chess knowledge guiding their search patterns, computers not only avoid traps, they set them themselves. Kasparov himself is no longer able to say, reliably, whether an opponent is human or machine just by looking at the moves. (HAL, by the way, played chess that was quite “human” in style. This was no coincidence; the game in the movie was transcribed from an obscure match played in Hamburg in 1913.)

Deep Blue, then, does have intelligence. It plays a mean game of chess, and does so by thinking about its moves. There are still, to be sure, some significant differences between Kasparov’s thought processes and those of Deep Blue. Both, however, are thinking, and the outcome is the same.



If this is right, Descartes’ tests cannot be regarded as decisive. There can be genuine intelligence even in the absence of conversation or a wide range of skills. However, Descartes was clearly onto something important. If Deep Blue is so smart, why is it restricted to chess? Why can’t it talk about the football?

The deep reason-one of the most important discoveries of cognitive science-is that there are in fact many kinds of intelligence: diverse domains in which intelligence can be achieved, and various ways to achieve it. Some theorists have distinguished as many as seven different categories of intelligence, but the most important distinction for current purposes is that between what we can call formal intelligence, on one hand, and common sense on the other.

Formal intelligence is that required for domains which, like chess, are formal systems. Such domains might be hugely complex, but they are fundamentally well-defined and self-contained. Common sense is intelligence in domains not satisfying these conditions. Here there is no simple way to specify what the options are, and no way to draw boundaries around what might be relevant. Conversing is the classic example. What do you say when someone says “How are you doing?” Well, that depends: on who said it, in what tone of voice, where they were, what time it was. Try writing a complete set of rules for just the second line of a perfectly ordinary conversation and you’ll find out just how much common sense ordinary people actually exhibit.

The difference between formal intelligence and common sense is illustrated by the contrast between formal logic and its informal counterpart. Formal logic is manipulation of symbolic structures in accordance with strict rules. At elementary levels it is a dull, even “mindless” activity (though still a difficult skill for many people to pick up); at advanced levels, it is quite creative. It has been relatively easy to program computers to perform in this domain, though the best logicians are currently still humans.

Informal logic, on the other hand, is a matter of determining when somebody is justified in making some assertion. Would further reductions in tariff barriers lead to further unemployment? A great deal of evidence can be brought to bear, but there are no algorithmic procedures for determining whether the conclusion follows. For centuries, philosophers harboured the misconception that formal and informal logic are, deep down, the same thing-that all informal reasoning is just a complicated version of predicate calculus. More recently it has become apparent that informal logic requires a great deal of “nous,” and there is no easy way to translate that into rule-governed symbol manipulation.

Formal intelligence and common sense are both varieties of intelligence; they are both a matter of figuring out what you should do to achieve your goals within a certain domain. However, they are very different, and they do not easily adapt to each other’s roles. On one hand, ordinary people have buckets of common sense (well, most of them, most of the time), but they are inept at chess, mathematics, formal logic, etc. On the other hand, formal intelligence doesn’t automatically provide common sense. There is, of course, the stereotype of the absent-minded physics professor. More seriously, Deep Blue can’t do the weekly shopping, and there is no simple way to adapt its prodigious formal intelligence to that apparently elementary task.

Traditional artificial intelligence, the science and engineering of smart computers, has grappled with both kinds of intelligence. Its successes in formal domains have been matched by a notable lack of success at reproducing common sense. The standard approach has been to attempt to translate the informal domain into an approximately commensurate formal system. Unfortunately, this enterprise is at least extraordinarily difficult, and perhaps impossible. There are some research projects around the world grappling with the problem, but don’t hold your breath.

From this perspective, Deep Blue’s victory does signify something important about artificial intelligence: namely that, as one expert put it, the easy (formal) part is now almost over, and the real work is just beginning. Computers are reaching superiority in a kind of intelligence which is rather difficult for humans to achieve. However, they are barely at first base with regard to the kind of intelligence humans find entirely natural-negotiating their way around the everyday world.



Thus far, I have argued that neither the simple alarmist interpretation nor the simple deflationist reaction can be sustained. Deep Blue is not a superhuman intellect, but neither is it just a cash-register on steroids. It is an enormously sophisticated machine exhibiting a significant measure of intelligence in one formal domain, and none in all others. Until computer scientists can solve the far more difficult problem of common sense intelligence, machines will remain our intellectual inferiors and subject to our dominion.

Is this likely to happen, and if so, when? Some philosophers have claimed it will always be impossible for digital computers to exhibit any significant degree of common sense. Hubert Dreyfus of the University of California at Berkeley is the most important of this group. He has provided powerful arguments that common sense depends upon vast quantities of everyday knowledge and know-how which can never be fully articulated in a form useable by digital computers.

Such predictions, however, are inherently risky, for they depend on our current levels of understanding of the nature of the problem and the limits of technology. Meanwhile, many researchers are tackling various aspects of the problem and making what counts as, at the very least, piecemeal progress on the fringes. The most famous of these efforts is the “CYC” project pioneered by Doug Lenat. The goal here is to “upload” the entirety of human commonsense knowledge into a vast electronic encyclopedia ready for use by other programs. The CYC people claim to already have commercial applications up and running.

My own opinion is that researchers in artificial intelligence will, most likely, eventually succeed in solving the problem of commonsense intelligence. It will not be anytime soon. Cracking the chess nut took about four decades longer than originally predicted. In the meantime, we’ve come to understand that chess was the easy problem. Common sense may well take centuries. Alan Turing, the father of artificial intelligence, predicted in 1950 that by the end of the century, that is, by around now, we would have machines able to converse at pretty convincing levels. No such luck. You can, if you like, interact over the internet with the best “conversation” machines in the world today. The experience is sure to impress upon you the difficulty of programming a computer with common sense. Nevertheless, progress is being made. The goal, genuine intelligence on tap, is so valuable that vast resources and ingenuity will be thrown at it over the next few hundred years. My money, for what it is worth, is on the side of the computer engineers.

In the case of chess, truly excellent levels of play were only achieved once scientists had developed sufficient understanding of how humans manage to play the game so well, and figured out how to transfer some of that understanding into the computer’s design. Deep Blue’s intelligence was thus largely a matter of human intelligence, abstracted out and reimplemented in digital hardware. The same will be true in the case of common sense. Constructing computers which hold conversations will only be possible once we understand much better what it is that an ordinary person knows, and how that knowledge is organised, accessed and updated. Once these problems in cognitive science have been solved, the computer scientists will face the challenge of building electronic instantiations of the same principles.

In other words, artificial intelligence succeeds in part through mimicry. It produces silicon simulacra of the basic principles underlying human intelligence. This is because the fundamental requirements of intelligent performance are universal; what varies are their implementations in different kinds of hardware. Evolution developed in humans a neurobiological implementation of the solution to the problem of common sense intelligence. Artificial intelligence will develop an alternative implementation of what is, at the relevant abstract level, the same solution.



Suppose this is correct. Suppose that in fifty years or so computer scientists have succeeded in producing, say, an automatic personal banker. You dial the bank on your videophone and are connected to a virtual “talking head,” a kind of supersmooth version of Max Headroom. You interact with this artificial persona just as you would with an ordinary human being. The conversation is quite intimate; your banker has a name, a personality, and knows quite a bit about you from the bank’s files and your previous interactions. As long as you don’t stray too far from the world of deposits, balances, and mortgages, the illusion that you are interacting with a flesh-and-blood human will be overwhelming.

Now for the critical question: is this personal banker human or machine? More generally, will artificial intelligence be producing artificial humans, or just machine intelligence? At one extreme there is the hard-line view that nothing can really be human unless it is Homo sapiens, i.e., shares our own evolutionary ancestry and our biological incarnation. According to this position, no matter how sophisticated these systems become, they will always be mere machines, imitating but never instantiating human nature. At the other extreme there is the ultra-liberal view that membership of Homo sapiens is at best an accident of history, and has no essential connection to one’s social and ethical status as human. It took many centuries, but in the West at least we finally arrived at the enlightened view that the borders of human kind have nothing to do with those of gender and skin colour. Some people now argue that we should extend these borders even further to include dolphins and other putative intelligentsia. The point is that recognition as “one of us,” with attendant rights and responsibilities, should depend not on arbitrary details of one’s history or embodiment but on one’s capacity to participate in human forms of life. Taken to its logical limit, this view would extend the privilege of human status even to programmed computers.

The philosophical choice between hard-line biologism and a more catholic liberalism is not an easy one, and I don’t intend to adjudicate the matter here. The point of interest is that artificially intelligent machines participating in human forms of life are the kind of case which puts pressure on the seemingly simple distinction between Mankind and The Machine. For most of the industrial age, the distinction was obvious enough: people were flesh and blood, born of woman, rational and emotional, social and spiritual. Machines were metal and electricity, born of the workshop, cold and insensitive. The utterly alien character of traditional machines made it easy to see the relationship between Man and Machine as one of opposition and perhaps competition. This attitude of “them against us” is still with us even in the age of information technology. Thus the Kasparov-Deep Blue match is cast as a critical episode in a kind of cosmic struggle to the death between humanity and the emerging machine.

By the time computers have been programmed with common sense, the contrast between Mankind and Machine will have become blurred, if not entirely overthrown. Computers which match our everyday forms of intelligence, and achieve this precisely because they recapitulate the basic principles underlying our own intelligent behaviour, will have become very much like us. It will not be easy, either psychologically or philosophically, to draw a rigid distinction between people and PCs. Of course, it will always be possible to doggedly maintain that human nature is essentially a matter of lineage or embodiment, and to distribute rights and privileges accordingly. As philosopher Robert Brandom remarked, “‘We’ is said in many ways.” There is an unavoidable element of arbitrariness in deciding that “we” will stop at the boundary of our species. Many will choose to draw the boundaries somewhat differently, and in the process revise the very concept of humanity.

I am suggesting that machines will never outperform humans in an intelligence contest. By the time any such confrontation could conceivably come about, the conceptual contrast between human and machine, upon which the apparent interest of the contest depends, will have been drastically revised. Computers with common sense will not be humans, in the ordinary sense of today. Neither, however, will they be just machines, in the ordinary sense of today. They will be a wholly new entrant onto the ontological stage, displacing forever the current constellation of concepts in terms of which we contemplate our place in the world. The irresistible onwards march of information technology will not produce machines superior to humans. Rather, it will overhaul our understanding of what we are and what machines are. It will replace a binary opposition with a rich spectrum of manifestations of intelligence, and a correspondingly rich range of ways of determining who or what counts as one of “us.”

Deep Blue’s victory over Kasparov was the first major public triumph of artificial, programmed intelligence over evolved biological intelligence. It was indeed an event of world-historical significance. Not, as the alarmist fears, because it signifies the arrival of intelligent machines as potential competitors to humanity. Rather, it is significant because it is the first major milestone in a long process of transformation of human self-understanding, and hence of human being. If we see history in Hegelian terms, as a series of stages in the evolution of the spirit or self-consciousness, Deep Blue’s victory lies at the cusp of a new era. Our own mastery of technology, and our level of scientific self-understanding, is reaching the stage where we can recreate aspects of ourselves in non-biological form, and in the process dramatically transform our understanding of what we essentially are. As Kasparov himself put it:
