
Review of Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari – Steve Fuller

My sociology of knowledge students read Yuval Harari’s bestselling first book, Sapiens: A Brief History of Humankind, to think about the right frame of reference for understanding the overall trajectory of the human condition. Homo Deus follows the example of Sapiens, using contemporary events to launch into what nowadays is called ‘big history’ but has also been called ‘deep history’ and ‘long history’. Whatever you call it, the orientation sees the human condition as subject to multiple overlapping rhythms of change which generate the sorts of ‘events’ that are the stuff of history lessons. But Harari’s history is nothing like the version you half remember from school.

In school, historical events were explained in terms more or less recognizable to the agents involved. In contrast, Harari reaches for accounts that scientifically update the idea of ‘perennial philosophy’. Aldous Huxley popularized this phrase in his quest for common patterns of thought in the great world religions, which could be leveraged as a global ethic in the aftermath of the Second World War. Harari similarly leverages bits of genetics, ecology, neuroscience and cognitive science to advance a broadly evolutionary narrative. But unlike Darwin’s version, Harari’s points towards the incipient apotheosis of our species; hence the book’s title.

This invariably means that events are treated as symptoms, if not omens, of the shape of things to come. Harari’s central thesis is that whereas in the past we cowered in the face of impersonal natural forces beyond our control, nowadays our biggest enemy is the one that faces us in the mirror – and it may or may not be within our control. Thus, the sort of deity into which we are evolving is one whose superhuman powers may well result in self-destruction. Harari’s attitude towards this prospect is one of slightly awestruck bemusement.

Here Harari equivocates where his predecessors dared to distinguish. Writing with the bracing clarity afforded by the Existentialist horizons of the Cold War, cybernetics founder Norbert Wiener declared that humanity’s survival depends on knowing whether what we don’t know is actually trying to hurt us. If so, then any apparent advance in knowledge will always be illusory. As for Harari, he does not seem to see humanity in some never-ending diabolical chess match against an implacable foe, as in The Seventh Seal. Instead he takes refuge in the so-called law of unintended consequences. So while the shape of our ignorance does indeed shift as our knowledge advances, it does so in ways that keep Harari at a comfortable distance from passing judgement on our long-term prognosis.

This semi-detachment makes Homo Deus a suave but perhaps not deep read of the human condition. Consider his choice of religious precedents to illustrate that we may be approaching divinity, a thesis with which I am broadly sympathetic. Instead of the Abrahamic God, Harari tends towards the ancient Greek and Hindu deities, who enjoy both superhuman powers and all too human foibles. The implication is that to enhance the one is by no means to diminish the other. If anything, it may simply make the overall result worse than had both our intellects and our passions been weaker. Such an observation, a familiar pretext for comedy, wears well with those who are inclined to read a book like this only once.

One figure who is conspicuous by his absence from Harari’s theology is Faust, the legendary rogue Christian scholar who epitomized the version of Homo Deus at play a hundred years ago in Oswald Spengler’s The Decline of the West. What distinguishes Faustian failings from those of the Greek and Hindu deities is that Faust’s failings result from his being neither as clever nor as loving as he thought. The theology at work is transcendental, perhaps even Platonic.

In such a world, Harari’s ironic thesis that future humans might possess virtually perfect intellects yet also retain quite undisciplined appetites is a non-starter. If anything, Faust’s undisciplined appetites point to a fundamental intellectual deficiency that prevents him from exercising a ‘rational will’, which is the mark of a truly supreme being. Faust’s sense of his own superiority simply leads him down a path of ever more frustrated and destructive desire. Only the one true God can put him out of his misery in the end.

In contrast, if there is ‘one true God’ in Harari’s theology, it goes by the name of ‘Efficiency’ and its religion is called ‘Dataism’. Efficiency is familiar as the dimension along which technological progress is made. It amounts to discovering how to do more with less. To recall Marshall McLuhan, the ‘less’ is the ‘medium’ and the ‘more’ is the ‘message’. However, the metaphysics of efficiency matters. Are we talking about spending less money, less time and/or less energy?

It is telling that the sort of efficiency which most animates Harari’s account is the conversion of brain power to computer power. To be sure, computers can outperform humans on an increasing range of specialised tasks. Moreover, computers are getting better at integrating the operations of other technologies, each of which also typically replaces one or more human functions. The result is the so-called Internet of Things. But does this mean that the brain is on the verge of becoming redundant?

Those who say yes, most notably the ‘Singularitarians’ whose spiritual home is Silicon Valley, want to translate the brain’s software into a silicon base that will enable it to survive and expand indefinitely in a cosmic Internet of Things. Let’s suppose that such a translation becomes feasible. The energy requirements of such scaled-up silicon platforms might still be prohibitive. For all its liabilities and mysteries, the brain remains the most energy-efficient medium for encoding and executing intelligence. Indeed, forward-facing ecologists might consider investing in a high-tech agronomy dedicated to cultivating neurons to function as organic computers – ‘Stem Cell 2.0’, if you will.
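To put rough numbers on that energy point, here is a back-of-envelope comparison. Only the brain’s ~20 W power draw is well established; the brain-equivalent compute figure is a contested order-of-magnitude assumption, and the machine figures are merely typical of a recent exascale supercomputer:

```python
# Back-of-envelope: energy efficiency of biological vs. silicon computation.
# All figures are order-of-magnitude assumptions for illustration only.

BRAIN_POWER_W = 20            # human brain: roughly 20 watts (well established)
BRAIN_EQUIV_FLOPS = 1e16      # one contested estimate of brain-equivalent compute
MACHINE_POWER_W = 20e6        # ~20 MW, typical of an exascale supercomputer
MACHINE_FLOPS = 1e18          # ~1 exaFLOP/s

brain_eff = BRAIN_EQUIV_FLOPS / BRAIN_POWER_W     # FLOP/s per watt
machine_eff = MACHINE_FLOPS / MACHINE_POWER_W     # FLOP/s per watt

print(f"brain:   {brain_eff:.1e} FLOP/s per watt")
print(f"machine: {machine_eff:.1e} FLOP/s per watt")
print(f"ratio:   ~{brain_eff / machine_eff:,.0f}x in the brain's favour")
```

On these assumptions the brain comes out some four orders of magnitude more energy-efficient, which is the force of the ‘prohibitive energy requirements’ worry – and of the half-serious ‘Stem Cell 2.0’ suggestion.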

However, Harari does not see this possible future because he remains captive to Silicon Valley’s version of determinism, which prescribes a migration from carbon to silicon for anything worth preserving indefinitely. It is against this backdrop that he flirts with the idea that a computer-based ‘superintelligence’ might eventually find humans surplus to requirements in a rationally organized world. Like other Singularitarians, Harari approaches the matter in the style of a 1950s B-movie fan who sees the normative universe divided between ‘us’ (the humans) and ‘them’ (the non-humans).

Steve Fuller

The bravest face to put on this intuition is that computers will transition to superintelligence so soon – ‘exponentially’, as the faithful say – that ‘us vs. them’ becomes an operative organizing principle. More likely, and messier for Harari, is that this process will be dragged out. And during that time Homo sapiens will divide between those who identify with their emerging machine overlords, who are entitled to human-like rights, and those who cling to the new acceptable face of racism, a ‘carbonist’ ideology which would privilege organic life above any silicon-based translations or hybridizations. Maybe Harari will live long enough to write a sequel to Homo Deus to explain how this battle might pan out.

NOTE ON PUBLICATION: Homo Deus is published in September 2016 by Harvill Secker, an imprint of Penguin Random House. Fuller would like to thank The Literary Review for originally commissioning this review. It will appear in a subsequent edition of the magazine and is published here with permission.

Video Interview with Steve Fuller covering the Homo Deus book

Steve Fuller discusses the new book Homo Deus: how it relates to the general transhumanist philosophy and movement, factors in the success of these ideas going mainstream, Yuval Noah Harari’s writing style, why academia (especially sociology) has been biased towards steering away from ideas that are less well established in history (important because successfully navigating the future will require a lot of new ideas), and existential risk; we also contrast a posthuman future with a future dominated by an AI superintelligence.

Yuval Harari’s books

– ‘Homo Deus: A Brief History of Tomorrow’: https://www.amazon.com/Homo-Deus-Brief-History-Tomorrow-ebook/dp/B019CGXTP0/

– ‘Sapiens: A Brief History of Humankind’: https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095/

Discussion on the Coursera course ‘A Brief History of Humankind’ (which I took a few years ago): https://www.coursetalk.com/providers/coursera/courses/a-brief-history-of-humankind

The long-term future of AI (and what we can do about it): Daniel Dewey at TEDxVienna

This has been one of my favourite short talks on AI impacts – simple, clear, and straight to the point. Recommended as an introduction to the ideas referred to in the title.

I couldn’t find the audio of this talk at TED – it has been added to archive.org:

 

Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Daniel worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute.

http://www.tedxvienna.at/

 

AGI Progress & Impediments – Progress in Artificial Intelligence Panel

Panelists: Ben Goertzel, David Chalmers, Steve Omohundro, James Newton-Thomas – held at the Singularity Summit Australia in 2011

Panelists discuss approaches to AGI, progress and impediments now and in the future.
Ben Goertzel:
Brain emulation: there is a broad-level roadmap for simulation, but the bottleneck is a lack of imaging technology – we don’t know what level of precision we need to reverse-engineer biological intelligence (Ed Boyden is working towards optimal brain imaging).
Not by brain emulation (engineering/computer science/cognitive science): here the bottleneck is funding. People in the field believe they know how to do it; to prove this, they need to integrate their architectures, which looks like a big project. It takes a lot of money, but not as much as something like Microsoft Word.

David Chalmers (time 03:42):
We don’t know which of the two approaches will succeed, though what form the Singularity takes will likely depend on the approach used to build AGI – and we don’t understand the theory yet. Most don’t think we will have a perfect molecular scanner that can scan the brain and its chemical constituents. Twenty-five years ago David Chalmers worked in Douglas Hofstadter’s AI lab, but his expertise in AI is now out of date. Anyone trying to get to human-level AI by brute force or through cognitive psychology knows that cognitive science is not in very good shape. A third approach is a hybrid of rough brain augmentation (through technology we are already using, like iPads and computers), technological extension, and uploading. If brain augmentation through technology and uploading are the first step in a Singularity, then humans are included in the equation along with humanity’s values, which may help shape a Singularity with those values.

Steve Omohundro (time 08:08):
Early in the history of AI there was a distinction between the Neats and the Scruffies. John McCarthy (Stanford AI Lab) believed in mathematically precise, logical representations – this shaped a lot of what Steve thought about how programming should be done. Marvin Minsky (MIT AI Lab) believed in exploring neural nets and self-organising systems – throwing things together to see how they self-organise into intelligence. Both approaches are needed: the logical, mathematically precise, neat approach, and the probabilistic, self-organising, fuzzy, learning approach of the Scruffies. They have to come together. Theorem proving without any explorative aspect probably won’t succeed, and purely neural-net-based simulations can’t represent semantics well; we need to combine systems with full semantics with systems that have the ability to adapt to complex environments.

James Newton-Thomas (time 09:57):
James has been playing with neural nets and has been disappointed with them; he thinks that augmentation is the way forward. The AI problem is going to be easier to solve if we are smarter when we solve it. Conferences such as this help infuse us with a collective empowerment of the individuals. There is an impediment, though – we are already being dehumanised by our iPads: the reason we are having a conversation with others is a fact about our being part of a group, not just about the information that could be looked up via an iPad. We need to be careful in our approach so that we are able to maintain our humanity whilst gaining the advantages of augmentation.

General Discussion (time 12:05):
David Chalmers: We are already becoming cyborgs in a sense by interacting with the technology in our world; the more literal cyborg approach is what we are working on now, though we are not yet at the point where the technology is commercialized enough to allow, even in principle, a strong literal cyborg approach. Ben Goertzel: We could progress with some form of brain vocalization (picking up words directly from the brain), allowing us to think a Google query and have the results added directly to our minds – thus bypassing our low-bandwidth communication and getting at the information directly in our heads. To do all this …
Steve Omohundro: EEG is gaining a lot of interest as an aid to the Quantified Self – brain interfaces to help people measure things about their bodies (though the hardware is not that good yet).
Ben Goertzel: BCIs are used for video games and can detect whether you are aroused and paying attention, though the resolution is very coarse – it is hard to get fine-grained brain-state information through the skull. Cranial jacks will get more information. Legal systems are an impediment.
James NT: Alan Snyder is using time-varying magnetic fields in helmets to shut down certain areas of the brain, which effectively makes people smarter in narrow domains of skill. This can provide an idiot-savant ability at the cost of the ability to generalize: a brain that becomes too specific at one task does so at the cost of others – at the cost of generalization.

Ben Goertzel, David Chalmers, Steve Omohundro – A Thought Experiment

Vernor Vinge on the Turing Test and Artificial Intelligence

Preface

On the coat-tails of the blockbuster film “The Imitation Game” I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to the idea that the Turing Test may someday show that machines would be (at least in controlled circumstances) indistinguishable from humans.
Vernor Vinge is a mathematician and science fiction author who is well known for many Hugo Award-winning novels and novellas* and his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.

 

Alan Turing and the Computability of Intelligence

Adam Ford: Alan Turing is considered the “Father of Theoretical Computer Science and Artificial Intelligence” – his view of the potential of AI contrasts with much of the skepticism that has subsequently arisen. What is at the root of this skepticism?

Vernor Vinge: The emotional source of the skepticism is the ineffable feeling that many (most?) people have against the possibility that self-awareness could arise from simple, constructed devices.

 

AF: Many theorists feel that the combined talents of pure machines and humans will always produce more creative and therefore useful output – what are your thoughts?

VV: When it comes to intelligence, biology just doesn’t have legs. _However_ in the near term, teams of people plus machines can be much smarter than either — and this is one of the strongest reasons for being optimistic that we can manage the new era safely, and project that safety into the farther future.

 

AF: Is the human brain essentially a computer?

VV: Probably yes, but if not, the lack can very likely be made up for with machine improvements that we humans can devise.

 

AF: Even AI critics John Searle and Hubert Dreyfus (author of “What Computers (Still) Can’t Do”) agree that a brain simulation is possible in theory, though they argue that merely mimicking the functioning brain would in itself be an admission of ignorance (concerning intelligence) – what are your thoughts?

VV: The question of whether there is self-awareness behind a mimic may be the most profound issue, but for almost all practical purposes it isn’t relevant: in a few years, I think we will be able to make machines that can run circles around any human mind by all externally measured criteria. So what if no one is really home inside that machine?

Offhand, I can think of only one practical import to the answer, but that _is_ something important: If such minds are self-aware in the human sense, then uploads suddenly become very important to us mortality-challenged beings.

For reductionists interested in _that_ issue, some confidence might be achieved with superintelligence architectures that model those structures in our minds that reductionists come to associate with self-awareness. (I can imagine this argument being carried on by the uploaded supermind children of Searle and Moravec — a trillion years from now when there might not be any biological minds around whatsoever.)

 

AF: Do you think Alan Turing’s reasons for believing in the potential of AI are different from your own and other modern day theorists?  If so in what ways?

VV: My guess is there is not much difference.

 

AF: Has Alan Turing and his work influenced your writing? If it has, how so?

VV: I’m not aware of direct influence. As a child, what chiefly influenced me was the science fiction I was reading! Of course, those folks were often influenced by what was going on in the science and math and engineering of the time.

Alan Turing has had a multitude of incarnations in science fiction…   I think that besides being a broadly based math and science genius, Turing created accessible connections between classic philosophical questions and current issues.

 

AF: How do you think Alan Turing would respond to the specific concept of the Technological Singularity as described by you in your paper “The Coming Technological Singularity: How to Survive in the Post-Human Era”?

VV: I’d bet that Turing (and many AI pioneers) had extreme ideas about the consequences of superhuman machine intelligence. I’m not sure if Turing and I would agree about the potential for Intelligence Amplification and human/machine group minds.

I’d be _very_ interested in his reaction to modern analysis such as surveyed in Bostrom’s recent _Superintelligence_ book.

 

AF: In True Names, agents seek to protect their true identity. The guardian of the Coven’s castle is named ‘Alan Turing’ – what was the reason behind this?

VV: It was a tip of the hat in Turing’s direction. By the time I wrote this story I had become quite aware of Alan Turing (contrasting with my childhood ignorance that I mentioned earlier).

 

AF: Your first novella, Bookworm Run!, was themed around brute-forcing simpler-than-human intelligence up to super-intelligence (in it, a chimpanzee’s intelligence is amplified). You also explore the area of intelligence amplification in Marooned in Realtime.
Do you think it is possible for a Singularity to bootstrap from brute-forcing simple cognitive models? If so, do you think super-intelligence will be achieved through brute-forcing simple algorithms?

VV: I view “Intelligence Amplification” (IA) as a finessing of the hardest questions by building on top of what already exists. Thus even UI design lies on the path to the Singularity. One could argue that Intelligence Amplification is the surest way of ensuring humanity a place in the super-intelligence (though some find that a very scary possibility in itself).

 

The Turing Test and Beyond

AF: Is the Turing Test important? If so, why, and how does its importance match up to tracking progress in Strong AI?

VV: In its general form, I regard the Turing Test as a marvelous, zen-like, bridge between reductionism and the inner feelings most people have about their own self-awareness.  Bravo Dr. Turing!

 

AF: Is a text conversation ever a valid test of intelligence? Is blackbox testing enough for a valid test of intelligence?

VV: “Passing the Turing Test” depends very much on the setup:
a) The examining human (child? adult? fixated or afflicted adult? – see Sherry Turkle’s examples of college students who passed a chatbot).
b) The duration of the test.
c) The number of human examiners participating.
d) Restrictions on the examination domain.

In _The Emperor’s New Mind_, Penrose has a (mostly negative) critique of the Turing Test. But at the end he says that if the test were very broad, lasting years, and convincing to him (Penrose), then it might be meaningful to talk about a “pass grade”.
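Vinge’s four parameters amount to a test configuration. Here is a minimal sketch of that idea in Python – every field name and example value is an illustrative assumption of this write-up, not something from Turing or Vinge:

```python
from dataclasses import dataclass, field

@dataclass
class TuringTestSetup:
    """One configuration of the test. Vinge's point: a 'pass' is only
    meaningful relative to all four of these parameters."""
    examiner_profile: str                  # (a) child, adult, fixated or afflicted adult...
    duration_hours: float                  # (b) how long the examination runs
    num_examiners: int                     # (c) how many human judges participate
    domain_restrictions: list = field(default_factory=list)  # (d) empty = unrestricted

# A weak setup of the sort chatbots have already beaten:
chatbot_fodder = TuringTestSetup("fixated college student", 0.1, 1, ["small talk"])

# Penrose's 'meaningful pass grade', roughly as described above:
penrose_grade = TuringTestSetup("sceptical expert (Penrose)", 24 * 365 * 2, 1)
```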

 

AF: The essence of Roger Penrose’s argument (in The Emperor’s New Mind) is:
–  It is impossible for a Turing machine to enumerate all possible Gödel sentences. Such a program will always have a Gödel sentence derivable from its program which it can never discover.
–  Humans have no problem discovering these sentences and seeing the truth of them.
And he concludes that humans are not reducible to Turing machines. Do you agree with Roger’s assessment – are humans not reducible to Turing machines?

VV: This argument depends on comparing a mathematical object (the Turing Machine) with whatever kind of object the speaker considers a “human mind” to be.  As a logical argument, it leaves me dubious.
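For readers who want the logical skeleton behind this exchange, the theorem Penrose leans on can be stated compactly – a standard textbook formulation, not Penrose’s or Vinge’s own wording:

```latex
% Gödel's first incompleteness theorem (semantic form):
% If F is a consistent, effectively axiomatized theory extending basic
% arithmetic, then there is a sentence G_F (the "Gödel sentence" of F) with
F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F,
\qquad\text{yet}\qquad \mathbb{N} \models G_F .
```

Penrose’s step is that a mathematician can see that G_F is true, so the mathematician cannot be (a realization of) F. The standard rebuttal, with which Vinge’s doubt is consistent, is that seeing the truth of G_F requires already knowing that F is consistent – and for any F rich enough to be a candidate description of a human mind, that consistency is precisely what we cannot verifiably establish.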

 

AF: Are there any existing interpretations of the Turing Test that you favour?

VV: I think Penrose’s version (described above) is the most important.

In conversation, the most important thing is that all sides know which flavor of the test they are talking about 🙂

 

AF: You mentioned it has been fun tracking Turing Test contests, what are your thoughts on attempts at passing the Turing Test so far?

VV: So far, it seems to me that the philosophically important thresholds are still far away. Fooling certain people, or fooling people for short periods of time seems to have been accomplished.

 

AF: Is there any specific type of intelligence we should be testing machines for?

VV: There are intelligence tests that would be very interesting to me, but I’d rather not call them versions of the Turing Test. For instance, I think we’re already in the territory where more and more sorts of superhuman creativity and “intuition” are possible.

I think there will also be performance tests for IA and group-mind projects.

 

AF: Some argue that testing for ‘machine consciousness’ is more interesting – what are your thoughts?

VV: Again, I’d keep this possibility separate from Turing Test issues, though I do think that a being that could swiftly duplicate itself and ramp intellect up or down per resource and latency constraints would have a vastly different view of reality compared to the severe and static time/space/mortality restrictions that we humans live with.

 

AF: The Turing Test seems like a competitive sport, though some interpretations of the Turing Test set conditions which seem quite low. The competitive nature of how the Turing Test is staged seems to me to select for the cheapest and least sophisticated methods of fooling judges on a Turing Test panel.

VV: Yes.

 

AF: Should we be focusing on developing more complex and adaptive Turing style tests (more complex measurement criteria? more complex assessment)? What alternatives to a Turing Test competition (if any) would you suggest to motivate regular testing for machine intelligence?

VV: The answers to these questions may grow out of hard engineering necessity more than from the sport metaphor. Going forward, I imagine that different engineering requirements will call for various tests, but they may look more like classical benchmark tests.

 

Tracking Progress in Artificial Intelligence

AF: Why is tracking progress towards AI important?

VV: Up to a point it could be important for the sort of safety reasons Bostrom discusses in _Superintelligence_. Such tracking could also provide some guidance for machine/human/society teams that might have the power to guide events along safe paths.

 

AF: What do you see as the most useful mechanisms for tracking progress towards a) human equivalence in AI, b) a Technological Singularity?

VV: The approach to human equivalence might be tracked with a broad range of tests. Such would also apply to the Singularity, but for a soft takeoff, I imagine there would be a lot of economic effects that could be tracked. For example:
–  trends in employment of classic humans, augmented humans, and computer/human teams;
–  trends in what particular jobs still have good employment;
–  changes in the personal characteristics of the most successful CEOs.

Direct tests of automation performance (such as we’ve discussed above) are also important, but as we approach the Singularity, the center of gravity shifts from the programmers to the programs and how the programs are gaming the systems.

 

AF: If you had a TARDIS and could bring Alan Turing forward into the 21st century, would he be surprised at the progress in AI? What kinds of progress do you think he would be most interested in?

VV: I don’t have any special knowledge of Turing, but my guess is he would be pleased — and he would want to _understand_ by becoming a super himself.

 

AF: If and when the Singularity becomes imminent – is it likely that the majority of people will be surprised?

VV: A hard takeoff would probably be a surprise to most people. I suspect that a soft takeoff would be widely recognized.

 

Implications

AF: What opportunities could we miss if we are not well prepared? (This includes opportunities for risk mitigation.)

VV: Really, the risk mitigation is the serious issue. Other categories of missed opportunities will probably be quickly fixed by the improving tech.  For pure AI, some risk mitigation is the sort of thing MIRI is researching.

For pure AI, IA, and group minds, I think risk mitigation involves making use of the human-equivalent minds that already exist in great numbers (namely, the human race). If these teams and early enhancements recognize the issues, they can form a bridge across to the more powerful beings to come.

 

AF: You spoke about an AI Hard Takeoff as being potentially very bad – can you elaborate here?

VV: A hard takeoff is too fast for normal humans to react to and accommodate. To me, a Hard Takeoff would be more like an explosion than like technological progress. Any failure in mitigation planning is suddenly beyond the ability of normal humans to fix.

 

AF: What stood out for you after reading Nick Bostrom’s book ‘Superintelligence – paths, dangers, strategies’?

VV: I think it’s an excellent discussion, especially of the pure AI path to superintelligence. Even people who have no intense interest in these issues would find the first few chapters interesting, as they sketch out the problematic issues of pure AI superintelligence – including some points that may have been missed back in the twentieth century. The book then proceeds to a fascinating analysis of how to cope with these issues.

My only difference with the analysis presented is that while pure AI is likely the long term important issue, there could well be a period (especially in the case of a Soft Takeoff) where the IA and groupmind trajectories are crucial.


Vernor Vinge at Los Con 2012

Notes:
* Hugo Award-winning novels and novellas include: A Fire Upon the Deep (1992), A Deepness in the Sky (1999), Rainbows End (2006), Fast Times at Fairmont High (2002), and The Cookie Monster (2004).

Also see video interview with Vernor Vinge on the Technological Singularity.

Life, Knowledge and Natural Selection – How Life (Scientifically) Designs its Future – Bill Hall

Studies of the nature of life, evolutionary epistemology, anthropology, and the history of technology lead me reluctantly to the conclusion that Moore’s Law is taking us towards some kind of post-human singularity. The presentation explores fundamental aspects of life and knowledge, based on a fusion of Karl Popper’s (1972) evolutionary epistemology and Maturana and Varela’s (1980) autopoietic theory of life, to show that knowledge and life must co-evolve, and that this co-evolution leads to exponential growth of knowledge and of capabilities to control a planet (and the Universe???). The initial pace, based on changes to genetic heredity, is geologically slow. The addition of living cognition’s capacity for cultural heredity changes the pace of significant change from millions of years to millennia. Externalization of cultural knowledge into writing and printing increases the pace to centuries and decades. Networking virtual cultural knowledge at light speed via the internet increases the pace to years or even months. In my lifetime I have seen first-generation digital computers evolve into the Global Brain.

As long as the requisites for life are available, competition for limiting resources inevitably leads to increasing complexity. Through most of the history of life, a species’ or individual’s knowledge was embodied in its dynamic structure (e.g., of the nervous system) and in the genetic heritage that controls the development and regulation of that structure. Some vertebrates evolved sufficient neural complexity to support the development of culture and cultural heredity. A few lineages, such as corvids (crows and their relatives) and two largely arboreal primate lineages (African apes and South American capuchin monkeys), independently evolved cultures able to transmit the knowledge to make and use increasingly complex tools from one generation to the next. Hominins, a lineage of tool-using apes forced by climate change around 4–5 million years ago to learn how to survive by extractive foraging and hunting on grassy savannas, developed increasingly complex and sophisticated tool-kits for hunting and gathering, such that by around 2.5 million years ago our ancestors had replaced most species of what was originally a substantial ecological guild of large carnivores.

Tools extend the physical and cognitive capabilities of the tool-users. In an ecological sense, hominin groups are defined by their shared survival knowledge, and they inevitably compete to control limiting resources. Competition among groups led to the slow development of increasingly better stone and organic tools, and of a genetically based cognitive capacity to make and use tools. Homo heidelbergensis, which split into African (H. sapiens), European (Neanderthal), and Asian (Denisovan) lineages some 200,000 years ago, evolved complex linguistic capabilities that greatly increased the bandwidth for transmitting cultural knowledge. Some 70,000 years ago H. sapiens (“humans”) exited Africa to spread throughout Eurasia and quickly replace all other surviving hominin lineages. By ~50,000 years ago humans were making complex tools like bows and arrows, which put a premium on the capacity to remember a rapidly increasing volume of survival knowledge. At some point before the end of the last Ice Age, mnemonic tools were developed (the “method of loci”, “songlines”) to extend the capacity of living memory by at least one order of magnitude. Some 10,000 years ago, as agriculture became practical in the “Fertile Crescent”, monumental theaters of the mind (such as Göbekli Tepe and Stonehenge) and specialized knowledge-management guilds such as the Masons provided the cultural capacity to enable the Agricultural Revolution. Between 7,000 and 4,000 years ago, technologies for writing and the use of books and libraries enabled cultural knowledge to be stored and shared in external material form, facilitating the emergence of empires and nation-states.
Around 550 years ago printing enabled the mass production of books and the widespread dissemination of bodies of knowledge that fuelled the Reformation and the Scientific and Industrial revolutions. Around 60 years ago the invention of the digital computer began to externalize cognitive processes and controls over other kinds of tools. Databases, word processing, and the internet, developed over the last ~30 years, enabled knowledge to be created in the virtual world and shared globally at light speed. Personal technologies developed in the last 10 years (e.g., smartphones) are allowing the emergence of post-human cyborgs. Moore’s Law of exponential growth suggests the capacity for a few orders of magnitude more before we reach the outer limits of quantum computing.
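To put a rough timescale on that last claim – a back-of-envelope extrapolation, where both figures are this write-up’s assumptions rather than anything from Hall’s talk:

```python
import math

# How long would "a few orders of magnitude" of Moore's-Law growth take?
doubling_period_years = 2                         # assumed doubling cadence
orders_of_magnitude = 3                           # "a few orders of magnitude"

doublings = math.log2(10 ** orders_of_magnitude)  # ~10 doublings
years = doublings * doubling_period_years
print(f"~{doublings:.0f} doublings, i.e. roughly {years:.0f} years of headroom")
```

On those assumptions the headroom is roughly two decades, which is what gives the question of what happens next its urgency.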

What happens next is anyone’s guess.

Slides available here: