
The Generative Universe Hypothesis

Remembering Lee Smolin's theory of cosmological natural selection, in which black holes spawn new universes and universes thereby evolve through a form of natural selection, I thought that if a superintelligent civilization understood its mechanics, it might try to control it: engineering or biasing the physics of the spawned universe, and possibly migrating to that new universe. If such a civilization also found a way to communicate along the parent/child relations between universes, this could be an energy-efficient way to achieve some of the outcomes of simulations (as described in Nick Bostrom's Simulation Hypothesis).

The idea of moving to a more hospitable universe could be such a strong attractor to post-singularity civs that, once discovered, it may be an obvious choice for a variety of reasons:

A) Better computation through faster/easier networking – Say, for instance, that the speed of light were a lot faster, so that information could travel usefully over longer distances than in this universe; then network speed may not be as much of a hindrance to developing larger civs, distributed computation, and mega-scale galactic brains (see the sketch after this list).

B) As a means of escape – If it so happened that neighbouring alien civs were close enough to pose a threat, then escaping this universe to a newly generated one could be ideal – especially if one could close the door behind oneself, or lay a trap at the opening to the generated universe to capture probes or ships that weren't one's own.

C) Mere curiosity – It may not be full-blown utility maximization that is the lone object of the endeavor; it could be simple curiosity about how (stable) universes might operate if fine-tuned differently. (How far can you take simulations in this universe to test how hypothetical universes could operate, without actually generating and testing those universes?)

D) To escape the ultimate fate of this universe – According to the most popular current estimates, we have about 10^100 years until the heat death of this universe.

E) Better computation through a 'cooler' environment – A colder yet stable universe to compute in, similar to the previous point and the first point. Some hypothesise that civs may sleep until the universe gets colder, when computation can be done far more efficiently; these civs long for the heat death so that they can really get started on whatever projects they have in mind that require the computing power only made possible by the extremely low temperatures abundantly available at or near the heat death. Well, what if you could engineer a universe to achieve temperatures far lower than those available in this universe, while also keeping the universe relatively steady (say that's something that's needed)? If that could be achieved sooner by a generative-universe solution than by waiting around for this universe's heat death, then why not? (See the sketch after this list.)

F) Fault tolerance – Distributing a civ across (generated) universes may preserve it against the risk of the current one going unexpectedly pear-shaped; the more fault tolerance the merrier.

G) Load balancing – If it's possible to communicate across parent/child relationships, then civs may generate universes merely to act as containers for computation, helping solve really, really big problems far faster, or scaffolding extremely detailed virtual realities far more efficiently – less lag, less jitter, deeper immersion!
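To put rough numbers on points A and E, here is a minimal back-of-envelope sketch in Python. The physical constants are standard (Boltzmann's constant, light-travel times, Landauer's bound of k_B·T·ln 2 joules per bit erased); the galaxy size, the faster hypothetical speed of light, and the engineered far-future temperature are illustrative assumptions, not predictions.

```python
# Back-of-envelope numbers for points A (network latency) and E (cold
# computation). Constants are standard physics; the galaxy size, the
# faster speed of light, and the target temperature are hypothetical.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

# Point A: light takes one year per light year, so a one-way signal
# across a ~100,000 ly galaxy takes ~100,000 years at our c. In a
# hypothetical universe where light were 1,000x faster, the same
# "galactic brain" would have ~100-year latencies instead.
galaxy_diameter_ly = 100_000
for c_multiplier in (1, 1_000):
    delay_years = galaxy_diameter_ly / c_multiplier
    print(f"c x{c_multiplier}: one-way delay ~ {delay_years:,.0f} years")

# Point E: Landauer's principle says erasing one bit costs at least
# k_B * T * ln(2) joules, so each bit of irreversible computation gets
# proportionally cheaper as the environment gets colder.
def landauer_joules_per_bit(temp_kelvin: float) -> float:
    return K_B * temp_kelvin * math.log(2)

for label, temp in [("room temperature", 300.0),
                    ("CMB today", 2.7),
                    ("hypothetical engineered universe", 1e-20)]:
    e_bit = landauer_joules_per_bit(temp)
    print(f"{label} ({temp} K): >= {e_bit:.2e} J/bit, "
          f"~ {1 / e_bit:.2e} bit erasures per joule")
```

The Landauer bound applies only to irreversible operations, and nothing here says such temperatures are actually achievable in an engineered universe – the sketch just shows why 'colder' and 'faster light' would each be worth orders of magnitude to a computation-hungry civ.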

Perhaps we will find evidence of alien civs around black holes, generating and testing new universes before taking the leap to transcend, so to speak.

Why leave the future evolution of universes up to blind natural selection? Advanced post-singularity alien civs might work out an extremely strict set of criteria for the formation of the right kinds of matter and energy in child universes, either to mirror our own universe or, more likely, to take it up a notch or two, to new levels of interestingness. While computational capacity is limited when constrained by the laws of this containing universe, spawning a new universe could allow for more interesting and efficient computation.

It may also be a great way to escape the heat death of the universe 🙂

I spoke about the idea with Andrew Arnel a while ago while out for a drink, and came up with a really cool name for it – though I can't remember what it was 🙂 Perhaps it only sounds good after a few beers; perhaps it was something like the 'generative', 'spawnulation', or 'genulation' hypothesis…


Update: more recently, I commented on this idea in a FB post by Mike Johnson:
I may have a similar idea relating to Smolin's Darwinian black-hole universe generation. Why build simulations where it would be more efficient to actually generate new universes that are not computationally bounded by, or contained within, the originating universe – nudging the physics that would emerge in the new universe to be better able to support flourishing life, more computation, and wider novelty possibility spaces?


Furthermore, I spoke to Sundance Bilson-Thompson (a physicist in Australia who was supervised by Lee Smolin) about whether the physics in the child universes is influenced by local phenomena surrounding the black hole in the parent universe, or by global phenomena of the parent universe. He said it was global phenomena, for reasons to do with the way stars are formed. This might lower my credence in the Generative Universe Hypothesis as it pertains to Lee Smolin's idea – though I still need to find out whether the nature of the generated child universes could be nudged or engineered.

Why don't we see more larger-brained species in our ecosystem?

Species with larger brains seem to have a higher general intelligence. So why haven’t all species evolved larger brains? Possibly because general intelligence relies on a capability for social learning, so larger brains are only useful for species that rely more on social learning. – Kaj Sotala

Larger brains require more fuel; larger brains require larger heads; larger heads are harder to scaffold; and scaffolding costs nutrients. There are many trade-offs: scaffolding plus big brains often come at the expense of other important morphologies – locomotion abilities, or large craniums at the expense of stronger and bigger mouths – and in many species larger heads correlate with more complications during birth.

Perhaps I should add (and this seems similar to some AI singleton scenarios): larger brains, which as mentioned correlate with greater intelligence, could confer distinct first-mover advantages when coupled with other morphologies that enable the organism to generally and efficiently manipulate its environment (opposable thumbs etc.). The first-mover advantage may be so powerful as to diminish dependencies on environmental factors, or on the rest of the ecological web. We, as arguably the most intelligent species on the planet, seem to be moving through an era in which we depend less and less on the surrounding ecosystem and on our evolutionarily endowed morphology – we synthesize better alternatives – and it's happening so fast that evolution by natural selection can't keep up: its production of 'new better models' of organisms is crowded out by the rapid progress of civilization.

So perhaps one reason we don't see more larger-brained species is anthropic – roughly, the higher the occurrence of large-brained morphologies, the higher the likelihood of a single species seizing the first-mover advantages of high intelligence and quickly becoming technologically advanced enough to subdue and manipulate the ecosystem, crowding out (and essentially stalling) the power of evolution by natural selection to evolve larger brains in other species. That species may then go on to inadvertently destroy the ecosystem, intentionally phase out the current ecosystem (and with it blind natural selection) for ethical or instrumental reasons, or leave the ecosystem and natural selection to their own devices and go interstellar – in which case there would be at least one less large-brained species in the ecosystem, perhaps re-opening a niche for another species to fill. The toy simulation below illustrates the first-mover dynamic.
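To make the first-mover dynamic concrete, here is a toy Monte Carlo sketch in Python. Every parameter – the number of lineages, the drift and noise in 'brain size', the 'technology threshold' – is an illustrative assumption rather than an empirical estimate, and the claim that the first mover halts further brain-size evolution is built in by construction; the sketch only shows that, granting those assumptions, an observer almost always finds exactly one large-brained technological species.

```python
# Toy Monte Carlo of the first-mover argument. All parameters are
# illustrative assumptions, not empirical estimates.
import random

def run_world(n_lineages=10, steps=2_000, threshold=100.0, seed=None):
    """Each lineage's 'brain size' drifts upward with noise. The first
    lineage to cross the technology threshold freezes further brain-size
    evolution everywhere else (civilization crowds out selection).
    Returns how many lineages are at/above the threshold at that point."""
    rng = random.Random(seed)
    brains = [1.0] * n_lineages
    for _ in range(steps):
        for i in range(n_lineages):
            brains[i] = max(0.1, brains[i] + rng.gauss(0.05, 0.5))
        if max(brains) >= threshold:  # a first mover has emerged
            break
    return sum(b >= threshold for b in brains)

results = [run_world(seed=s) for s in range(100)]
print("average number of large-brained species per world:",
      sum(results) / len(results))
# Typically ~1: however many lineages were on a large-brain trajectory,
# the world an observer finds contains a single dominant such species.
```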

This was inspired by Kaj Sotala's query about why we don't see more larger-brained species than we currently do.

Review of Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari – Steve Fuller

My sociology of knowledge students read Yuval Harari's bestselling first book, Sapiens, to think about the right frame of reference for understanding the overall trajectory of the human condition. Homo Deus follows the example of Sapiens, using contemporary events to launch into what nowadays is called 'big history' but has also been called 'deep history' and 'long history'. Whatever you call it, the orientation sees the human condition as subject to multiple overlapping rhythms of change which generate the sorts of 'events' that are the stuff of history lessons. But Harari's history is nothing like the version you half remember from school.

In school historical events were explained in terms more or less recognizable to the agents involved. In contrast, Harari reaches for accounts that scientifically update the idea of ‘perennial philosophy’. Aldous Huxley popularized this phrase in his quest to seek common patterns of thought in the great world religions which could be leveraged as a global ethic in the aftermath of the Second World War. Harari similarly leverages bits of genetics, ecology, neuroscience and cognitive science to advance a broadly evolutionary narrative. But unlike Darwin’s version, Harari’s points towards the incipient apotheosis of our species; hence, the book’s title.

This invariably means that events are treated as symptoms if not omens of the shape of things to come. Harari's central thesis is that whereas in the past we cowered in the face of impersonal natural forces beyond our control, nowadays our biggest enemy is the one that faces us in the mirror, which may or may not be within our control. Thus, the sort of deity into which we are evolving is one whose superhuman powers may well result in self-destruction. Harari's attitude towards this prospect is one of slightly awestruck bemusement.

Here Harari equivocates where his predecessors dared to distinguish. Writing with the bracing clarity afforded by the Existentialist horizons of the Cold War, cybernetics founder Norbert Wiener declared that humanity’s survival depends on knowing whether what we don’t know is actually trying to hurt us. If so, then any apparent advance in knowledge will always be illusory. As for Harari, he does not seem to see humanity in some never-ending diabolical chess match against an implacable foe, as in The Seventh Seal. Instead he takes refuge in the so-called law of unintended consequences. So while the shape of our ignorance does indeed shift as our knowledge advances, it does so in ways that keep Harari at a comfortable distance from passing judgement on our long term prognosis.

This semi-detachment makes Homo Deus a suave but perhaps not deep read of the human condition. Consider his choice of religious precedents to illustrate that we may be approaching divinity, a thesis with which I am broadly sympathetic. Instead of the Abrahamic God, Harari tends towards the ancient Greek and Hindu deities, who enjoy both superhuman powers and all too human foibles. The implication is that to enhance the one is by no means to diminish the other. If anything, it may simply make the overall result worse than had both our intellects and our passions been weaker. Such an observation, a familiar pretext for comedy, wears well with those who are inclined to read a book like this only once.

One figure who is conspicuous by his absence from Harari’s theology is Faust, the legendary rogue Christian scholar who epitomized the version of Homo Deus at play a hundred years ago in Oswald Spengler’s The Decline of the West. What distinguishes Faustian failings from those of the Greek and Hindu deities is that Faust’s result from his being neither as clever nor as loving as he thought. The theology at work is transcendental, perhaps even Platonic.

In such a world, Harari’s ironic thesis that future humans might possess virtually perfect intellects yet also retain quite undisciplined appetites is a non-starter. If anything, Faust’s undisciplined appetites point to a fundamental intellectual deficiency that prevents him from exercising a ‘rational will’, which is the mark of a truly supreme being. Faust’s sense of his own superiority simply leads him down a path of ever more frustrated and destructive desire. Only the one true God can put him out of his misery in the end.

In contrast, if there is ‘one true God’ in Harari’s theology, it goes by the name of ‘Efficiency’ and its religion is called ‘Dataism’. Efficiency is familiar as the dimension along which technological progress is made. It amounts to discovering how to do more with less. To recall Marshall McLuhan, the ‘less’ is the ‘medium’ and the ‘more’ is the ‘message’. However, the metaphysics of efficiency matters. Are we talking about spending less money, less time and/or less energy?

It is telling that the sort of efficiency which most animates Harari’s account is the conversion of brain power to computer power. To be sure, computers can outperform humans on an increasing range of specialised tasks. Moreover, computers are getting better at integrating the operations of other technologies, each of which also typically replaces one or more human functions. The result is the so-called Internet of Things. But does this mean that the brain is on the verge of becoming redundant?

Those who say yes, most notably the 'Singularitarians' whose spiritual home is Silicon Valley, want to translate the brain's software into a silicon base that will enable it to survive and expand indefinitely in a cosmic Internet of Things. Let's suppose that such a translation becomes feasible. The energy requirements of such scaled up silicon platforms might still be prohibitive. For all its liabilities and mysteries, the brain remains the most energy efficient medium for encoding and executing intelligence. Indeed, forward-facing ecologists might consider investing in a high-tech agronomy dedicated to cultivating neurons to function as organic computers – 'Stem Cell 2.0', if you will.

However, Harari does not see this possible future because he remains captive to Silicon Valley’s version of determinism, which prescribes a migration from carbon to silicon for anything worth preserving indefinitely. It is against this backdrop that he flirts with the idea that a computer-based ‘superintelligence’ might eventually find humans surplus to requirements in a rationally organized world. Like other Singularitarians, Harari approaches the matter in the style of a 1950s B-movie fan who sees the normative universe divided between ‘us’ (the humans) and ‘them’ (the non-humans).


The bravest face to put on this intuition is that computers will transition to superintelligence so soon – 'exponentially' as the faithful say – that 'us vs. them' becomes an operative organizing principle. More likely and messier for Harari is that this process will be dragged out. And during that time Homo sapiens will divide between those who identify with their emerging machine overlords, who are entitled to human-like rights, and those who cling to the new acceptable face of racism, a 'carbonist' ideology which would privilege organic life above any silicon-based translations or hybridizations. Maybe Harari will live long enough to write a sequel to Homo Deus to explain how this battle might pan out.

NOTE ON PUBLICATION: Homo Deus is published in September 2016 by Harvill Secker, an imprint of Penguin Random House. Fuller would like to thank The Literary Review for originally commissioning this review. It will appear in a subsequent edition of the magazine and is published here with permission.

Video Interview with Steve Fuller covering the Homo Deus book

Steve Fuller discusses the new book Homo Deus: how it relates to the general transhumanist philosophy and movement, factors around the success of these ideas going mainstream, Yuval Noah Harari's writing style, and why there has been a bias within academia (especially sociology) away from ideas that are less well established in history (this matters because successfully navigating the future will require a lot of new ideas). We also discuss existential risk, and contrast a posthuman future with a future dominated by an AI superintelligence.

Yuval Harari’s books

– ‘Homo Deus: A Brief History of Tomorrow’: https://www.amazon.com/Homo-Deus-Brief-History-Tomorrow-ebook/dp/B019CGXTP0/

– ‘Sapiens: A Brief History of Humankind’: https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095/

Discussion on the Coursera course ‘A Brief History of Humankind’ (which I took a few years ago): https://www.coursetalk.com/providers/coursera/courses/a-brief-history-of-humankind