Antispeciesism & Compassionate Stewardship – David Pearce

I think our first ethical priority is to stop doing harm. Right now, in our factory farms, billions of non-human animals are being treated in ways that, if our victims were human, would get the perpetrators locked up for life. And the sentience (and, for what it’s worth, the sapience) of a pig compares with that of a pre-linguistic human toddler; a chicken, perhaps, is no more intellectually advanced or sentient than a human infant. But before considering the suffering of free-living animals, we need to consider, I think, the suffering we’re causing our fellow creatures.

Essentially it’s a lifestyle choice: do we want to continue to exploit and abuse other sentient beings because we like the taste of their flesh, or do we want to embrace a cruelty-free vegan lifestyle? Some people would focus on treating other sentient beings less inhumanely. I’d say that we really need an ethical revolution in which our focus becomes: how can we help other sentient beings rather than harm them?

It’s very straightforward indeed to be a vegetarian. Statistically, vegetarians tend to live longer, record higher IQ scores, and tend to be slimmer. A strict vegan lifestyle requires considerably more effort, but over the medium to long run I think our focus should be on going vegan.

In the short run, I think we should be closing factory farms and slaughterhouses. Given that factory farming and slaughterhouses are the greatest source of severe, chronic and readily avoidable suffering in the world today, any talk of interventionist, compassionate stewardship of the rest of the living world is fanciful until they are gone.

Will ethical argument alone persuade us to stop exploiting and killing non-human beings because we like the taste of their flesh? Possibly not. Realistically, I think one wants a twin-track strategy that combines animal advocacy with the development of in-vitro meat. But I would strenuously urge anyone watching this program who is ethically serious to consider giving up meat and animal products.

The final strand of the abolitionist project on Earth, however, is free-living animals in nature. It might seem ecologically illiterate to argue that it is going to be feasible to take care of elephants, zebras and other free-living animals. After all, suppose there is starvation in winter: if you start feeding a lot of starving herbivores, all this does is lead to a population explosion the next spring, followed by ecological collapse and more suffering than before.

However, what is potentially feasible, if we’re ethically serious, is to micromanage the entire living world. This sounds extremely far-fetched and utopian, but I’ll sketch how it is feasible. Later this century and beyond, every cubic metre of the planet is going to be computationally accessible to surveillance, micro-management and control. And if we want to, we can use fertility regulation and immunocontraception to regulate population numbers – cross-species fertility control – starting off, presumably, with higher vertebrates. With elephants, for instance, in the Kruger National Park, population numbers are already controlled by immunocontraception in preference to the cruel practice of culling.

So, starting with higher vertebrates in our wildlife parks, and then extending across the phylogenetic tree, it will be possible to micromanage the living world.

And just as right now, if you were to stumble across a small child drowning in a pond, you would be guilty of complicity in that child’s death if you didn’t pull the child out, exactly the same intimate access to the rest of the living world is going to be feasible later this century and beyond.

Now what about obligate carnivores – predators? Surely it’s inevitable that they’re going to continue to prey on herbivores, so one might intuitively suppose that the abolitionist project could never be completed. But even there, if we’re ethically serious, there are workarounds. Take in-vitro meat: big cats offered catnip-flavoured in-vitro meat are not going to be tempted to chase after herbivores.

Alternatively, a little bit of genetic tweaking, and you no longer have an obligate carnivore.

I’m supposing here that we do want to preserve recognizable approximations of today’s so-called charismatic megafauna – many people are extremely unhappy at the idea that lions or tigers or snakes or crocodiles should go extinct. I’m not personally persuaded that the world would be a worse place without crocodiles or snakes, but if we do want to preserve them, it’s possible to tweak them genetically or to provide in-vitro meat so that they no longer do any harm to other sentient beings.

Some species essentialists would respond that a lion that is no longer chasing, asphyxiating and disembowelling zebras is no longer truly a lion. But one might equally argue that a Homo sapiens who no longer beats his rivals over the head, wages war, or practises infanticide, slavery and all the other ghastly customs of our evolutionary past – or who, for that matter, wears clothes – is no longer truly human. If adopting a more civilized lifestyle means we are no longer truly human, I can only say: good.

And likewise, if there is a living world in which lions are pacifistic – if the lion, so to speak, lies down with the lamb – I would say that is much more civilized.

Compassionate Biology

For the “Compassionate Biology” strand of this argument, see The Antispeciesist Revolution, reproduced in full below:

http://www.hedweb.com/transhumanism/antispeciesist.html


The Antispeciesist Revolution – read by David Pearce

The Antispeciesist Revolution


Speciesism.
When is it ethically acceptable to harm another sentient being? On some fairly modest(1) assumptions, to harm or kill someone simply on the grounds they belong to a different gender, sexual orientation or ethnic group is unjustified. Such distinctions are real but ethically irrelevant. On the other hand, species membership is normally reckoned an ethically relevant criterion. Fundamental to our conceptual scheme is the pre-Darwinian distinction between “humans” and “animals”. In law, nonhuman animals share with inanimate objects the status of property. As property, nonhuman animals can be bought, sold, killed or otherwise harmed as humans see fit. In consequence, humans treat nonhuman animals in ways that would earn a life-time prison sentence without parole if our victims were human. From an evolutionary perspective, this contrast in status isn’t surprising. In our ancestral environment of adaptedness, the human capacity to hunt, kill and exploit sentient beings of other species was fitness-enhancing(2). Our moral intuitions have been shaped accordingly. Yet can we ethically justify such behaviour today?

Naively, one reason for disregarding the interests of nonhumans is the dimmer-switch model of consciousness. Humans matter more than nonhuman animals because (most) humans are more intelligent. Intuitively, more intelligent beings are more conscious than less intelligent beings; consciousness is the touchstone of moral status.

The problem with the dimmer-switch model is that it’s empirically unsupported, among vertebrates with central nervous systems at least. Microelectrode studies of the brains of awake human subjects suggest that the most intense forms of experience, for example agony, terror and orgasmic bliss, are mediated by the limbic system, not the prefrontal cortex. Our core emotions are evolutionarily ancient and strongly conserved. Humans share the anatomical and molecular substrates of our core emotions with the nonhuman animals whom we factory-farm and kill. By contrast, distinctively human cognitive capacities such as generative syntax, or the ability to do higher mathematics, are either phenomenologically subtle or impenetrable to introspection. To be sure, genetic and epigenetic differences exist between, say, a pig and a human being that explain our adult behavioural differences, e.g. the allele of the FOXP2(3) gene implicated in the human capacity for recursive syntax. Such mutations have little to do with raw sentience(4).

Antispeciesism.
So what is the alternative to traditional anthropocentric ethics? Antispeciesism is not the claim that “All Animals Are Equal”, or that all species are of equal value, or that a human or a pig is equivalent to a mosquito. Rather the antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect. A pig, for example, is of comparable sentience to a prelinguistic human toddler. As it happens, a pig is of comparable (or superior) intelligence to a toddler as well(5). However, such cognitive prowess is ethically incidental. If ethical status is a function of sentience, then to factory-farm and slaughter a pig is as ethically abhorrent as to factory-farm and slaughter a human baby. To exploit one and nurture the other expresses an irrational but genetically adaptive prejudice.

On the face of it, this antispeciesist claim isn’t just wrong-headed; it’s absurd. Philosopher Jonathan Haidt speaks of “moral dumbfounding”(6), where we just know something is wrong but can’t articulate precisely why. Haidt offers the example of consensual incest between an adult brother and sister who use birth control. For evolutionary reasons, we “just know” such an incestuous relationship is immoral. In the case of comparisons of pigs with human infants and toddlers, we “just know” at some deep level that any alleged equivalence in status is unfounded. After all, if there were no ethically relevant distinction between a pig and a toddler, or between a battery-farmed chicken and a human infant, then the daily behaviour of ordinary meat-eating humans would be sociopathic – which is crazy. In fact, unless the psychiatrists’ bible, Diagnostic and Statistical Manual of Mental Disorders, is modified explicitly to exclude behaviour towards nonhumans, most of us do risk satisfying its diagnostic criteria for the disorder. Even so, humans often conceive of ourselves as animal lovers. Despite the horrors of factory-farming, most consumers of meat and animal products are clearly not sociopaths in the normal usage of the term; most factory-farm managers are not wantonly cruel; and the majority of slaughterhouse workers are not sadists who delight in suffering. Serial killers of nonhuman animals are just ordinary men doing a distasteful job – “obeying orders” – on pain of losing their livelihoods.

Should we expect anything different? Jewish political theorist Hannah Arendt spoke famously of the “banality of evil”(7). If twenty-first century humans are collectively doing something that posthuman superintelligence will reckon monstrous, akin to the [human] Holocaust or Atlantic slave trade, it’s tempting to assume that our moral intuitions would disclose this to us. They don’t disclose anything of the kind; so we sleep easy. But both natural selection and the historical record offer powerful reasons for doubting the trustworthiness of our naive moral intuitions. So the possibility that human civilisation might be founded upon some monstrous evil should be taken seriously – even if the possibility seems transparently absurd today.

One possible speciesist response is to raise the question of “potential”. Even if a pig is as sentient as a human toddler, there is a fundamental distinction between human toddlers and pigs. Only a toddler has the potential to mature into a rational adult human being.

The problem with this response is that it contradicts our treatment of humans who lack “potential”. Thus we recognise that a toddler with a progressive disorder who will never live to celebrate his third birthday deserves at least as much love, care and respect as his normally developing peers – not to be packed off to a factory-farm on the grounds it’s a shame to let good food go to waste. We recognise a similar duty of care for mentally handicapped adult humans and cognitively frail old people. For sure, historical exceptions exist to this perceived duty of care for vulnerable humans, e.g. the Nazi “euthanasia” program, with its eugenicist conception of “life unworthy of life”. But by common consent, we value young children and cognitively challenged adults for who they are, not simply for who they may – or may not – one day become. On occasion, there may controversially be instrumental reasons for allocating more care and resources to a potential genius or exceptionally gifted child than to a normal human. Yet disproportionate intraspecies resource allocation may be justified, not because high IQ humans are more sentient, but because of the anticipated benefits to society as a whole.

Practical Implications.
1. Invitrotarianism.

The greatest source of severe, chronic and readily avoidable suffering in the world today is man-made: factory farming. Humans currently slaughter over fifty billion sentient beings each year. One implication of an antispeciesist ethic is that factory farms should be shut and their surviving victims rehabilitated.

In common with most ethical revolutions in history, the prospect of humanity switching to a cruelty-free diet initially strikes most practically-minded folk as utopian dreaming. “Realists” certainly have plenty of hard evidence to bolster their case. As English essayist William Hazlitt observed, “The least pain in our little finger gives us more concern and uneasiness than the destruction of millions of our fellow-beings.” Without the aid of twenty-first century technology, the mass slaughter and abuse of our fellow animals might continue indefinitely. Yet tissue science technology promises to allow consumers to become moral agents without the slightest hint of personal inconvenience. Lab-grown in vitro meat produced in cell culture rather than a live animal has long been a staple of science fiction. But global veganism – or its ethical invitrotarian equivalent – is no longer a futuristic fantasy. Rapid advances in tissue engineering mean that in vitro meat will shortly be developed and commercialised. Today’s experimental cultured mincemeat can be supplanted by mass-manufactured gourmet steaks for the consumer market. Perhaps critically for its rapid public acceptance, in vitro meat does not need to be genetically modified – thereby spiking the guns of techno-luddites who might otherwise worry about “FrankenBurgers”. Indeed, cultured meat products will be more “natural” in some ways than their antibiotic-laced counterparts derived from factory-farmed animals.

Momentum for commercialisation is growing. Non-profit research organisations like New Harvest(8), working to develop alternatives to conventionally-produced meat, have been joined by hard-headed businessmen. Visionary entrepreneur and Stanford academic Peter Thiel has just funnelled $350,000 into Modern Meadow, a start-up that aims to combine 3D printing with in vitro meat cultivation. Within the next decade or so, gourmet steaks could be printed out from biological materials. In principle, the technology should be scalable.

Tragically, billions of nonhuman animals will grievously suffer and die this century at human hands before the dietary transition is complete. Humans are not obligate carnivores; eating meat and animal products is a lifestyle choice. “But I like the taste!” is not a morally compelling argument. Vegans and animal advocates ask whether we are ethically entitled to wait on a technological fix. The antispeciesist answer is clear: no.

2. Compassionate Biology.
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on YouTube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover, even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants(10), for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming”(11) carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.
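The cost estimate above can be sanity-checked with a back-of-envelope calculation. The elephant population and per-animal figures below are illustrative assumptions, not data from the essay; they merely show that the essay’s two-to-three-billion-dollar figure is of a plausible order of magnitude:

```python
# Back-of-envelope check of the elephant-healthcare estimate.
# Both inputs are illustrative assumptions, not figures from the essay.
wild_elephants = 500_000    # rough order of magnitude for free-living elephants
cost_per_elephant = 5_000   # assumed annual veterinary/monitoring cost in USD

annual_cost = wild_elephants * cost_per_elephant
print(f"Estimated annual cost: ${annual_cost / 1e9:.1f} billion")  # → $2.5 billion

# Compare with the ~1.7 trillion-dollar annual arms budget cited in the essay.
arms_budget = 1.7e12
print(f"Share of arms budget: {annual_cost / arms_budget:.2%}")  # → 0.15%
```

On these assumptions, comprehensive elephant healthcare would consume well under one percent of global military spending, which is the essay’s point about relative scale.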

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions(12).

Speciesism and Superintelligence.
Why should transhumanists care about the suffering of nonhuman animals? This is not a “feel-good” issue. One reason we should care cuts to the heart of the future of life in the universe. Transhumanists differ over whether our posthuman successors will most likely be nonbiological artificial superintelligence; or cyborgs who effectively merge with our hyperintelligent machines; or our own recursively self-improving biological descendants who modify their own genetic source code and bootstrap their way to full-spectrum superintelligence(13). Regardless of the dominant lifeform of the posthuman era, biological humans have a vested interest in the behaviour of intellectually advanced beings towards cognitively humble creatures – if we survive at all. Compared to posthuman superintelligence, archaic humans may be no smarter than pigs or chickens – or perhaps worms. This does not augur well for Homo sapiens. Western-educated humans tend to view Jains as faintly ridiculous for practising ahimsa, or harmlessness, sweeping the ground in front of them to avoid inadvertently treading on insects. How quixotic! Yet the fate of sentient but cognitively humble lifeforms in relation to vastly superior intelligence is precisely the issue at stake as we confront the prospect of posthuman superintelligence. How can we ensure a Jain-like concern for comparatively simple-minded creatures such as ourselves? Why should superintelligences care any more than humans about the well-being of their intellectual inferiors? Might distinctively human-friendly superintelligence turn out to be as intellectually incoherent as, say, Aryan-friendly superintelligence? If human primitives are to prove worthy of conservation, how can we implement technologies of impartial friendliness towards other sentients? And if posthumans do care, how do we know that a truly benevolent superintelligence wouldn’t turn Darwinian life into utilitronium with a communal hug?

Viewed in such a light, biological humanity’s prospects in a future world of superintelligence might seem dire. However, this worry expresses a one-dimensional conception of general intelligence. No doubt the nature of mature superintelligence is humanly unknowable. But presumably full-spectrum(14) superintelligence entails, at the very least, a capacity to investigate, understand and manipulate both the formal and the subjective properties of mind. Modern science aspires to an idealised “view from nowhere”(15), an impartial, God-like understanding of the natural universe, stripped of any bias in perspective and expressed in the language of mathematical physics. By the same token, a God-like superintelligence must also be endowed with the capacity impartially to grasp all possible first-person perspectives – not a partial and primitive Machiavellian cunning of the kind adaptive on the African savannah, but an unimaginably radical expansion of our own fitfully growing circle of empathy.

What such superhuman perspective-taking ability might entail is unclear. We are familiar with people who display abnormally advanced forms of “mind-blind”(16), autistic intelligence in higher mathematics and theoretical physics. Less well known are hyper-empathisers who display unusually sophisticated social intelligence. Perhaps the most advanced naturally occurring hyper-empathisers exhibit mirror-touch synaesthesia(17). A mirror-touch synaesthete cannot be unfriendly towards you because she feels your pain and pleasure as if it were her own. In principle, such unusual perspective-taking capacity could be generalised and extended with reciprocal neuroscanning technology and telemetry into a kind of naturalised telepathy, both between and within species. Interpersonal and cross-species mind-reading could in theory break down hitherto invincible barriers of ignorance between different skull-bound subjects of experience, thereby eroding the anthropocentric, ethnocentric and egocentric bias that has plagued life on Earth to date. Today, the intelligence-testing community tends to treat facility at empathetic understanding as if it were a mere personality variable, or at best some sort of second-rate cognition for people who can’t do IQ tests. But “mind-reading” can be a highly sophisticated, cognitively demanding ability. Compare, say, the sixth-order intentionality manifested by Shakespeare. Thus we shouldn’t conceive superintelligence as akin to a God imagined by someone with autistic spectrum disorder. Rather, full-spectrum superintelligence entails a God’s-eye capacity to understand the rich multi-faceted first-person perspectives of diverse lifeforms whose mind-spaces humans would find incomprehensibly alien.

An obvious objection arises. Just because ultra-intelligent posthumans may be capable of displaying empathetic superintelligence, how do we know such intelligence will be exercised? The short answer is that we don’t: by analogy, today’s mirror-touch synaesthetes might one day neurosurgically opt to become mind-blind. But then equally we don’t know whether posthumans will renounce their advanced logico-mathematical prowess in favour of the functional equivalent of wireheading. If they do so, then they won’t be superintelligent. The existence of diverse first-person perspectives is a fundamental feature of the natural world, as fundamental as the second law of thermodynamics or the Higgs boson. To be ignorant of fundamental features of the world is to be an idiot savant: a super-Watson(18) perhaps, but not a superintelligence(19).

High-Tech Jainism?
Jules Renard once remarked, “I don’t know if God exists, but it would be better for His reputation if He didn’t.” God’s conspicuous absence from the natural world needn’t deter us from asking what an omniscient, omnipotent, all-merciful deity would want humans to do with our imminent God-like powers. For we’re on the brink of a momentous evolutionary transition in the history of life on Earth. Physicist Freeman Dyson predicts we’ll soon “be writing genomes as fluently as Blake and Byron wrote verses”(20). The ethical risks and opportunities for apprentice deities are huge.

On the one hand, Karl Popper warns, “Those who promise us paradise on earth never produced anything but a hell”(21). Twentieth-century history bears out such pessimism. Yet for billions of sentient beings from less powerful species, existing life on Earth is hell. They end their miserable lives on our dinner plates: “for the animals it is an eternal Treblinka”, writes Jewish Nobel laureate Isaac Bashevis Singer(22).

In a more utopian vein, some utterly sublime scenarios are technically feasible later this century and beyond. It’s not clear whether experience below Sidgwick’s(23) “hedonic zero” has any long-term future. Thanks to molecular neuroscience, mastery of the brain’s reward circuitry could make everyday life wonderful beyond the bounds of normal human experience. There is no technical reason why the pitiless Darwinian struggle of the past half billion years can’t be replaced by an earthly paradise for all creatures great and small. Genetic engineering could allow “the lion to lie down with the lamb.” Enhancement technologies could transform killer apes into saintly smart angels. Biotechnology could abolish suffering throughout the living world. Artificial intelligence could secure the well-being of all sentience in our forward light-cone. Our quasi-immortal descendants may be animated by gradients of intelligent bliss orders of magnitude richer than anything physiologically feasible today.

Such fantastical-sounding scenarios may never come to pass. Yet if so, this won’t be because the technical challenges prove too daunting, but because intelligent agents choose to forgo the molecular keys to paradise for something else. Critically, the substrates of bliss don’t need to be species-specific or rationed. Transhumanists believe the well-being of all sentience(24) is the bedrock of any civilisation worthy of the name.

Also see this related interview with David Pearce on ‘Antispecism & Compassionate Stewardship’:

* * *
NOTES

1. How modest? A venerable tradition in philosophical meta-ethics is anti-realism. The meta-ethical anti-realist proposes that claims such as it’s wrong to rape women, kill Jews, torture babies (etc) lack truth value – or are simply false. (cf. JL Mackie, Ethics: Inventing Right and Wrong, Viking Press, 1977.) Here I shall assume that, for reasons we simply don’t understand, the pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Meta-ethical anti-realists may instead wish to interpret this critique of speciesism merely as casting doubt on its internal coherence rather than a substantive claim that a non-speciesist ethic is objectively true.

2. Extreme violence towards members of other tribes and races can be fitness-enhancing too. See, e.g. Richard Wrangham & Dale Peterson, Demonic Males: Apes and the Origins of Human Violence, Houghton Mifflin, 1997.

3. Fisher SE, Scharff C (2009). “FOXP2 as a molecular window into speech and language”. Trends Genet. 25 (4): 166–77. doi:10.1016/j.tig.2009.03.002. PMID 19304338.

4. Interpersonal and interspecies comparisons of sentience are of course fraught with problems. Comparative studies of how hard a human or nonhuman animal will work to avoid or obtain a particular stimulus give one crude behavioural indication. Yet we can go right down to the genetic and molecular level, e.g. interspecies comparisons of SCN9A genotype. (cf. http://www.pnas.org/content/early/2010/02/23/0913181107.full.pdf) We know that in humans the SCN9A gene modulates pain-sensitivity. Some alleles of SCN9A give rise to hypoalgesia, other alleles to hyperalgesia. Nonsense mutations yield congenital insensitivity to pain. So we could systematically compare the SCN9A gene and its homologues in nonhuman animals. Neocortical chauvinists will still be sceptical of non-mammalian sentience, pointing to the extensive role of cortical processing in higher vertebrates. But recall how neuroscanning techniques reveal that during orgasm, for example, much of the neocortex effectively shuts down. Intensity of experience is scarcely diminished.

5. Held S, Mendl M, Devereux C, and Byrne RW. 2001. “Studies in social cognition: from primates to pigs”. Animal Welfare 10:S209-17.

6. Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion, Pantheon Books, 2012.

7. Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Viking Press, 1963.

8. http://www.new-harvest.org/

9. “PayPal Founder Backs Synthetic Meat Printing Company”, Wired, August 16 2012. http://www.wired.com/wiredscience/2012/08/3d-printed-meat/

10. https://www.abolitionist.com/reprogramming/elephantcare.html

11. https://www.abolitionist.com/reprogramming/index.html

12. The scholarly literature on the problem of wild animal suffering is still sparse. But perhaps see Arne Naess, “Should We Try To Relieve Clear Cases of Suffering in Nature?”, published in The Selected Works of Arne Naess, Springer, 2005; Oscar Horta, “The Ethics of the Ecology of Fear against the Nonspeciesist Paradigm: A Shift in the Aims of Intervention in Nature”, Between the Species, Issue X, August 2010. http://digitalcommons.calpoly.edu/bts/vol13/iss10/10/ ; Brian Tomasik, “The Importance of Wild-Animal Suffering”, http://www.utilitarian-essays.com/suffering-nature.html ; and the first print-published plea for phasing out carnivorism in Nature, Jeff McMahan’s “The Meat Eaters”, The New York Times. September 19, 2010. http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/

13. Singularity Hypotheses: A Scientific and Philosophical Assessment, Eden, A.H.; Moor, J.H.; Søraker, J.H.; Steinhart, E. (Eds.), Springer, 2013. http://singularityhypothesis.blogspot.co.uk/p/table-of-contents.html

14. David Pearce, The Biointelligence Explosion. (preprint), 2012. https://www.biointelligence-explosion.com.

15. Thomas Nagel, The View From Nowhere, OUP, 1989.

16. Simon Baron-Cohen (2009). “Autism: the empathizing–systemizing (E-S) theory” (PDF). Ann N Y Acad Sci 1156: 68–80. doi:10.1111/j.1749-6632.2009.04467.x. PMID 19338503.

17. Banissy, M. J. & Ward, J. (2007). Mirror-touch synesthesia is linked with empathy. Nature Neurosci. doi: 10.1038/nn1926.

18. Stephen Baker. Final Jeopardy: Man vs. Machine and the Quest to Know Everything. Houghton Mifflin Harcourt. 2011.

19. Orthogonality or convergence? For an alternative to the convergence thesis, see Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, 2012, http://www.nickbostrom.com/superintelligentwill.pdf; and Eliezer Yudkowsky, Carl Shulman, Anna Salamon, Rolf Nelson, Steven Kaas, Steve Rayhawk, Zack Davis, and Tom McCabe. “Reducing Long-Term Catastrophic Risks from Artificial Intelligence”, 2010. http://singularity.org/files/ReducingRisks.pdf

20. Freeman Dyson, “When Science & Poetry Were Friends”, New York Review of Books, August 13, 2009.

21. As quoted in Jon Winokur, In Passing: Condolences and Complaints on Death, Dying, and Related Disappointments, Sasquatch Books, 2005.

22. Isaac Bashevis Singer, The Letter Writer, 1964.

23. Henry Sidgwick, The Methods of Ethics. London, 1874, 7th ed. 1907.

24. The Transhumanist Declaration (1998, 2009). http://humanityplus.org/philosophy/transhumanist-declaration/

David Pearce
September 2012

Link to video

CLAIRE – a new European confederation for AI research

While the world wakes up to the huge potential impacts of AI, how will national worries about other nations gaining ‘AI Supremacy’ affect development – especially development in AI ethics & safety?

Claire-AI is a new European confederation, self-described as:

CONFEDERATION OF LABORATORIES FOR ARTIFICIAL INTELLIGENCE RESEARCH IN EUROPE – Excellence across all of AI. For all of Europe. With a Human-Centred Focus.
I like the ‘human-centred’ focus (albeit a bit vague), but where is their focus on ethics?

A Competitive Vision

Their vision admits a fear that Europe may be the loser in a race to achieve AI supremacy, and this is worrisome – seen as a race between tribes, AI development could become a race to the bottom of the barrel of AI safety and alignment.

In the United States of America, huge investments in AI are made by the private sector. In 2017, the Canadian government started making major investments in AI research, focusing mostly on existing strength in deep learning. In 2017, China released its Next Generation AI Development Plan, with the explicit goal of attaining AI supremacy by 2030.

However, in terms of investment in talent, research, technology and innovation in AI, Europe lags far behind its competitors. As a result, the EU and associated countries are increasingly losing talent to academia and industry elsewhere. Europe needs to play a key role in shaping how AI changes the world, and, of course, benefit from the results of AI research. The reason is obvious: AI is crucial for meeting Europe’s needs to address complex challenges as well as for positioning Europe and its nations in the global market.

Also the FAQ page reflects this sentiment:

Why does Europe have to act, and act quickly? There would be serious economic consequences if Europe were to fall behind in AI technology, along with a brain-drain that already draws AI talent away from Europe, to countries that have placed a high priority on AI research. The more momentum this brain-drain develops, the harder it will be to reverse. There is also a risk of increasing dependence on AI technology developed elsewhere, which would bring economic disadvantages, lack of transparency and broad use of AI technology that is not well aligned with European values.
What are ‘European values’? They aren’t spelt out very specifically – but I suspect that, much like other nations, they want what’s best for their nations economically and with regard to security.

Claire-AI’s vision of Ethics

There is mention of ‘humane’ AI – but this is not described in detail anywhere on their site.
What is meant by ‘human-centred’?

Human-centred AI is strongly based on human values and judgement. It is designed to complement rather than replace human intelligence. Human-centred AI is transparent, explainable, fair (i.e., free from hidden bias), and socially compatible. It is developed and deployed based on careful consideration of the disruptions AI technology can cause.
Many AI experts are convinced that the combination of learning and reasoning techniques will enable the next leap forward in AI; it also provides the basis for reliable, trustworthy, safe AI.

So, what are their goals?

What are we trying to achieve? Our main goal is to strengthen AI research and innovation in Europe.

Summing up

Strong AI, when achieved, will be extremely powerful, because intelligence is powerful. Over the last few years interest in AI has ramped up significantly, with new companies and initiatives sprouting like mushrooms. Increased competitiveness and attention focused on AI development in a race dynamic to achieve ‘AI supremacy’ will likely result in Strong AI being achieved sooner than experts previously expected, and may weaken the motivation to take precautionary measures.
This race dynamic is good reason to focus on researching how we should think about the strategy to cope with global coordination problems in AI safety as well as its possible impact on an intelligence explosion.

The race dynamic could spur projects to move faster toward superintelligence while reducing investment in solving the control problem. Additional detrimental effects of the race dynamic are also possible, such as direct hostilities between competitors. Suppose that two nations are racing to develop the first superintelligence, and that one of them is seen to be pulling ahead. In a winner-takes-all situation, a lagging project might be tempted to launch a desperate strike against its rival rather than passively await defeat. Anticipating this possibility, the frontrunner might be tempted to strike preemptively. If the antagonists are powerful states, the clash could be bloody. (A “surgical strike” against the rival’s AI project might risk triggering a larger confrontation and might in any case not be feasible if the host country has taken precautions.)
— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

Humanity has a history of falling into Hobbesian traps – and since the first-mover advantage conferred by Strong AI could be overwhelming compared to other economic investments, a race to achieve such a powerful general-purpose optimiser could result in military arms races.

As with any general-purpose technology, it is possible to identify concerns around particular applications. It has been argued, for example, that military applications of AI, including lethal autonomous weapons, might incite new arms races, or lower the threshold for nations to go to war, or give terrorists and assassins new tools for violence.
— Nick Bostrom, “Strategic Implications of Openness in AI Development”

What could be done to mitigate against an AI arms race?

Website: https://claire-ai.org

Moral Enhancement – Are we morally equipped to deal with humanity’s grand challenges? Anders Sandberg

The topic of Moral Enhancement is controversial (and often misrepresented); many consider it repugnant – provocative questions arise like “Whose morals?”, “Who are the ones to be morally enhanced?”, “Will it be compulsory?”, “Won’t taking a morality pill devalue the intended morality if it skips the difficult process we normally go through to become better people?”, “Shouldn’t people be concerned that enhancements which alter character traits might compromise their authenticity?”

Humans have a built-in capacity for learning moral systems from their parents and other people. We are not born with any particular moral [code] – but with the ability to learn one, just like we learn languages. The problem is of course that this built-in facility might have worked quite well back in the Stone Age, when we were evolving in small tribal communities – but it doesn’t work that well when we are surrounded by a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems, and that is the interesting question of moral enhancement: can we make ourselves more fit for the current world?
— Anders Sandberg, “Are we morally equipped for the future?”
Humans have an evolved capacity to learn moral systems – we became more adept at learning moral systems that aided our survival in the ancestral environment – but are our moral instincts fit for the future?

Illustration by Daniel Gray

Let’s build some context. For millennia humans have lived in complex social structures that constrain and encourage certain types of behaviour. More recently, and for similar reasons, people go through years of education, at the end of which they are (for the most part) better able to function morally in the modern world. Yet this world is very different from that of our ancestors, and considering the possibility of radical change at breakneck speed in the future, it’s hard to know how humans will keep up both intellectually and ethically. This matters because the degree to which we shape the future for the good depends both on how well and how ethically we solve the problems needed to achieve change that on balance benefits humanity (and arguably all morally relevant life-forms).

Can we engineer ourselves to be more ethically fit for the future?

Peter Singer discussed how our circles of care and compassion have expanded over the years – through reason we have been able to expand our natural propensity to act morally and the circumstances in which we act morally.

We may need to expand our circle of ethical consideration to include artificial life – considering certain types of software as moral patients.

So, if we think we could use a boost in our propensity for ethical progress,

How do we actually achieve ideal Moral Enhancement?

That’s a big topic (see a list of papers on the subject of ME here) – the answers may depend on what our goals and preferences are. One idea (among many others) is to regulate the level of oxytocin (the ‘cuddle hormone’) – though this may come with the drawback of increasing distrust of the out-group.
Since morality depends on us being able to make accurate predictions and solve complex ethical problems, ‘Intelligence Enhancement’ could be an effective aspect of moral enhancement.

Morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions that allow our higher-order values to control our lower order values is also important, that might actually require us to literally rewire or have biochips that help us do it.
— Anders Sandberg, “Are we morally equipped for the future?”

How we decide whether to use Moral Enhancement Therapy will be interesting – it may be needed to help solve global coordination problems; to increase the likelihood that we will, as a civilization, cooperate and cope with many known and as-yet-unrealised complex ethical quandaries as we move through times of unprecedented social and technological change.

This interview is part of a larger series that was completed in Oxford, UK late 2012.

Interview Transcript

Anders Sandberg

So humans have a kind of built-in capacity for learning moral systems from their parents and other people. We’re not born with any particular moral [code], but with the ability to learn it just like we can learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but it doesn’t work that well when surrounded by a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems. And that is the interesting question of moral enhancement:

  • can we make ourselves more fit for the current world?
  • And what kind of fitness should we be talking about?

For example we might want to improve on altruism – to be kinder to strangers. But in a big society, in a big town – of course there are going to be some strangers that you shouldn’t trust. So it’s not just blind trust you want to enhance – you actually want to enhance the ability to make careful judgements; to figure out what’s going to happen and whom you can trust. So maybe you want some other aspect – maybe the care – the circle of care – is what you want to expand.

Peter Singer pointed out that our circles of care and compassion have been slowly expanding – from our own tribe and our own gender, to other genders, to other peoples, and eventually maybe to other species. But this is still largely biologically based – a lot of it is going on here in the brain – and might be modified. Maybe we should artificially extend these circles of care to make sure that we actually do care about those entities we ought to be caring about. This might be a problem of course, because some of these agents might be extremely different from what we’re used to.

For example machine intelligence might produce machines or software that are ‘moral patients’ – we actually ought to be caring about the suffering of software. That might be very tricky because the pattern receptors up in our brains are not very tuned for that – we tend to think that if it’s got a face and it speaks then it’s human and then we can care about it. But who thinks about Google? Maybe we could get super-intelligences that we actually ought to care a lot about, but we can’t recognize them at all because they’re so utterly different from ourselves.

So there are some easy ways of modifying how we think and react – for example by taking a drug. The hormone oxytocin is sometimes called ‘the cuddle hormone’ – it’s released when breastfeeding and when having bodily contact with your loved one, and it generally seems to make us more altruistic; more willing to trust strangers. You can sniff it, run an economic game, and immediately see a change in response. It might also make you a bit more ego-centric: it does enlarge feelings of comfort and family friendliness – except that it’s only within what you consider to be your family. So we might want to tweak that.

Similarly we might think about adding links to our brains that allow us to think in better ways. After all, morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions that allow our higher-order values to control our lower order values is also important, that might actually require us to literally rewire or have biochips that help us do it.

But most important is that we need the information we need to retrain the subtle networks in a brain in order to think better. And that’s going to require something akin to therapy – it might not necessarily be about lying on a sofa and telling your psychologist about your mother. It might very well be a bit of training, a bit of cognitive enhancement, maybe a bit of brain scanning – to figure out what actually ails you. It’s probably going to look very very different from anything Freud or anybody else envisioned for the future.

But I think in the future we’re actually going to try to modify ourselves so we’re going to be extra certain, maybe even extra moral, so we can function in a complex big world.

 

Related Papers

Neuroenhancement of Love and Marriage: The Chemicals Between Us

Anders contributed to this paper ‘Neuroenhancement of Love and Marriage: The Chemicals Between Us‘. This paper reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment, so called neuroenhancement of love. We examine the arguments for and against these biological interventions to influence love. We argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.

Human Engineering and Climate Change

Anders also contributed to the paper “Human Engineering and Climate Change” which argues that cognitive, moral and biological enhancement could increase human ecological sustainability.

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org

The Great Filter, a possible explanation for the Fermi Paradox – interview with Robin Hanson

I grew up wondering about the nature of alien life: what it might look like, what aliens might do, and whether we will discover any time soon. Yet aside from a number of conspiracy theories and conjecture about Tabby’s Star, so far we have not discovered any signs of life out there in the cosmos. Why is it so?
Given the Drake Equation (which attempts to quantify the likelihood and detectability of extraterrestrial civilizations), it seems as though the universe should be teeming with life. So where are all those alien civilizations?

The ‘L’ in the Drake equation (the length of time civilizations emit detectable signs into space) could be very long for a technologically advanced civilization – so why haven’t we detected any?

There are many alternative explanations for why we have not yet detected evidence of an advanced alien civilization, such as:
– The Rare Earth hypothesis – astrophysicist Michael H. Hart argues for a very narrow habitable zone based on climate studies.
– John Smart’s STEM theory
– Some form of transcendence

The universe is a pretty big place. If it’s just us, seems like an awful waste of space.
— Carl Sagan, Contact

 

Our observable universe being seemingly dead implies that expansionist civilizations are extremely rare; the vast majority of stuff that starts on the path of life never makes it, so there must be at least one ‘great filter’ that stops the majority of life from evolving into an expansionist civilization.

Peering into the history of biological evolution on earth, we see various convergences in evolution – ‘good tricks’ like the transition from single-cellular to multi-cellular life (which happened at least 14 times), eyes, wings etc. If we can see convergences both in evolution and in the types of tools various human colonies created after being geographically dispersed, we may be able to deduce something about the directions complex life tends to take; knowing which paths lead to technological adeptness could inform us about our future.

The ‘Great Filter’ – should we worry?

The theory is that, given estimates (including those of the Drake Equation), it’s not unreasonable to argue that there should have been more than enough time and space for cosmic expansionist civilizations (Kardashev type I, II, III and beyond) at least a billion years old to arise – and that at least one of their light cones should have intersected with ours. Somehow, they have been filtered out. Somehow, planets with life on them make some distance towards becoming spacefaring expansionist civs, but get stopped along the way. While we don’t know specifically what that great filter is, there have been many theories – and if the filter is real, it seems to have been very effective.

The argument in Robin’s piece ‘The Great Filter – Are We Almost Past It?’ is somewhat complex; here are some points I found interesting:

  • Life Will Colonize – taking hints from evolution and the behaviour of our human ancestors, it is feasible that our descendants will colonize the cosmos.
    • Looking at earth’s ecosystem, we see that life has consistently evolved to fill almost every ecological niche in the seas, on land and below. Humans as a single species have migrated from the African savannah to colonize most of the planet, filling new geographic and economic niches as the requisite technological reach is achieved to take advantage of reproductively useful resources.
    • We should expect humanity to expand to other parts of the solar system, then out into the galaxy, in so far as there exists motivation and freedom to do so. Even if most of society becomes wireheads or VR-addicted ‘navel gazers’, they will want more and more resources to fuel more and more powerful computers, and may also want to distribute civilization to avoid local disasters.
    • This indicates that alien life will attempt to do the same, and eventually, absent great filters, expand their civilization through the cosmos.
  • The Data Point – future technological advances will likely enable civilization to expand ‘explosively’ fast (relative to cosmological timescales) throughout the cosmos – however we as yet have no evidence of this happening, and if there were available evidence, we would likely have detected it by now – much of the argument for the great filter follows from this.
    • Within at most the next million years (absent filters) it is foreseeable that our civilization may reach an “explosive point”, rapidly expanding outwards to utilize more and more available mass and energy resources.
    • Civilization will ‘scatter & adapt’ to expand well beyond the reach of any one large catastrophe (e.g. a supernova) to avoid total annihilation.
    • Civilization will recognisably disturb the places it colonizes, adapting the environment into ideal structures (e.g. creating orbiting solar collectors, Dyson spheres or Matrioshka brains, thereby substantially changing the star’s spectral output and appearance; really advanced civs may even attempt wholesale reconstruction of galaxies).
    • But we haven’t detected an alien takeover on our planet, or seen anything in the sky to reflect expansionist civs – even if earth or our solar system were kept in a ‘nature preserve’ (look up the “Zoo Hypothesis”), we should be able to see evidence in the sky of aggressive colonization of other star systems. Despite great success stories in explaining how natural phenomena in the cosmos work (mostly “dead” physical processes), we see no convincing evidence of alien life.
  • The Great Filter – ‘The Great Silence’ implies that at least one of the 9 steps to achieving an advanced expansionist civilization (outlined below) is very improbable; somewhere between dead matter and explosive growth lies The Great Filter.
    1. The right star system (including organics)
    2. Reproductive something (e.g. RNA)
    3. Simple (prokaryotic) single-cell life
    4. Complex (archaeatic & eukaryotic) single-cell life
    5. Sexual reproduction
    6. Multi-cell life
    7. Tool-using animals with big brains
    8. Where we are now
    9. Colonization explosion
  • Someone’s Story is Wrong / It Matters Who’s Wrong – the great silence, as mentioned above, seems to indicate that one or more of the plausible-sounding stories we have about the transitions through each of the 9 steps above is less probable than it looks, or just plain wrong. To the extent that the evolutionary steps to achieve our civilization were easy, our future success in achieving a technologically advanced / superintelligent / explosively expansionist civilization is highly improbable. Realising this may help inform how we strategize our future.
    • Some scientists think that transitioning from prokaryotic (single-celled) life to archaeal or eukaryotic life is rare – though it seems it has happened at least 42 times
    • Even if most of society wants to stagnate or slow to stable speeds of expansion, it’s not infeasible that some part of our civ will escape and rapidly expand
    • Optimism about our future opposes optimism about the ease with which life can evolve to where we are now.
    • Being aware of the Great Filter may at least help us improve our chances
  • Reconsidering Biology – several potentially hard trial-and-error steps lie between dead matter and modern humans (life, complexity, sex, society, cradle and language, etc.) – the harder they were, the more likely they can account for the great silence
  • Reconsidering AstroPhysics – physical phenomena which might reduce the likelihood we would see evidence of an expansionist civ
    • Fast space travel may be more difficult than expected, even for superintelligence; the lower the maximum speed, the more it could account for the great silence.
    • The universe could be relatively smaller than we think, containing fewer star systems
    • There could be natural ‘baby universes’ which erupt with huge amounts of matter/energy, which keep expansionist civs occupied or effectively trapped
    • Harvesting energy on a large scale may be impossible, or the way in which it is done always preserves natural spectra
    • Advanced life may consistently colonize dark matter
  • Rethinking Social Theories – in order for advanced civs to be achieved, they must first lose ‘predispositions to territoriality and aggression’, making them ‘less likely to engage in galactic imperialism’
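To make the filter logic concrete, here is a sketch of my own (not from Hanson's paper), with entirely invented numbers: model the nine steps above as independent transition probabilities. The expected number of expansionist civilizations is then the number of candidate star systems times the product of the step probabilities, so a single sufficiently hard step is enough to leave the sky silent:

```python
from math import prod

# Hypothetical per-step success probabilities for the nine transitions
# listed above -- placeholder values, not actual estimates.
steps = [0.5] * 8 + [1e-12]   # eight "easy" steps, one very hard step
candidate_systems = 1e11      # rough order of the number of stars in the galaxy

# Expected number of expansionist civilizations under these assumptions.
expected_civs = candidate_systems * prod(steps)
print(expected_civs < 1)  # True: one hard step silences ~10^11 candidates
```

The same product with the hard step removed yields hundreds of millions of civilizations, which is the tension the Great Filter argument turns on: the hard step (or steps) must be hard enough to cancel out an astronomically large number of starting points.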

We can’t detect expansionist civs, and our default assumption is that there has been plenty of time and hospitable space for advanced life to arise – especially if you agree with panspermia, the idea that life could be seeded by precursors on roaming cosmic bodies (e.g. comets), resulting in more life-bearing planets. We can posit plausible reasons for a series of filters which slow down or halt the evolutionary progress that would otherwise finally arrive at technologically savvy life capable of expansionist civs – but why would every instance of life be stopped by them?

It seems like we as a technologically capable species are on the verge of having our civilization escape earth’s gravity well and go spacefaring – so how far along the great filter are we?

Though it’s been thought to be less accurate than some of its predecessors, and more of a rallying point, let us revisit the Drake Equation anyway, because it’s a good tool for helping understand the apparent contradiction between high probability estimates for the existence of extraterrestrial civilizations and the complete lack of evidence that such civilizations exist.

The number of active, communicative extraterrestrial civilizations in the Milky Way galaxy, N, is assumed to be equal to the mathematical product of:

  1. R*, the average rate of star formation in our galaxy,
  2. fp, the fraction of formed stars that have planets,
  3. ne, the average number of planets that can potentially support life, per star that has planets,
  4. fl, the fraction of those planets that actually develop life,
  5. fi, the fraction of life-bearing planets on which intelligent, civilized life has developed,
  6. fc, the fraction of these civilizations that have developed communications, i.e., technologies that release detectable signs into space, and
  7. L, the length of time over which such civilizations release detectable signals,
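As a toy illustration, the product above can be written out directly. Every parameter value below is a made-up placeholder of my own (the article gives no estimates, and the whole point is that these terms are poorly constrained):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* . fp . ne . fl . fi . fc . L."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Purely hypothetical placeholder values -- not estimates from this article.
N = drake(R_star=10,   # stars formed per year in the galaxy
          f_p=0.5,     # fraction of stars with planets
          n_e=2,       # potentially habitable planets per such star
          f_l=0.25,    # fraction that actually develop life
          f_i=0.125,   # fraction of those that develop intelligence
          f_c=0.5,     # fraction that become detectable
          L=8000)      # years a civilization stays detectable
print(N)  # with these inputs, N = 1250.0 communicative civilizations
```

Because N is a plain product, shrinking any single factor shrinks N proportionally – which is why a small ‘L’ on its own could explain the silence.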

 

Which of the values on the right-hand side of the equation (1 to 7 above) are the biggest reasons (or most significant filters) for the ‘N’ value (the estimated number of alien civilizations in our galaxy capable of communication) being so small? If a substantial amount of the great filter is explained by ‘L’, then we are in trouble, because the length of time expansionist civs emit signals likely correlates with how long they survive before disappearing (which we can assume likely means going extinct, though there are other possible explanations for going silent). If other civs don’t seem to last long, then we can infer statistically that our civ might not either. The larger the remaining filter ahead of us, the more cautious and careful we ought to be to avoid potential show-stoppers.

So let’s hope that the great filter – or at least a substantial proportion of it – is behind us, meaning that the seemingly rare occurrence of expansionist civs is because the emergence of intelligent life is rare, rather than because the time expansionist civs exist is short.

The more we develop our theories about the potential behaviours of expansionist civs, the more we may expand upon or adapt the ‘L’ term of the Drake Equation.

Many of the parameters in the Drake Equation are really hard to quantify. Exoplanet data from the Kepler Telescope has already been used to adapt the Drake Equation – based on this data, there seem to be far more potentially Earth-like habitable planets within our galaxy than previously thought, which both excites me, because news about alien life is exciting, and frustrates me, because it decreases the odds that the larger portion of the great filter is behind us.
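That frustration can be made concrete with a toy decomposition (all numbers below are illustrative assumptions, not estimates from the literature): the total filter is the product of the hurdles behind us and the hurdles ahead of us, so evidence that the past hurdles are smaller forces our estimate of the future hurdle up.

```python
# Toy decomposition of the Great Filter. The sky looking dead pins the
# total per-planet pass probability at some tiny value; what we learn
# about the past portion redistributes the remainder to our future.
p_total = 1e-18  # assumed chance a given dead planet begets an expanding civ
p_past = 1e-15   # assumed chance of clearing the hurdles behind us
                 # (abiogenesis, intelligence, technology)

# Whatever the total filter is, what's left must lie ahead of us:
p_future = p_total / p_past  # ~1e-3 chance of surviving the remaining filter

# If Kepler-style data raises our estimate of p_past (habitable planets
# and life look more common), p_future falls: the filter shifts ahead of us.
p_past_revised = 1e-12
p_future_revised = p_total / p_past_revised  # ~1e-6: worse odds for us
```

The arithmetic is trivial; the uncomfortable part is that good news about life elsewhere is, under this framing, bad news about our own survival odds.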

Only by doing the best we can with the very best that an era offers, do we find the way to do better in the future. – Frank Drake, A Reminiscence of Project Ozma, Cosmic Search Vol. 1, No. 1, January 1979

Interview

…we should remember that the Great Filter is so very large that it is not enough to just find some improbable steps; they must be improbable enough. Even if life only evolves once per galaxy, that still leaves the problem of explaining the rest of the filter: why we haven’t seen an explosion arriving here from any other galaxies in our past universe? And if we can’t find the Great Filter in our past, we’ll have to fear it in our future. – Robin Hanson, The ‘Great Filter’ – should we worry?

As stated on the Overcoming Bias blog:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.

“What’s the worst that could happen?” – in 1996 (revised in 1998) Robin Hanson wrote:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begating such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we? – Robin Hanson, The Great Filter – Are We Almost Past It?
If the ‘Great Filter’ is ahead of us, we could fatalistically resign ourselves to the view that human priorities are too skewed to coordinate towards avoiding being ‘filtered’, or we can try to do something to decrease the odds of being filtered. To coordinate our way around a great filter we need to have some idea of plausible filters.
How may a future great filter manifest?
– Reapers (mass effect)?
– Berserker probes sent out to destroy any up-and-coming civilization that reaches a certain point? (A malevolent alien teenager in their basement could have seeded self-replicating berserker probes as a ‘practical joke’)
– A robot takeover? (If this has been the cause of great filters in the past, then why don’t we see evidence of expansionist robot civilizations? See here. Also, if the two major end states of life are either dead or a genocidal intelligence explosion, and we aren’t the first, then it is speculated that we should live in a young universe.)

Robin Hanson gave a TedX talk on the Great Filter:

Bio

Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. After receiving his Ph.D. in social science from the California Institute of Technology in 1997, Robin was a Robert Wood Johnson Foundation health policy scholar at the University of California at Berkeley. In 1984, Robin received a master’s in physics and a master’s in the philosophy of science from the University of Chicago, and afterward spent nine years researching artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and independently.

Robin has over 70 publications, and has pioneered prediction markets since 1988, being a principal architect of the first internal corporate markets, at Xanadu in 1990, of the first web markets, the Foresight Exchange since 1994, of DARPA’s Policy Analysis Market, from 2001 to 2003, and of Daggre/Scicast, since 2010.

Links

Robin Hanson’s 1998 revision on the paper he wrote on the Great Filter in 1996
– The Drake Equation at connormorency (where I got the Drake equation image – thanks)
Slate Star Codex – Don’t Fear the Filter
Ask Ethan: How Fast Could Life Have Arisen In The Universe?
Keith Wiley – The Fermi Paradox, Self-Replicating Probes, Interstellar Transport Bandwidth

Another Milestone in Achieving Brain Preservation & Whole Brain Emulation

A technology designed to preserve synapses across the whole brain of a large mammal is successful – covered in this interview with Keith Wiley, Fellow of the Brain Preservation Foundation.
(see below)
In an announcement from the Brain Preservation Foundation, its president Ken Hayworth writes:

Using a combination of ultrafast glutaraldehyde fixation and very low temperature storage, researchers have demonstrated for the first-time ever a way to preserve a brain’s connectome (the 150 trillion synaptic connections presumed to encode all of a person’s knowledge) for centuries-long storage in a large mammal. This laboratory demonstration clears the way to develop Aldehyde-Stabilized Cryopreservation into a ‘last resort’ medical option, one that would prevent the destruction of the patient’s unique connectome, offering at least some hope for future revival via mind uploading. [ref]

The neuroscience and medical communities should begin an open debate regarding ASC’s ability to preserve the information content of the brain. – BPF President Ken Hayworth

The significance of Aldehyde-Stabilized Cryopreservation as a means to achieve future revival is hotly debated among neuroscientists, cryonicists, futurists, philosophers, and likely some concerned clergymen of various persuasions. Keith Wiley (a fellow at the BPF) reached out to me to do an interview on the subject – always eager to help fan the flames, I enthusiastically accepted. I also happen to think that the topic is very important (see my previous interviews with Kenneth Hayworth: on the first small-mammal preservation prize being won, ‘Verifiable Brain Preservation’, and a two-part epic interview on brain preservation: see part 1 and part 2).

Interview with Keith Wiley

Discussing the Brain Preservation Foundation’s announcement of the large mammal prize and related topics.


Topics covered:


– 1000ft view: What/why research brain preservation?
– The burning of the library of Alexandria was an unfortunate loss of knowledge. How can we be so complacent about brain death?
– Where are we at? Neuroscience imaging technology is preparing to map entire insect and small mammal brains at the nanometer scale using ultrafast electron microscopes, with the near-term goal of reading memories.
– Aldehyde-Stabilized Cryopreservation: what is it? How does it work?
– Previous small-mammal brain preservation prize won in 2016 – how does large-brain one differ? Extra proof of concept? How is it emblematic of progress?
– The difference between biological and uploaded revival (because the award-winning technique that made the news can’t be reversed for biological revival) – Ship of Theseus / Grandfather’s Axe
– The BPF’s heavy interest in gaining scientific credibility for brain preservation through peer-reviewed publications and research, and through objective investigation of preserved brains for verification – and the BPF’s lack of confidence in relying on futuristic nanotechnology to repair any damage caused by the preservation process (which cryonics folks generally rely on when told their process might damage the brain)

Brain Preservation Foundation: http://www.brainpreservation.org/

About the BPF: The Brain Preservation Foundation is a non-profit organization with the goal of furthering research in whole brain preservation. The BPF does not currently support the offering of ASC, or any other preservation method, to human patients. This single Prize winning laboratory demonstration is insufficient to address the types of quality control measures that should be expected of any procedure that would be applied to humans. BPF president Kenneth Hayworth has released a document outlining his position on what should be expected prior to any such offering.


About Keith Wiley: he is a fellow with the Brain Preservation Foundation and a board member with Carboncopies, which promotes research and development into whole brain emulation. He has written several articles and papers on the philosophy of mind uploading. His book, A Taxonomy and Metaphysics of Mind-Uploading, is available on Amazon. Keith’s website is http://keithwiley.com.

A link to the associated text chatroom discussion (which seems to disappear after the live event ends) is here.

The Debate Rages On!

The BPF prize kindles debates around the world on

  • which brain preservation techniques actually work, and how do we verify this?
  • what are the best roadmaps to achieve viable brain preservation, with a view to achieving individual survival beyond our current understanding of biological death?
  • and ultimately, if ‘technological resurrection’ were possible, should we allow it?

All very healthy debates to be having!

See PrWeb’s article
Aldehyde-Stabilized Cryopreservation Wins Final Phase of Brain Preservation Prize
.

The significance of this Prize win is sure to be debated. Those who dismiss the possibility of future mind uploading will likely view ASC as simply the high-quality embalming and cold storage of a deceased body—an utter waste of time and resources. On the other hand, those who expect that humanity will eventually develop mind uploading technology are more likely to view ASC as perhaps their best chance to survive and reach that future world. It may take decades or even centuries to develop the technology to upload minds if it is even possible at all. ASC would enable patients to safely wait out those centuries. For now, neuroscience is actively exploring the plausibility of mind uploading through ongoing studies of the physical basis of memory, and through development of large-scale neural simulations and tools to map connectomes. This Prize win should shine a spotlight on such neuroscience research, underscoring its importance to humanity.

ASHBURN, Va. (PRWEB) March 13, 2018

Fight Aging’s alternative view on the preservation of pattern and continuity is summarized here:

For those of us who adhere to the alternative viewpoint, the continuity theory of identity, the self is the combination of the pattern and its implementation in a specific set of matter: it is this mind as encoded in this brain. A copy is a copy, a new entity, not the self. Discarding the stored brain is death. The goal in the continuity theory view is to use some combination of future biotechnology and nanotechnology to reverse the storage methodology, repair any damage accumulated in the brain, and house it in a new body, restoring that individual to life.

I point this out because adoption of pattern versus continuity views of identity should determine an individual’s view of the utility of vitrifixation for brain preservation. The primary point to consider here is that chemical fixation is a good deal less reversible than present day vitrification, low temperature storage with cryoprotectants. The reversible vitrification of organs is a near-future goal for a number of research groups. But reversing chemical fixation would require advanced molecular nanotechnology at the very least – it is in principle possible, but far, far distant in our science fiction future. The people advocating vitrifixation are generally of the pattern identity persuasion: they want, as soon as possible, a reliable, highest quality means of preserving the data of the mind. It doesn’t matter to them that it is effectively irreversible, as they aren’t hoping to use the brain again after the fact. – Fight Aging

Also see the Alcor Position Statement on Brain Preservation Foundation Prize.

While ASC produces clearer images than current methods of vitrification without fixation, it does so at the expense of being toxic to the biological machinery of life by wreaking havoc on a molecular scale. Chemical fixation results in chemical changes (the same as embalming) that are extreme and difficult to evaluate in the absence of at least residual viability. Certainly, fixation is likely to be much harder to reverse so as to restore biological viability as compared to vitrification without fixation. Fixation is also known to increase freezing damage if cryoprotectant penetration is inadequate, further adding to the risk of using fixation under non-ideal conditions that are common in cryonics. Another reason for lack of interest in pursuing this approach is that it is a research dead end on the road to developing reversible tissue preservation in the nearer future.

– Alcor President, Max More

 

No doubt these issues will inspire heated discussion, and challenge some of the core assumptions of what it means to be human, to have a thinking & self-aware mind, to be alive, and to die. Though some cherished beliefs may be bruised in the process, I believe humanity will be better for it – especially if brain preservation technologies actually do work.

Medical time-travel for the win!


Many thanks for reading/watching!

Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture

b) Donating
– Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
– Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
– Patreon: https://www.patreon.com/scifuture

c) Sharing the media SciFuture creates: http://scifuture.org

Kind regards,
Adam Ford
– Science, Technology & the Future

The Point of View of the Universe – Peter Singer

Peter Singer discusses the new book ‘The Point of View of the Universe – Sidgwick and Contemporary Ethics’ (by Katarzyna de Lazari-Radek and Peter Singer). He also discusses his reasons for changing his mind about preference utilitarianism.

 

Buy the book here: http://ukcatalogue.oup.com/product/97…
Bart Schultz’s (University of Chicago) review of the book: http://ndpr.nd.edu/news/49215-he-poin…

“Restoring Sidgwick to his rightful place of philosophical honor and cogently defending his central positions are obviously no small tasks, but the authors are remarkably successful in pulling them off, in a defense that, in the case of Singer at least, means candidly acknowledging that previous defenses of Hare’s universal prescriptivism and of a desire or preference satisfaction theory of the good were not in the end advances on the hedonistic utilitarianism set out by Sidgwick. But if struggles with Singer’s earlier selves run throughout the book, they are intertwined with struggles to come to terms with the work of Derek Parfit, both Reasons and Persons (Oxford, 1984) and On What Matters (Oxford, 2011), works that have virtually defined the field of analytical rehabilitations of Sidgwick’s arguments. The real task of The Point of View of the Universe — the title being an expression that Sidgwick used to refer to the impartial moral point of view — is to defend the effort to be even more Sidgwickian than Parfit, and, intriguingly enough, even more Sidgwickian than Sidgwick himself.”

One Big Misconception About Consciousness – Christof Koch

Christof Koch (Allen Institute for Brain Science) discusses Shannon information and its theoretical limitations in explaining consciousness –

Information Theory misses a critical aspect of consciousness. – Christof Koch

Christof argues that we don’t need observers (other people, god, etc.) in order to have conscious experiences. The assumptions underlying traditional information theory involve Shannon information, and a big misconception about the structure of consciousness stems from this – the assumption that Shannon information is enough to explain consciousness. Shannon information is about “sending information from a channel to a receiver – consciousness isn’t about sending anything to anybody.” So what other kind of information is there?

The ‘information’ in Integrated Information Theory (IIT) does not refer to Shannon information.  Etymologically, the word ‘information’ derives from ‘informare’ – “it refers to information in the original sense of the word ‘Informare’ – to give form to” – that is to give form to a high dimensional structure.
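To make the sender–receiver notion concrete, here is a minimal sketch of Shannon entropy – the average number of bits a receiver needs to identify which message a source sent. This is a standard textbook quantity, shown only to illustrate the channel-centric notion of information that Koch says consciousness is not about:

```python
import math

def shannon_entropy(probs):
    """Average bits a receiver needs to identify which message was sent,
    given the probability of each possible message."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

shannon_entropy([0.5, 0.5])      # fair coin source: 1 bit per symbol
shannon_entropy([0.25] * 4)      # four equally likely messages: 2 bits
shannon_entropy([1.0])           # a certain message carries 0 bits
```

Everything here is defined relative to an external receiver resolving uncertainty about a transmitted message – exactly the framing IIT’s intrinsic notion of information rejects.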

 

 

It’s worth noting that many disagree with Integrated Information Theory – including Scott Aaronson – see here, here and here.

 

See interview below:

“It’s a theory that proceeds from phenomenology to as it were mechanisms in physics”.

IIT is also described in Christof Koch’s ‘Consciousness: Confessions of a Romantic Reductionist’.

Axioms and postulates of integrated information theory

There are 5 axioms / essential properties of the experience of consciousness that are foundational to IIT – the intent is to capture the essential aspects of all conscious experience. Each axiom should apply to every possible experience.

  • Intrinsic existence: Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual).
  • Composition: Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
  • Information: Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.
  • Integration: Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book.
  • Exclusion: Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure. Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.
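The integration axiom, in particular, can be given a toy numerical flavour. This is not IIT’s formal Φ – computing Φ requires evaluating cause–effect structure over all partitions of a system – but a simple mutual-information analogue shows the idea that the whole can carry information that is lost when the system is cut into independent parts:

```python
import math
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy (bits) of a list of observed states."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

# Two perfectly correlated binary elements: only (0,0) and (1,1) ever occur,
# each equally often.
states = [(0, 0), (1, 1)] * 50

h_joint = entropy(states)  # the whole system: 1 bit
h_parts = entropy([s[0] for s in states]) + entropy([s[1] for s in states])  # 2 bits

# The gap is the interdependence lost by cutting the system in two:
irreducibility = h_parts - h_joint  # 1 bit
```

A system whose parts were genuinely independent would show a gap of zero; here the positive gap reflects that the joint state is irreducible to its parts, which is the intuition the integration axiom formalizes (far more carefully) in IIT.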

So, does IIT solve what David Chalmers calls the “Hard Problem of consciousness”?

Christof Koch  is an American neuroscientist best known for his work on the neural bases of consciousness. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

This interview is a short section of a larger interview which will be released at a later date.

Amazing Progress in Artificial Intelligence – Ben Goertzel

At a recent conference in Beijing (the Global Innovators Conference) I did yet another video interview with the legendary AGI guru Ben Goertzel. This is the first part of the interview, where he talks about some of the ‘amazing’ progress in AI over recent years, including DeepMind’s AlphaGo sealing a 4–1 victory over Go grandmaster Lee Sedol, progress in hybrid architectures in AI (deep learning, reinforcement learning, etc.), interesting academic research in AI being taken up by tech giants, and finally some sobering remarks on the limitations of deep neural networks.

Consciousness in Biological and Artificial Brains – Prof Christof Koch

Event Description: Human and non-human animals not only act in the world but are capable of conscious experience. That is, it feels like something to have a brain and be cold, angry or see red. I will discuss the scientific progress that has been achieved over the past decades in characterizing the behavioral and the neuronal correlates of consciousness, based on clinical case studies as well as laboratory experiments. I will introduce the Integrated Information Theory (IIT) that explains in a principled manner which physical systems are capable of conscious, subjective experience. The theory explains many biological and medical facts about consciousness and its pathologies in humans, can be extrapolated to more difficult cases, such as fetuses, mice, or non-mammalian brains and has been used to assess the presence of consciousness in individual patients in the clinic. IIT also explains why consciousness evolved by natural selection. The theory predicts that deep convolutional networks and von Neumann computers would experience next to nothing, even if they perform tasks that in humans would be associated with conscious experience and even if they were to run software faithfully simulating the human brain.

[Meetup Event Page]

Supported by The Florey Institute of Neuroscience & Mental Health, the University of Melbourne and the ARC Centre of Excellence for Integrative Brain Function.

 

 

Who: Prof Christof Koch, President and Chief Scientific Officer, Allen Institute for Brain Sciences, Seattle, USA

Venue: Melbourne Brain Centre, Ian Potter Auditorium, Ground Floor, Kenneth Myer Building (Building 144), Genetics Lane, 30 Royal Parade, University of Melbourne, Parkville

This will be of particular interest to those who know of David Pearce, Andreas Gomez, Mike Johnson and Brian Tomasik’s works – see this online panel: