
The Antispeciesist Revolution – read by David Pearce

The Antispeciesist Revolution


Speciesism.
When is it ethically acceptable to harm another sentient being? On some fairly modest(1) assumptions, to harm or kill someone simply on the grounds they belong to a different gender, sexual orientation or ethnic group is unjustified. Such distinctions are real but ethically irrelevant. On the other hand, species membership is normally reckoned an ethically relevant criterion. Fundamental to our conceptual scheme is the pre-Darwinian distinction between “humans” and “animals”. In law, nonhuman animals share with inanimate objects the status of property. As property, nonhuman animals can be bought, sold, killed or otherwise harmed as humans see fit. In consequence, humans treat nonhuman animals in ways that would earn a life-time prison sentence without parole if our victims were human. From an evolutionary perspective, this contrast in status isn’t surprising. In our ancestral environment of adaptedness, the human capacity to hunt, kill and exploit sentient beings of other species was fitness-enhancing(2). Our moral intuitions have been shaped accordingly. Yet can we ethically justify such behaviour today?

Naively, one reason for disregarding the interests of nonhumans is the dimmer-switch model of consciousness. Humans matter more than nonhuman animals because (most) humans are more intelligent. Intuitively, more intelligent beings are more conscious than less intelligent beings; consciousness is the touchstone of moral status.

The problem with the dimmer-switch model is that it’s empirically unsupported, among vertebrates with central nervous systems at least. Microelectrode studies of the brains of awake human subjects suggest that the most intense forms of experience, for example agony, terror and orgasmic bliss, are mediated by the limbic system, not the prefrontal cortex. Our core emotions are evolutionarily ancient and strongly conserved. Humans share the anatomical and molecular substrates of our core emotions with the nonhuman animals whom we factory-farm and kill. By contrast, distinctively human cognitive capacities such as generative syntax, or the ability to do higher mathematics, are either phenomenologically subtle or impenetrable to introspection. To be sure, genetic and epigenetic differences exist between, say, a pig and a human being that explain our adult behavioural differences, e.g. the allele of the FOXP2(3) gene implicated in the human capacity for recursive syntax. Such mutations have little to do with raw sentience(4).

Antispeciesism.
So what is the alternative to traditional anthropocentric ethics? Antispeciesism is not the claim that “All Animals Are Equal”, or that all species are of equal value, or that a human or a pig is equivalent to a mosquito. Rather the antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect. A pig, for example, is of comparable sentience to a prelinguistic human toddler. As it happens, a pig is of comparable (or superior) intelligence to a toddler as well(5). However, such cognitive prowess is ethically incidental. If ethical status is a function of sentience, then to factory-farm and slaughter a pig is as ethically abhorrent as to factory-farm and slaughter a human baby. To exploit one and nurture the other expresses an irrational but genetically adaptive prejudice.

On the face of it, this antispeciesist claim isn’t just wrong-headed; it’s absurd. Philosopher Jonathan Haidt speaks of “moral dumbfounding”(6), where we just know something is wrong but can’t articulate precisely why. Haidt offers the example of consensual incest between an adult brother and sister who use birth control. For evolutionary reasons, we “just know” such an incestuous relationship is immoral. In the case of comparisons of pigs with human infants and toddlers, we “just know” at some deep level that any alleged equivalence in status is unfounded. After all, if there were no ethically relevant distinction between a pig and a toddler, or between a battery-farmed chicken and a human infant, then the daily behaviour of ordinary meat-eating humans would be sociopathic – which is crazy. In fact, unless the psychiatrists’ bible, the Diagnostic and Statistical Manual of Mental Disorders, is modified explicitly to exclude behaviour towards nonhumans, most of us do risk satisfying its diagnostic criteria for the disorder. Even so, humans often conceive of ourselves as animal lovers. Despite the horrors of factory-farming, most consumers of meat and animal products are clearly not sociopaths in the normal usage of the term; most factory-farm managers are not wantonly cruel; and the majority of slaughterhouse workers are not sadists who delight in suffering. Serial killers of nonhuman animals are just ordinary men doing a distasteful job – “obeying orders” – on pain of losing their livelihoods.

Should we expect anything different? Jewish political theorist Hannah Arendt spoke famously of the “banality of evil”(7). If twenty-first century humans are collectively doing something that posthuman superintelligence will reckon monstrous, akin to the [human] Holocaust or Atlantic slave trade, then it’s tempting to assume that our moral intuitions would disclose this to us. Our intuitions don’t disclose anything of the kind; so we sleep easy. But both natural selection and the historical record offer powerful reasons for doubting the trustworthiness of our naive moral intuitions. So the possibility that human civilisation might be founded upon some monstrous evil should be taken seriously – even if the possibility seems transparently absurd at the time.

One possible speciesist response is to raise the question of “potential”. Even if a pig is as sentient as a human toddler, there is a fundamental distinction between human toddlers and pigs. Only a toddler has the potential to mature into a rational adult human being.

The problem with this response is that it contradicts our treatment of humans who lack “potential”. Thus we recognise that a toddler with a progressive disorder who will never live to celebrate his third birthday deserves at least as much love, care and respect as his normally developing peers – not to be packed off to a factory-farm on the grounds it’s a shame to let good food go to waste. We recognise a similar duty of care for mentally handicapped adult humans and cognitively frail old people. For sure, historical exceptions exist to this perceived duty of care for vulnerable humans, e.g. the Nazi “euthanasia” program, with its eugenicist conception of “life unworthy of life”. But by common consent, we value young children and cognitively challenged adults for who they are, not simply for who they may – or may not – one day become. On occasion, there may controversially be instrumental reasons for allocating more care and resources to a potential genius or exceptionally gifted child than to a normal human. Yet disproportionate intraspecies resource allocation may be justified, not because high IQ humans are more sentient, but because of the anticipated benefits to society as a whole.

Practical Implications.
1. Invitrotarianism.

The greatest source of severe, chronic and readily avoidable suffering in the world today is man-made: factory farming. Humans currently slaughter over fifty billion sentient beings each year. One implication of an antispeciesist ethic is that factory farms should be shut and their surviving victims rehabilitated.

In common with most ethical revolutions in history, the prospect of humanity switching to a cruelty-free diet initially strikes most practically-minded folk as utopian dreaming. “Realists” certainly have plenty of hard evidence to bolster their case. As English essayist William Hazlitt observed, “The least pain in our little finger gives us more concern and uneasiness than the destruction of millions of our fellow-beings.” Without the aid of twenty-first century technology, the mass slaughter and abuse of our fellow animals might continue indefinitely. Yet tissue science technology promises to allow consumers to become moral agents without the slightest hint of personal inconvenience. Lab-grown in vitro meat produced in cell culture rather than a live animal has long been a staple of science fiction. But global veganism – or its ethical invitrotarian equivalent – is no longer a futuristic fantasy. Rapid advances in tissue engineering mean that in vitro meat will shortly be developed and commercialised. Today’s experimental cultured mincemeat can be supplanted by mass-manufactured gourmet steaks for the consumer market. Perhaps critically for its rapid public acceptance, in vitro meat does not need to be genetically modified – thereby spiking the guns of techno-luddites who might otherwise worry about “FrankenBurgers”. Indeed, cultured meat products will be more “natural” in some ways than their antibiotic-laced counterparts derived from factory-farmed animals.

Momentum for commercialisation is growing. Non-profit research organisations like New Harvest(8), working to develop alternatives to conventionally-produced meat, have been joined by hard-headed businessmen. Visionary entrepreneur and Stanford academic Peter Thiel has just funnelled $350,000 into Modern Meadow(9), a start-up that aims to combine 3D printing with in vitro meat cultivation. Within the next decade or so, gourmet steaks could be printed out from biological materials. In principle, the technology should be scalable.

Tragically, billions of nonhuman animals will grievously suffer and die this century at human hands before the dietary transition is complete. Humans are not obligate carnivores; eating meat and animal products is a lifestyle choice. “But I like the taste!” is not a morally compelling argument. Vegans and animal advocates ask whether we are ethically entitled to wait on a technological fix. The antispeciesist answer is clear: no.

2. Compassionate Biology.
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on YouTube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants(10), for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming”(11) carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions(12).

Speciesism and Superintelligence.
Why should transhumanists care about the suffering of nonhuman animals? This is not a “feel-good” issue. One reason we should care cuts to the heart of the future of life in the universe. Transhumanists differ over whether our posthuman successors will most likely be nonbiological artificial superintelligence; or cyborgs who effectively merge with our hyperintelligent machines; or our own recursively self-improving biological descendants who modify their own genetic source code and bootstrap their way to full-spectrum superintelligence(13). Regardless of the dominant lifeform of the posthuman era, biological humans have a vested interest in the behaviour of intellectually advanced beings towards cognitively humble creatures – if we survive at all. Compared to posthuman superintelligence, archaic humans may be no smarter than pigs or chickens – or perhaps worms. This does not augur well for Homo sapiens. Western-educated humans tend to view Jains as faintly ridiculous for practising ahimsa, or harmlessness, sweeping the ground in front of them to avoid inadvertently treading on insects. How quixotic! Yet the fate of sentient but cognitively humble lifeforms in relation to vastly superior intelligence is precisely the issue at stake as we confront the prospect of posthuman superintelligence. How can we ensure a Jain-like concern for comparatively simple-minded creatures such as ourselves? Why should superintelligences care any more than humans about the well-being of their intellectual inferiors? Might distinctively human-friendly superintelligence turn out to be as intellectually-incoherent as, say, Aryan-friendly superintelligence? If human primitives are to prove worthy of conservation, how can we implement technologies of impartial friendliness towards other sentients? And if posthumans do care, how do we know that a truly benevolent superintelligence wouldn’t turn Darwinian life into utilitronium with a communal hug?

Viewed in such a light, biological humanity’s prospects in a future world of superintelligence might seem dire. However, this worry expresses a one-dimensional conception of general intelligence. No doubt the nature of mature superintelligence is humanly unknowable. But presumably full-spectrum(14) superintelligence entails, at the very least, a capacity to investigate, understand and manipulate both the formal and the subjective properties of mind. Modern science aspires to an idealised “view from nowhere”(15), an impartial, God-like understanding of the natural universe, stripped of any bias in perspective and expressed in the language of mathematical physics. By the same token, a God-like superintelligence must also be endowed with the capacity impartially to grasp all possible first-person perspectives – not a partial and primitive Machiavellian cunning of the kind adaptive on the African savannah, but an unimaginably radical expansion of our own fitfully growing circle of empathy.

What such superhuman perspective-taking ability might entail is unclear. We are familiar with people who display abnormally advanced forms of “mind-blind”(16), autistic intelligence in higher mathematics and theoretical physics. Less well known are hyper-empathisers who display unusually sophisticated social intelligence. Perhaps the most advanced naturally occurring hyper-empathisers exhibit mirror-touch synaesthesia(17). A mirror-touch synaesthete cannot be unfriendly towards you because she feels your pain and pleasure as if it were her own. In principle, such unusual perspective-taking capacity could be generalised and extended with reciprocal neuroscanning technology and telemetry into a kind of naturalised telepathy, both between and within species. Interpersonal and cross-species mind-reading could in theory break down hitherto invincible barriers of ignorance between different skull-bound subjects of experience, thereby eroding the anthropocentric, ethnocentric and egocentric bias that has plagued life on Earth to date. Today, the intelligence-testing community tends to treat facility at empathetic understanding as if it were a mere personality variable, or at best some sort of second-rate cognition for people who can’t do IQ tests. But “mind-reading” can be a highly sophisticated, cognitively demanding ability. Compare, say, the sixth-order intentionality manifested by Shakespeare. Thus we shouldn’t conceive of superintelligence as akin to a God imagined by someone with autistic spectrum disorder. Rather, full-spectrum superintelligence entails a God’s-eye capacity to understand the rich multi-faceted first-person perspectives of diverse lifeforms whose mind-spaces humans would find incomprehensibly alien.

An obvious objection arises. Just because ultra-intelligent posthumans may be capable of displaying empathetic superintelligence, how do we know such intelligence will be exercised? The short answer is that we don’t: by analogy, today’s mirror-touch synaesthetes might one day neurosurgically opt to become mind-blind. But then equally we don’t know whether posthumans will renounce their advanced logico-mathematical prowess in favour of the functional equivalent of wireheading. If they do so, then they won’t be superintelligent. The existence of diverse first-person perspectives is a fundamental feature of the natural world, as fundamental as the second law of thermodynamics or the Higgs boson. To be ignorant of fundamental features of the world is to be an idiot savant: a super-Watson(18) perhaps, but not a superintelligence(19).

High-Tech Jainism?
Jules Renard once remarked, “I don’t know if God exists, but it would be better for His reputation if He didn’t.” God’s conspicuous absence from the natural world needn’t deter us from asking what an omniscient, omnipotent, all-merciful deity would want humans to do with our imminent God-like powers. For we’re on the brink of a momentous evolutionary transition in the history of life on Earth. Physicist Freeman Dyson predicts we’ll soon “be writing genomes as fluently as Blake and Byron wrote verses”(20). The ethical risks and opportunities for apprentice deities are huge.

On the one hand, Karl Popper warns, “Those who promise us paradise on earth never produced anything but a hell”(21). Twentieth-century history bears out such pessimism. Yet for billions of sentient beings from less powerful species, existing life on Earth is hell. They end their miserable lives on our dinner plates: “for the animals it is an eternal Treblinka”, writes Jewish Nobel laureate Isaac Bashevis Singer(22).

In a more utopian vein, some utterly sublime scenarios are technically feasible later this century and beyond. It’s not clear whether experience below Sidgwick’s(23) “hedonic zero” has any long-term future. Thanks to molecular neuroscience, mastery of the brain’s reward circuitry could make everyday life wonderful beyond the bounds of normal human experience. There is no technical reason why the pitiless Darwinian struggle of the past half billion years can’t be replaced by an earthly paradise for all creatures great and small. Genetic engineering could allow “the lion to lie down with the lamb.” Enhancement technologies could transform killer apes into saintly smart angels. Biotechnology could abolish suffering throughout the living world. Artificial intelligence could secure the well-being of all sentience in our forward light-cone. Our quasi-immortal descendants may be animated by gradients of intelligent bliss orders of magnitude richer than anything physiologically feasible today.

Such fantastical-sounding scenarios may never come to pass. Yet if so, this won’t be because the technical challenges prove too daunting, but because intelligent agents choose to forgo the molecular keys to paradise for something else. Critically, the substrates of bliss don’t need to be species-specific or rationed. Transhumanists believe the well-being of all sentience(24) is the bedrock of any civilisation worthy of the name.

Also see this related interview with David Pearce on ‘Antispeciesism & Compassionate Stewardship’:

* * *
NOTES

1. How modest? A venerable tradition in philosophical meta-ethics is anti-realism. The meta-ethical anti-realist proposes that claims such as it’s wrong to rape women, kill Jews, torture babies (etc) lack truth value – or are simply false. (cf. JL Mackie, Ethics: Inventing Right and Wrong, Viking Press, 1977.) Here I shall assume that, for reasons we simply don’t understand, the pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Meta-ethical anti-realists may instead wish to interpret this critique of speciesism merely as casting doubt on its internal coherence rather than a substantive claim that a non-speciesist ethic is objectively true.

2. Extreme violence towards members of other tribes and races can be fitness-enhancing too. See, e.g. Richard Wrangham & Dale Peterson, Demonic Males: Apes and the Origins of Human Violence, Houghton Mifflin, 1997.

3. Fisher SE, Scharff C (2009). “FOXP2 as a molecular window into speech and language”. Trends Genet. 25 (4): 166–77. doi:10.1016/j.tig.2009.03.002. PMID 19304338.

4. Interpersonal and interspecies comparisons of sentience are of course fraught with problems. Comparative studies of how hard a human or nonhuman animal will work to avoid or obtain a particular stimulus give one crude behavioural indication. Yet we can go right down to the genetic and molecular level, e.g. interspecies comparisons of SCN9A genotype. (cf. http://www.pnas.org/content/early/2010/02/23/0913181107.full.pdf) We know that in humans the SCN9A gene modulates pain-sensitivity. Some alleles of SCN9A give rise to hypoalgesia, other alleles to hyperalgesia. Nonsense mutations yield congenital insensitivity to pain. So we could systematically compare the SCN9A gene and its homologues in nonhuman animals. Neocortical chauvinists will still be sceptical of non-mammalian sentience, pointing to the extensive role of cortical processing in higher vertebrates. But recall how neuroscanning techniques reveal that during orgasm, for example, much of the neocortex effectively shuts down. Intensity of experience is scarcely diminished.

5. Held S, Mendl M, Devereux C, and Byrne RW. 2001. “Studies in social cognition: from primates to pigs”. Animal Welfare 10:S209-17.

6. Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion, Pantheon Books, 2012.

7. Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Viking Press, 1963.

8. http://www.new-harvest.org/

9. “PayPal Founder Backs Synthetic Meat Printing Company”, Wired, August 16 2012. http://www.wired.com/wiredscience/2012/08/3d-printed-meat/

10. https://www.abolitionist.com/reprogramming/elephantcare.html

11. https://www.abolitionist.com/reprogramming/index.html

12. The scholarly literature on the problem of wild animal suffering is still sparse. But perhaps see Arne Naess, “Should We Try To Relieve Clear Cases of Suffering in Nature?”, published in The Selected Works of Arne Naess, Springer, 2005; Oscar Horta, “The Ethics of the Ecology of Fear against the Nonspeciesist Paradigm: A Shift in the Aims of Intervention in Nature”, Between the Species, Issue X, August 2010. http://digitalcommons.calpoly.edu/bts/vol13/iss10/10/ ; Brian Tomasik, “The Importance of Wild-Animal Suffering”, http://www.utilitarian-essays.com/suffering-nature.html ; and the first print-published plea for phasing out carnivorism in Nature, Jeff McMahan’s “The Meat Eaters”, The New York Times. September 19, 2010. http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/

13. Singularity Hypotheses: A Scientific and Philosophical Assessment, Eden, A.H.; Moor, J.H.; Søraker, J.H.; Steinhart, E. (Eds.), Springer, 2013. http://singularityhypothesis.blogspot.co.uk/p/table-of-contents.html

14. David Pearce, The Biointelligence Explosion. (preprint), 2012. https://www.biointelligence-explosion.com.

15. Thomas Nagel, The View From Nowhere , OUP, 1989.

16. Simon Baron-Cohen (2009). “Autism: the empathizing–systemizing (E-S) theory” (PDF). Ann N Y Acad Sci 1156: 68–80. doi:10.1111/j.1749-6632.2009.04467.x. PMID 19338503.

17. Banissy, M. J. & Ward, J. (2007). Mirror-touch synesthesia is linked with empathy. Nature Neurosci. doi: 10.1038/nn1926.

18. Stephen Baker. Final Jeopardy: Man vs. Machine and the Quest to Know Everything. Houghton Mifflin Harcourt. 2011.

19. Orthogonality or convergence? For an alternative to the convergence thesis, see Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, 2012, http://www.nickbostrom.com/superintelligentwill.pdf; and Eliezer Yudkowsky, Carl Shulman, Anna Salamon, Rolf Nelson, Steven Kaas, Steve Rayhawk, Zack Davis, and Tom McCabe. “Reducing Long-Term Catastrophic Risks from Artificial Intelligence”, 2010. http://singularity.org/files/ReducingRisks.pdf

20. Freeman Dyson, “When Science & Poetry Were Friends”, New York Review of Books, August 13, 2009.

21. As quoted in Jon Winokur, In Passing: Condolences and Complaints on Death, Dying, and Related Disappointments, Sasquatch Books, 2005.

22. Isaac Bashevis Singer, The Letter Writer, 1964.

23. Henry Sidgwick, The Methods of Ethics. London, 1874, 7th ed. 1907.

24. The Transhumanist Declaration (1998, 2009). http://humanityplus.org/philosophy/transhumanist-declaration/

David Pearce
September 2012


Anders Sandberg -The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (such as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf
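The paper's claim that even small increasing returns generically produce radical growth can be sketched with a toy simulation. This is a hypothetical illustration: the growth rate `r`, the exponent, the step size and the cap below are arbitrary choices, not values from the paper. Growth with constant returns, dx/dt = r·x, stays exponential, while a mildly superlinear law, dx/dt = r·x^1.5, blows up in finite time:

```python
# Toy comparison of two growth regimes from the paper's taxonomy:
#   constant returns:   dx/dt = r * x        (ordinary exponential growth)
#   increasing returns: dx/dt = r * x**1.5   (superexponential, finite-time blow-up)
# All parameter values are arbitrary illustrations.
def grow(exponent, r=0.1, x0=1.0, dt=0.01, t_max=50.0, cap=1e12):
    """Euler-integrate dx/dt = r * x**exponent until t_max or until x hits cap."""
    x, t = x0, 0.0
    while t < t_max and x < cap:
        x += r * (x ** exponent) * dt  # forward Euler step
        t += dt
    return t, x

t_exp, x_exp = grow(1.0)  # exponential: still modest at t = 50
t_sup, x_sup = grow(1.5)  # superexponential: explodes long before t = 50
```

With these arbitrary settings the exponential run reaches t = 50 with x still far below the cap, while the increasing-returns run hits the cap shortly after its analytic blow-up time of t = 20, which is why even a small superlinearity changes the qualitative picture.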

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical.
– Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economical growth and social change) (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge, (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind such as humanity being succeeded by posthuman or artificial intelligences,
a punctuated equilibrium transition or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different.
(Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
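The growth-curve distinctions behind models A, H and I can be sketched numerically. Below is a minimal illustration (all constants are illustrative assumptions, not values from Sandberg's paper): exponential growth stays finite at every finite time, logistic growth has an inflexion point where acceleration flips to deceleration, and hyperbolic growth blows up in finite time.

```python
import math

# Toy versions of three growth regimes from the list above.
# All constants are illustrative assumptions, not values from the paper.

def exponential(t, rate=0.5):
    """Model A: growth rate proportional to current level; finite for all finite t."""
    return math.exp(rate * t)

def logistic(t, ceiling=100.0, rate=1.0, midpoint=5.0):
    """Model H: growth saturates at a ceiling; the 'singularity' is the
    inflexion point at t = midpoint, where acceleration becomes deceleration."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def hyperbolic(t, t_singular=10.0):
    """Model I: finite-time singularity; diverges as t approaches t_singular."""
    return 1.0 / (t_singular - t)

# The logistic curve is symmetric about its midpoint: the one-unit gain just
# before the inflexion equals the gain just after, and both exceed later gains.
gain_before = logistic(5.0) - logistic(4.0)
gain_after = logistic(6.0) - logistic(5.0)
gain_later = logistic(7.0) - logistic(6.0)
assert abs(gain_before - gain_after) < 1e-9
assert gain_after > gain_later

# The hyperbolic model grows without bound as t approaches 10:
assert hyperbolic(9.99) > 10 * hyperbolic(9.0)
```

The methodological point of the contrast is that models H and I make opposite predictions about what happens after the inflexion or blow-up point, even though their early curves can look nearly identical.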


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Suffering, and Progress in Ethics – Peter Singer

Suffering is generally bad – Peter Singer (who is a Hedonistic Utilitarian) and most Effective Altruists would agree with this. Though in addressing the need for suffering today, Peter acknowledges that, as we are presently constituted, suffering is useful as a warning sign (e.g. against further injury). But what about the future?
What if we could eliminate suffering?
Perhaps in the future we will have advanced technological interventions to warn us of danger that will be functionally similar to suffering, but without the nasty raw feels.
Peter Singer, like David Pearce, suggests that if we could eliminate suffering of non-human animals that are capable of suffering, perhaps in some way that is difficult to imagine now – that this would be a good thing.

Video Interview:

I would see no reason to regret the absence of suffering
– Peter Singer
Peter sees no reason to lament the disappearance of suffering, though perhaps people may say it would be useful for understanding the literature of the past. Perhaps there are some indirect uses for suffering – but on balance Peter thinks that the elimination of suffering would be an amazingly good thing to do.

Singer thinks it is interesting to speculate what might be possible for the future of human beings, if we do survive over the longer term. To what extent are we going to be able to enhance ourselves? In particular to what extent are we going to be more ethical human beings – which brings to question ‘Moral Enhancement’.

Have we made progress in ethics? Peter argues in his book ‘The Expanding Circle‘ that our species has expanded the circle of its ethical concern – an idea more recently taken up by Steven Pinker in ‘Better Angels Of Our Nature’. This expansion has happened over the millennia: beyond the tribal group, then to the national level, beyond ethnic groups to all human beings, and now we are starting to extend moral concern to non-human sentient beings as well.

Steven Pinker thinks that increases in our ethical consideration are bound up with increases in our intelligence (as proposed by James Flynn – the Flynn Effect – though this research is controversial: it could reflect actual increases in intelligence or just a growing ability to do more abstract reasoning) and in our ability to reason abstractly.

As mentioned earlier, there are other ways in which we may increase our ability and tendency to be more moral (see Moral Enhancement). In the future we may discover genes that influence us to think more about others and to dwell less on negative emotions like anger or rage. It is hard to say whether people will use these kinds of moral enhancers voluntarily, or whether we will need state policies to encourage their use in order to produce better communities – and there are legitimate concerns people may have about how the moral enhancement project takes place. Peter sees this as a fascinating prospect and thinks it would be great to be around to see how things develop over the next couple of centuries.

Note Steven Pinker said of Peter’s book:

Singer’s theory of the expanding circle remains an enormously insightful concept, which reconciles the existence of human nature with political and moral progress. It was also way ahead of its time. . . . It’s wonderful to see this insightful book made available to a new generation of readers and scholars.
– Steven Pinker

The Expanding Circle

Abstract: What is ethics? Where do moral standards come from? Are they based on emotions, reason, or some innate sense of right and wrong? For many scientists, the key lies entirely in biology–especially in Darwinian theories of evolution and self-preservation. But if evolution is a struggle for survival, why are we still capable of altruism?

In his classic study The Expanding Circle, Peter Singer argues that altruism began as a genetically based drive to protect one’s kin and community members but has developed into a consciously chosen ethic with an expanding circle of moral concern. Drawing on philosophy and evolutionary psychology, he demonstrates that human ethics cannot be explained by biology alone. Rather, it is our capacity for reasoning that makes moral progress possible. In a new afterword, Singer takes stock of his argument in light of recent research on the evolution of morality.

References:
The Expanding Circle book page at Princeton University: http://press.princeton.edu/titles/9434.html

The Flynn Effect: http://en.wikipedia.org/wiki/Flynn_effect

Peter Singer – Ethics, Evolution & Moral Progress – https://www.youtube.com/watch?v=91UQAptxDn8

For more on Moral Enhancement see Julian Savulescu’s and others writings on the subject.

Subscribe to this Channel: http://youtube.com/subscription_center?add_user=TheRationalFuture


A Non-trivial Pursuit of Happiness – Paradise Engineering with David Pearce :-)

It is non-trivial for a couple of reasons:
a) the pursuit of this vision of happiness is not trivial – it is likely to be a very challenging endeavor (though totally worthy of the effort)
b) the aim is to achieve non-trivial modes of happiness, kinds of information-sensitive gradients of bliss (as opposed to being stuck in a narrow local maximum of ecstatic stupor)

Imagine the best experience possible – and then imagine that it would be lower than tomorrow’s hedonic floor.

It may be that our descendants will have the chance to re-engineer themselves to be able to experience well-being far beyond what we can experience and imagine today.

Full blown paradise engineering is likely not something that people alive now should expect, though if we are ethically serious, we should be investigating ways to redesign our default mode of being to flourish in states of bliss.

Transcript

Think of the most wonderful experience of your life – now imagine if life could be as good as that – or rather, imagine if life could be better than that all the time. Just imagine if your best experience ever could be lower than tomorrow’s hedonic floor. Other things being equal, wouldn’t it be better if we lived in paradise?
Now, for much of history this kind of talk could simply be dismissed as utopian dreaming; manipulating the environment in innumerable different ways has been tried, and to be honest we’re not significantly happier now than our ancestors on the African savanna – certainly not if suicide, depression and marital breakup statistics et cetera are taken seriously.
However, thanks to biotechnology, it will now be possible to re-engineer ourselves; to edit our own source code; to enjoy life animated by gradients of bliss – other things being equal, doesn’t it make sense to make that our default option?

What could go wrong? Well, lots of things could go wrong – but that’s true of any experiment – and that’s what having kids involves today. When two people decide to bring children into the world, chances are they are going to be bringing an awful lot of suffering into the world too.
Whereas in future, when one creates new life, one will potentially be creating gradients of lifelong well-being. And if we’re ethically serious, that’s the approach I think we ought to be taking.

A lot of people will probably think “well, that’s all well and good – maybe our children, grandchildren or great-grandchildren will enjoy this kind of fabulous life.”
“What about me now?” – because we’re human, one can listen to these wonderful tales some futurists relate of how good life could be in future – a future of super-intelligence, super-longevity and super-happiness – all these wonderful things – but what about now? One still has bills to pay, taxes, relationship problems, just the messy nitty-gritty reality of life. Unfortunately I don’t have a panacea now – or rather, the kinds of interventions one can suggest (good diet, exercise, sleep discipline…) are unfortunately not as exciting as this tantalizing prospect that our children and grandchildren will enjoy.

But after that somber note, perhaps it’s worth suggesting that with designer drugs and with future somatic gene therapy it will be possible for adults my age and older to enjoy the best time of their lives too – perhaps not full-blown paradise engineering, the richness that our descendants may enjoy – but there is no reason to doubt that the later years of our lives could be incomparably richer than anything that’s gone before.

The Hedonistic Imperative

The Hedonistic Imperative outlines how genetic engineering and nanotechnology will abolish suffering in all sentient life.

The abolitionist project is hugely ambitious but technically feasible. It is also instrumentally rational and morally urgent. The metabolic pathways of pain and malaise evolved because they served the fitness of our genes in the ancestral environment. They will be replaced by a different sort of neural architecture – a motivational system based on heritable gradients of bliss. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. It is predicted that the world’s last unpleasant experience will be a precisely dateable event.

Two hundred years ago, powerful synthetic pain-killers and surgical anesthetics were unknown. The notion that physical pain could be banished from most people’s lives would have seemed absurd. Today most of us in the technically advanced nations take its routine absence for granted. The prospect that what we describe as psychological pain, too, could ever be banished is equally counter-intuitive. The feasibility of its abolition turns its deliberate retention into an issue of social policy and ethical choice.

Subscribe to our YouTube Channel | Science, Technology & the Future

Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I had a discussion with John Danaher on the Hedonistic Imperative & Superintelligence. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their futures, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford writing on Grounds for Moral Status).

As time goes on, the notion of strong artificial intelligence leading to Superintelligence (which may herald something like an Intelligence Explosion), and ideas like the Hedonistic Imperative, seem less like sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints gives me a paradoxical feeling of triumph and disempowerment.

John’s concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act – that there is a danger that the outcome of HI or an Intelligence Explosion may result in sentient life being made very happy forever, but unable to make choices – with a focus on a future entirely based on bliss whilst ignoring other aspects of what makes for a valuable or worthwhile existence.

So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.

If the argument for moral agency is completely toppled by the argument against free will then I can see why there would be no reason for it – and that bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.

Also, the idea that moral agency and novelty should be ranked as auxiliary aspects to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that they (agency/novelty) could easily be swapped out for most non-optimal moral agents in the quest for less suffering and more bliss, which is troublesome.
The idea that, upon evaluating grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, when the time comes, should we be committed to resigning all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have a meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?

There probably should be some more discussion on trade-offs between moral agency, peak experience and novelty.

Discussion in this video here starts at 24:02

Below is the whole interview with John Danaher:

Wireheading with David Pearce

Is the Hedonistic Imperative equivalent to wire-heading?
People are often concerned about the future being a cyberpunk dystopia where people are hardwired into pleasure centers, smacked out like lotus-eating milksops devoid of meaningful existence. Does David Pearce’s Hedonistic Imperative entail a future where we are all in thrall to permanent experiential orgasms – intravenously hotwired into our pleasure centers via some kind of soma-like drug turning us into blissful idiots?

Adam Ford: I think some people often conflate or distill the Hedonistic Imperative to mean ‘wireheading’ – what do you (think)?

David Pearce: Yes, I mean, clearly if one does argue that we’re going to phase out the biology of suffering and live out lives of perpetual bliss, then it’s very natural to assimilate this to something like ‘wireheading’ – but for all sorts of reasons I don’t think wireheading (i.e. intracranial self-stimulation of the reward centers, and its pharmacological equivalent) is a plausible scenario for our future. Not least, there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children.
I think a much more credible scenario is the idea that we’re going to re-calibrate the hedonic treadmill and allow ourselves and our future children to enjoy lives based on gradients of intelligent bliss. And one of the advantages of re-calibration, rather than straightforward hedonic maximization, is that by urging recalibration one isn’t telling people they ought to be giving up their existing preferences or values: if your hedonic set-point (i.e. your average state of wellbeing) is much higher than it is now, your quality of life will really be much higher – but it doesn’t involve any sacrifice of the values you hold most dear.
As a rather simplistic way of putting it – clearly where one lies on the hedonic axis will impose serious cognitive biases (someone who is, let’s say, depressive or prone to low mood will have a very different set of biases from someone who is naturally cheerful). But nonetheless, so long as we aim for a motivational architecture of gradients of bliss, it doesn’t entail giving up anything you want to hold onto. I think that’s really important, because a lot of people will be worried that if we do enter into some kind of secular paradise, it will involve giving up their normal relationships, their ordinary values and what they hold most dear. Re-calibration does not entail this (unlike wireheading).
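Pearce's claim that recalibration raises well-being without sacrificing what you value can be put in a toy model. This is entirely my own illustrative sketch (the function names and numbers are invented, nothing here is from the interview): treat well-being as a hedonic set-point plus stimulus-driven deviations, and note that raising the set-point leaves the ordering of experiences untouched.

```python
# Toy hedonic set-point model (illustrative sketch, not from the interview):
# experienced well-being = set-point + stimulus-driven deviation.
# "Recalibration" raises the set-point; the *gradients* (differences between
# experiences) are untouched, so information-sensitivity is preserved.

def wellbeing(set_point, stimulus_values):
    """Return experienced well-being for a sequence of stimuli."""
    return [set_point + s for s in stimulus_values]

stimuli = [-2.0, 0.0, 3.0]               # bad day, neutral day, great day
today = wellbeing(0.0, stimuli)          # current average set-point
recalibrated = wellbeing(10.0, stimuli)  # same mind, higher hedonic set-point

# Preference ordering between experiences is identical in both lives:
ranking_today = sorted(range(3), key=lambda i: today[i])
ranking_recal = sorted(range(3), key=lambda i: recalibrated[i])
assert ranking_today == ranking_recal

# ...but even the worst recalibrated day beats the best unrecalibrated one:
assert min(recalibrated) > max(today)
```

The second assertion is the "hedonic floor" idea: after recalibration even the worst day is better than the old best day, yet the mind stays information-sensitive because good days still rank above bad ones.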

Adam Ford: That’s interesting – people think that, you know, as soon as you turn on the Hedonistic Imperative you are destined for a very narrow set of values – that could be just one peak experience being replayed over and over again – in some narrow local maximum.

David Pearce: Yes – I suppose one thinks of (kind of) crazed wirehead rats – in fairness, if one does imagine orgasmic bliss, most people don’t complain that their orgasms are too long (and I’m not convinced that there is something desperately wrong with orgasmic bliss that lasts weeks, months, years or even centuries) – but one needs to examine the wider sociological picture – and ask ‘is it really sustainable for us to become blissed out as distinct from blissful?’

Adam Ford: Right – and by blissed out you mean something like the lotus eaters in the Odyssey?

David Pearce: Yes, I mean clearly it is one version of paradise and bliss – call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. It seems that, crudely speaking, motivation (which is mediated by the mesolimbic dopamine system) and raw bliss (which is associated with mu-opioid activation of our twin hedonic hotspots) lie on orthogonal axes. Now, they’re very closely interrelated (thanks to natural selection) – but in principle we can amplify one or damp down the other. Empirically, at any rate, it seems to be the case today that the happiest people are also the most motivated – they have the greatest desires. This runs counter to the old Buddhist notion that desire is suffering – but if you actually look at people who are depressive or chronically depressed, quite frequently they have an absence of desire or motivation. But the point is we should be free to choose – yes, it is potentially hugely liberating, this control over our reward architecture, our pleasure circuitry, that biotechnology offers – but let’s get things right. We don’t want to mess things up and produce the equivalent of large numbers of people on heroin – and this is why I so strenuously urge the case for re-calibration – in the long run genetically, in the short run by various non-recreational drugs.

Clearly it is one version of paradise and bliss – they call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated.
– David Pearce

Adam Ford: Ok… People may be worried that re-calibrating someone is akin to disrupting the continuum of self (or this enduring metaphysical ego) – so that the person at the other end wouldn’t really be a continuation of the person at the beginning. What do you think? How would you respond to that sort of criticism?

David Pearce: It depends how strict one’s conception of personal identity is. Now, would you be worried to learn tomorrow that you had won the national lottery (for example)? It would transform your lifestyle, your circle of friends – would this trigger the anxiety that the person living the existence of a multi-millionaire wasn’t really you? Well, perhaps you should be worried about this – but on the whole most people would be relatively relaxed at the prospect. I would see this more as akin to a small child growing up – yes, in one sense, as one becomes a mature adult one has killed the toddler, or lost the essence of what it was to be a toddler – but only in a very benign sense. And by aiming for re-calibration and hedonic enrichment rather than maximization, there is much less of a risk of losing anything that you think is really valuable or important.

Adam Ford: Okay – well that’s interesting – we’ll talk about value. In order not to lose forms of value – even if you don’t use them much – you might have some values that you leave up in the attic to gather dust, like toys that you don’t play with anymore but might want to pick up once in a thousand years or so. How do you then preserve complexity of value while also achieving high hedonic states – do you think they can go hand in hand? Or do you think preserving complexity of value reduces the likelihood that you will be able to achieve optimal hedonic states?

David Pearce: As an empirical matter – and I stress empirical here – it seems to be the case that the happiest people are responsive to the broadest possible range of rewarding stimuli – it tends to be depressives who get stuck in a rut. So, other things being equal, by re-calibrating ourselves, becoming happy and then superhappy, we can potentially, at any rate, enrich the complexity of our lives with a range of rewarding stimuli – it makes getting stuck in a rut less likely, both for the individual and for civilization as a whole.
I think one of the reasons we are afraid of some kind of loss of complexity is that the idea of heaven – including the traditional Christian heaven – can sound a bit monotonous, and for happy people at least, one of the experiences they find most unpleasant is boredom. But essentially it should be a matter of choice – yes, someone who is very happy to, let’s say, listen to a piece of music or contemplate art should be free to do so, and not forced into leading a very complex or complicated life – but equally, folk who want to do a diverse range of things – well, that’s feasible too.

For all sorts of reasons I don’t think wireheading… is a plausible scenario for our future. Not least there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children.
– David Pearce

– video/audio interview continues on past 10:00