
The Antispeciesist Revolution – read by David Pearce

The Antispeciesist Revolution

[Original text found here]

Speciesism.
When is it ethically acceptable to harm another sentient being? On some fairly modest(1) assumptions, to harm or kill someone simply on the grounds they belong to a different gender, sexual orientation or ethnic group is unjustified. Such distinctions are real but ethically irrelevant. On the other hand, species membership is normally reckoned an ethically relevant criterion. Fundamental to our conceptual scheme is the pre-Darwinian distinction between “humans” and “animals”. In law, nonhuman animals share with inanimate objects the status of property. As property, nonhuman animals can be bought, sold, killed or otherwise harmed as humans see fit. In consequence, humans treat nonhuman animals in ways that would earn a lifetime prison sentence without parole if our victims were human. From an evolutionary perspective, this contrast in status isn’t surprising. In our ancestral environment of adaptedness, the human capacity to hunt, kill and exploit sentient beings of other species was fitness-enhancing(2). Our moral intuitions have been shaped accordingly. Yet can we ethically justify such behaviour today?

Naively, one reason for disregarding the interests of nonhumans is the dimmer-switch model of consciousness. Humans matter more than nonhuman animals because (most) humans are more intelligent. Intuitively, more intelligent beings are more conscious than less intelligent beings; consciousness is the touchstone of moral status.

The problem with the dimmer-switch model is that it’s empirically unsupported, among vertebrates with central nervous systems at least. Microelectrode studies of the brains of awake human subjects suggest that the most intense forms of experience, for example agony, terror and orgasmic bliss, are mediated by the limbic system, not the prefrontal cortex. Our core emotions are evolutionarily ancient and strongly conserved. Humans share the anatomical and molecular substrates of our core emotions with the nonhuman animals whom we factory-farm and kill. By contrast, distinctively human cognitive capacities such as generative syntax, or the ability to do higher mathematics, are either phenomenologically subtle or impenetrable to introspection. To be sure, genetic and epigenetic differences exist between, say, a pig and a human being that explain our adult behavioural differences, e.g. the allele of the FOXP2(3) gene implicated in the human capacity for recursive syntax. Such mutations have little to do with raw sentience(4).

Antispeciesism.
So what is the alternative to traditional anthropocentric ethics? Antispeciesism is not the claim that “All Animals Are Equal”, or that all species are of equal value, or that a human or a pig is equivalent to a mosquito. Rather the antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect. A pig, for example, is of comparable sentience to a prelinguistic human toddler. As it happens, a pig is of comparable (or superior) intelligence to a toddler as well(5). However, such cognitive prowess is ethically incidental. If ethical status is a function of sentience, then to factory-farm and slaughter a pig is as ethically abhorrent as to factory-farm and slaughter a human baby. To exploit one and nurture the other expresses an irrational but genetically adaptive prejudice.

On the face of it, this antispeciesist claim isn’t just wrong-headed; it’s absurd. Philosopher Jonathan Haidt speaks of “moral dumbfounding”(6), where we just know something is wrong but can’t articulate precisely why. Haidt offers the example of consensual incest between an adult brother and sister who use birth control. For evolutionary reasons, we “just know” such an incestuous relationship is immoral. In the case of comparisons of pigs with human infants and toddlers, we “just know” at some deep level that any alleged equivalence in status is unfounded. After all, if there were no ethically relevant distinction between a pig and a toddler, or between a battery-farmed chicken and a human infant, then the daily behaviour of ordinary meat-eating humans would be sociopathic – which is crazy. In fact, unless the psychiatrists’ bible, the Diagnostic and Statistical Manual of Mental Disorders, is modified explicitly to exclude behaviour towards nonhumans, most of us do risk satisfying its diagnostic criteria for the disorder. Even so, humans often conceive of ourselves as animal lovers. Despite the horrors of factory-farming, most consumers of meat and animal products are clearly not sociopaths in the normal usage of the term; most factory-farm managers are not wantonly cruel; and the majority of slaughterhouse workers are not sadists who delight in suffering. Serial killers of nonhuman animals are just ordinary men doing a distasteful job – “obeying orders” – on pain of losing their livelihoods.

Should we expect anything different? Jewish political theorist Hannah Arendt spoke famously of the “banality of evil”(7). If twenty-first century humans are collectively doing something posthuman superintelligence will reckon monstrous, akin to the [human] Holocaust or Atlantic slave trade, then it’s easy to assume our moral intuitions would disclose this to us. Our intuitions don’t disclose anything of the kind; so we sleep easy. But both natural selection and the historical record offer powerful reasons for doubting the trustworthiness of our naive moral intuitions. So the possibility that human civilisation might be founded upon some monstrous evil should be taken seriously – even if the possibility seems transparently absurd at the time.

One possible speciesist response is to raise the question of “potential”. Even if a pig is as sentient as a human toddler, there is a fundamental distinction between human toddlers and pigs. Only a toddler has the potential to mature into a rational adult human being.

The problem with this response is that it contradicts our treatment of humans who lack “potential”. Thus we recognise that a toddler with a progressive disorder who will never live to celebrate his third birthday deserves at least as much love, care and respect as his normally developing peers – not to be packed off to a factory-farm on the grounds it’s a shame to let good food go to waste. We recognise a similar duty of care for mentally handicapped adult humans and cognitively frail old people. For sure, historical exceptions exist to this perceived duty of care for vulnerable humans, e.g. the Nazi “euthanasia” program, with its eugenicist conception of “life unworthy of life”. But by common consent, we value young children and cognitively challenged adults for who they are, not simply for who they may – or may not – one day become. On occasion, there may controversially be instrumental reasons for allocating more care and resources to a potential genius or exceptionally gifted child than to a normal human. Yet disproportionate intraspecies resource allocation may be justified, not because high IQ humans are more sentient, but because of the anticipated benefits to society as a whole.

Practical Implications.
1. Invitrotarianism.

The greatest source of severe, chronic and readily avoidable suffering in the world today is man-made: factory farming. Humans currently slaughter over fifty billion sentient beings each year. One implication of an antispeciesist ethic is that factory farms should be shut and their surviving victims rehabilitated.

In common with most ethical revolutions in history, the prospect of humanity switching to a cruelty-free diet initially strikes most practically-minded folk as utopian dreaming. “Realists” certainly have plenty of hard evidence to bolster their case. As English essayist William Hazlitt observed, “The least pain in our little finger gives us more concern and uneasiness than the destruction of millions of our fellow-beings.” Without the aid of twenty-first century technology, the mass slaughter and abuse of our fellow animals might continue indefinitely. Yet tissue science technology promises to allow consumers to become moral agents without the slightest hint of personal inconvenience. Lab-grown in vitro meat produced in cell culture rather than a live animal has long been a staple of science fiction. But global veganism – or its ethical invitrotarian equivalent – is no longer a futuristic fantasy. Rapid advances in tissue engineering mean that in vitro meat will shortly be developed and commercialised. Today’s experimental cultured mincemeat can be supplanted by mass-manufactured gourmet steaks for the consumer market. Perhaps critically for its rapid public acceptance, in vitro meat does not need to be genetically modified – thereby spiking the guns of techno-luddites who might otherwise worry about “FrankenBurgers”. Indeed, cultured meat products will be more “natural” in some ways than their antibiotic-laced counterparts derived from factory-farmed animals.

Momentum for commercialisation is growing. Non-profit research organisations like New Harvest(8), working to develop alternatives to conventionally-produced meat, have been joined by hard-headed businessmen. Visionary entrepreneur and Stanford academic Peter Thiel has just funnelled $350,000 into Modern Meadow(9), a start-up that aims to combine 3D printing with in vitro meat cultivation. Within the next decade or so, gourmet steaks could be printed out from biological materials. In principle, the technology should be scalable.

Tragically, billions of nonhuman animals will grievously suffer and die this century at human hands before the dietary transition is complete. Humans are not obligate carnivores; eating meat and animal products is a lifestyle choice. “But I like the taste!” is not a morally compelling argument. Vegans and animal advocates ask whether we are ethically entitled to wait for a technological fix. The antispeciesist answer is clear: no.

2. Compassionate Biology.
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on YouTube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover, even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants(10), for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming”(11) carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.
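(A rough illustration of the arithmetic, using an assumed population figure rather than one from the essay: if there are on the order of 500,000 free-living elephants worldwide, then comprehensive care costing roughly $4,000–$6,000 per animal per year works out to about 500,000 × $5,000 ≈ $2.5 billion annually – squarely within the two-to-three-billion-dollar range cited above.)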

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions(12).

Speciesism and Superintelligence.
Why should transhumanists care about the suffering of nonhuman animals? This is not a “feel-good” issue. One reason we should care cuts to the heart of the future of life in the universe. Transhumanists differ over whether our posthuman successors will most likely be nonbiological artificial superintelligence; or cyborgs who effectively merge with our hyperintelligent machines; or our own recursively self-improving biological descendants who modify their own genetic source code and bootstrap their way to full-spectrum superintelligence(13). Regardless of the dominant lifeform of the posthuman era, biological humans have a vested interest in the behaviour of intellectually advanced beings towards cognitively humble creatures – if we survive at all. Compared to posthuman superintelligence, archaic humans may be no smarter than pigs or chickens – or perhaps worms. This does not augur well for Homo sapiens. Western-educated humans tend to view Jains as faintly ridiculous for practising ahimsa, or harmlessness, sweeping the ground in front of them to avoid inadvertently treading on insects. How quixotic! Yet the fate of sentient but cognitively humble lifeforms in relation to vastly superior intelligence is precisely the issue at stake as we confront the prospect of posthuman superintelligence. How can we ensure a Jain-like concern for comparatively simple-minded creatures such as ourselves? Why should superintelligences care any more than humans about the well-being of their intellectual inferiors? Might distinctively human-friendly superintelligence turn out to be as intellectually incoherent as, say, Aryan-friendly superintelligence? If human primitives are to prove worthy of conservation, how can we implement technologies of impartial friendliness towards other sentients? And if posthumans do care, how do we know that a truly benevolent superintelligence wouldn’t turn Darwinian life into utilitronium with a communal hug?

Viewed in such a light, biological humanity’s prospects in a future world of superintelligence might seem dire. However, this worry expresses a one-dimensional conception of general intelligence. No doubt the nature of mature superintelligence is humanly unknowable. But presumably full-spectrum(14) superintelligence entails, at the very least, a capacity to investigate, understand and manipulate both the formal and the subjective properties of mind. Modern science aspires to an idealised “view from nowhere”(15), an impartial, God-like understanding of the natural universe, stripped of any bias in perspective and expressed in the language of mathematical physics. By the same token, a God-like superintelligence must also be endowed with the capacity impartially to grasp all possible first-person perspectives – not a partial and primitive Machiavellian cunning of the kind adaptive on the African savannah, but an unimaginably radical expansion of our own fitfully growing circle of empathy.

What such superhuman perspective-taking ability might entail is unclear. We are familiar with people who display abnormally advanced forms of “mind-blind”(16) autistic intelligence in higher mathematics and theoretical physics. Less well known are hyper-empathisers who display unusually sophisticated social intelligence. Perhaps the most advanced naturally occurring hyper-empathisers exhibit mirror-touch synaesthesia(17). A mirror-touch synaesthete cannot be unfriendly towards you because she feels your pain and pleasure as if it were her own. In principle, such unusual perspective-taking capacity could be generalised and extended with reciprocal neuroscanning technology and telemetry into a kind of naturalised telepathy, both between and within species. Interpersonal and cross-species mind-reading could in theory break down hitherto invincible barriers of ignorance between different skull-bound subjects of experience, thereby eroding the anthropocentric, ethnocentric and egocentric bias that has plagued life on Earth to date. Today, the intelligence-testing community tends to treat facility at empathetic understanding as if it were a mere personality variable, or at best some sort of second-rate cognition for people who can’t do IQ tests. But “mind-reading” can be a highly sophisticated, cognitively demanding ability. Compare, say, the sixth-order intentionality manifested by Shakespeare. Thus we shouldn’t conceive of superintelligence as akin to a God imagined by someone with an autistic spectrum disorder. Rather, full-spectrum superintelligence entails a God’s-eye capacity to understand the rich multi-faceted first-person perspectives of diverse lifeforms whose mind-spaces humans would find incomprehensibly alien.

An obvious objection arises. Just because ultra-intelligent posthumans may be capable of displaying empathetic superintelligence, how do we know such intelligence will be exercised? The short answer is that we don’t: by analogy, today’s mirror-touch synaesthetes might one day neurosurgically opt to become mind-blind. But then equally we don’t know whether posthumans will renounce their advanced logico-mathematical prowess in favour of the functional equivalent of wireheading. If they do so, then they won’t be superintelligent. The existence of diverse first-person perspectives is a fundamental feature of the natural world, as fundamental as the second law of thermodynamics or the Higgs boson. To be ignorant of fundamental features of the world is to be an idiot savant: a super-Watson(18) perhaps, but not a superintelligence(19).

High-Tech Jainism?
Jules Renard once remarked, “I don’t know if God exists, but it would be better for His reputation if He didn’t.” God’s conspicuous absence from the natural world needn’t deter us from asking what an omniscient, omnipotent, all-merciful deity would want humans to do with our imminent God-like powers. For we’re on the brink of a momentous evolutionary transition in the history of life on Earth. Physicist Freeman Dyson predicts we’ll soon “be writing genomes as fluently as Blake and Byron wrote verses”(20). The ethical risks and opportunities for apprentice deities are huge.

On the one hand, Karl Popper warns, “Those who promise us paradise on earth never produced anything but a hell”(21). Twentieth-century history bears out such pessimism. Yet for billions of sentient beings from less powerful species, existing life on Earth is hell. They end their miserable lives on our dinner plates: “for the animals it is an eternal Treblinka”, writes Jewish Nobel laureate Isaac Bashevis Singer(22).

In a more utopian vein, some utterly sublime scenarios are technically feasible later this century and beyond. It’s not clear whether experience below Sidgwick’s(23) “hedonic zero” has any long-term future. Thanks to molecular neuroscience, mastery of the brain’s reward circuitry could make everyday life wonderful beyond the bounds of normal human experience. There is no technical reason why the pitiless Darwinian struggle of the past half billion years can’t be replaced by an earthly paradise for all creatures great and small. Genetic engineering could allow “the lion to lie down with the lamb.” Enhancement technologies could transform killer apes into saintly smart angels. Biotechnology could abolish suffering throughout the living world. Artificial intelligence could secure the well-being of all sentience in our forward light-cone. Our quasi-immortal descendants may be animated by gradients of intelligent bliss orders of magnitude richer than anything physiologically feasible today.

Such fantastical-sounding scenarios may never come to pass. Yet if so, this won’t be because the technical challenges prove too daunting, but because intelligent agents choose to forgo the molecular keys to paradise for something else. Critically, the substrates of bliss don’t need to be species-specific or rationed. Transhumanists believe the well-being of all sentience(24) is the bedrock of any civilisation worthy of the name.

Also see this related interview with David Pearce on ‘Antispeciesism & Compassionate Stewardship’:

* * *
NOTES

1. How modest? A venerable tradition in philosophical meta-ethics is anti-realism. The meta-ethical anti-realist proposes that claims such as it’s wrong to rape women, kill Jews, torture babies (etc) lack truth value – or are simply false. (cf. JL Mackie, Ethics: Inventing Right and Wrong, Viking Press, 1977.) Here I shall assume that, for reasons we simply don’t understand, the pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Meta-ethical anti-realists may instead wish to interpret this critique of speciesism merely as casting doubt on its internal coherence rather than as a substantive claim that a non-speciesist ethic is objectively true.

2. Extreme violence towards members of other tribes and races can be fitness-enhancing too. See, e.g. Richard Wrangham & Dale Peterson, Demonic Males: Apes and the Origins of Human Violence, Houghton Mifflin, 1997.

3. Fisher SE, Scharff C (2009). “FOXP2 as a molecular window into speech and language”. Trends Genet. 25 (4): 166–77. doi:10.1016/j.tig.2009.03.002. PMID 19304338.

4. Interpersonal and interspecies comparisons of sentience are of course fraught with problems. Comparative studies of how hard a human or nonhuman animal will work to avoid or obtain a particular stimulus give one crude behavioural indication. Yet we can go right down to the genetic and molecular level, e.g. interspecies comparisons of SCN9A genotype. (cf. http://www.pnas.org/content/early/2010/02/23/0913181107.full.pdf) We know that in humans the SCN9A gene modulates pain-sensitivity. Some alleles of SCN9A give rise to hypoalgesia, other alleles to hyperalgesia. Nonsense mutations yield congenital insensitivity to pain. So we could systematically compare the SCN9A gene and its homologues in nonhuman animals. Neocortical chauvinists will still be sceptical of non-mammalian sentience, pointing to the extensive role of cortical processing in higher vertebrates. But recall how neuroscanning techniques reveal that during orgasm, for example, much of the neocortex effectively shuts down. Intensity of experience is scarcely diminished.

5. Held S, Mendl M, Devereux C, and Byrne RW. 2001. “Studies in social cognition: from primates to pigs”. Animal Welfare 10:S209-17.

6. Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion, Pantheon Books, 2012.

7. Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Viking Press, 1963.

8. http://www.new-harvest.org/

9. “PayPal Founder Backs Synthetic Meat Printing Company”, Wired, August 16 2012. http://www.wired.com/wiredscience/2012/08/3d-printed-meat/

10. https://www.abolitionist.com/reprogramming/elephantcare.html

11. https://www.abolitionist.com/reprogramming/index.html

12. The scholarly literature on the problem of wild animal suffering is still sparse. But perhaps see Arne Naess, “Should We Try To Relieve Clear Cases of Suffering in Nature?”, published in The Selected Works of Arne Naess, Springer, 2005; Oscar Horta, “The Ethics of the Ecology of Fear against the Nonspeciesist Paradigm: A Shift in the Aims of Intervention in Nature”, Between the Species, Issue X, August 2010. http://digitalcommons.calpoly.edu/bts/vol13/iss10/10/ ; Brian Tomasik, “The Importance of Wild-Animal Suffering”, http://www.utilitarian-essays.com/suffering-nature.html ; and the first print-published plea for phasing out carnivorism in Nature, Jeff McMahan’s “The Meat Eaters”, The New York Times. September 19, 2010. http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/

13. Singularity Hypotheses, A Scientific and Philosophical Assessment, Eden, A.H.; Moor, J.H.; Søraker, J.H.; Steinhart, E. (Eds.), Springer, 2013. http://singularityhypothesis.blogspot.co.uk/p/table-of-contents.html

14. David Pearce, The Biointelligence Explosion (preprint), 2012. https://www.biointelligence-explosion.com.

15. Thomas Nagel, The View From Nowhere, OUP, 1989.

16. Simon Baron-Cohen (2009). “Autism: the empathizing–systemizing (E-S) theory” (PDF). Ann N Y Acad Sci 1156: 68–80. doi:10.1111/j.1749-6632.2009.04467.x. PMID 19338503.

17. Banissy, M. J. & Ward, J. (2007). Mirror-touch synesthesia is linked with empathy. Nature Neurosci. doi: 10.1038/nn1926.

18. Stephen Baker. Final Jeopardy: Man vs. Machine and the Quest to Know Everything. Houghton Mifflin Harcourt. 2011.

19. Orthogonality or convergence? For an alternative to the convergence thesis, see Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, 2012, http://www.nickbostrom.com/superintelligentwill.pdf; and Eliezer Yudkowsky, Carl Shulman, Anna Salamon, Rolf Nelson, Steven Kaas, Steve Rayhawk, Zack Davis, and Tom McCabe. “Reducing Long-Term Catastrophic Risks from Artificial Intelligence”, 2010. http://singularity.org/files/ReducingRisks.pdf

20. Freeman Dyson, “When Science & Poetry Were Friends”, New York Review of Books, August 13, 2009.

21. As quoted in Jon Winokur, In Passing: Condolences and Complaints on Death, Dying, and Related Disappointments, Sasquatch Books, 2005.

22. Isaac Bashevis Singer, The Letter Writer, 1964.

23. Henry Sidgwick, The Methods of Ethics. London, 1874, 7th ed. 1907.

24. The Transhumanist Declaration (1998, 2009). http://humanityplus.org/philosophy/transhumanist-declaration/

David Pearce
September 2012

Link to video

Consciousness in Biological and Artificial Brains – Prof Christof Koch

Event Description: Human and non-human animals not only act in the world but are capable of conscious experience. That is, it feels like something to have a brain and be cold, angry or see red. I will discuss the scientific progress that has been achieved over the past decades in characterizing the behavioral and the neuronal correlates of consciousness, based on clinical case studies as well as laboratory experiments. I will introduce the Integrated Information Theory (IIT) that explains in a principled manner which physical systems are capable of conscious, subjective experience. The theory explains many biological and medical facts about consciousness and its pathologies in humans, can be extrapolated to more difficult cases, such as fetuses, mice, or non-mammalian brains and has been used to assess the presence of consciousness in individual patients in the clinic. IIT also explains why consciousness evolved by natural selection. The theory predicts that deep convolutional networks and von Neumann computers would experience next to nothing, even if they perform tasks that in humans would be associated with conscious experience and even if they were to run software faithfully simulating the human brain.

[Meetup Event Page]

Supported by The Florey Institute of Neuroscience & Mental Health, the University of Melbourne and the ARC Centre of Excellence for Integrative Brain Function.


Who: Prof Christof Koch, President and Chief Scientific Officer, Allen Institute for Brain Science, Seattle, USA

Venue: Melbourne Brain Centre, Ian Potter Auditorium, Ground Floor, Kenneth Myer Building (Building 144), Genetics Lane, 30 Royal Parade, University of Melbourne, Parkville

This will be of particular interest to those who know of David Pearce, Andreas Gomez, Mike Johnson and Brian Tomasik’s works – see this online panel:

Ethics In An Uncertain World – Australian Humanist Convention 2017

Join Peter Singer & AC Grayling to discuss some of the most pressing issues facing society today – surviving the Trump era, Climate Change, Naturalism & the Future of Humanity.

Ethics In An Uncertain World

After an incredibly successful convention in Brisbane in May, 2016, the Humanist Society of Victoria together with the Council of Australian Humanist Societies will be hosting Australian Humanists at the start of April to discuss and learn about some of the most pressing issues facing society today and how Humanists and the world view we hold can help to shape a better future for all of society.

Official Conference Link | Get Tickets Here | Gala Dinner | FAQs | Meetup Link | Google Map Link

Lineup

AC Grayling – Humanism, the individual and society
Peter Singer – Public Ethics in the Trump Era
Clive Hamilton – Humanism and the Anthropocene
Meredith Doig – Interbelief presentations in schools
Monica Bini – World-views in the school curriculum
James Fodor – ???
Adam Ford – Humanism & Population Axiology

SciFuture supports and endorses the Humanist Convention in 2017 in its efforts to explore ethics grounded in Enlightenment values, march against prejudice, and help make sense of the world. SciFuture affirms that human beings (and indeed many other nonhuman animals) have the right to flourish, be happy, and give meaning and shape to their own lives.

Peter Singer wrote about Taking Humanism Beyond Speciesism – Free Inquiry, 24, no. 6 (Oct/Nov 2004), pp. 19-21

AC Grayling’s talk on Humanism at the British Humanist Association:


Narratives, Values & Progress – Anders Sandberg

Anders Sandberg discusses ideas & values and where we get them from, mindsets for progress, and the fact that we are living in a unique era of technological change – and that, importantly, we are aware we are living in an era of great change. Is there a direction in ethics? Is morality real? If so, how do we find it? What will our descendants think of our morals today – will they be weird to future generations?

One of the interesting things about our current world is that we are aware that a lot of ideas about morality are things going on in our culture and in our heads – and are not just the laws of nature – that’s very useful. Some people of course think that there is some ideal or best moral system – and maybe there is – but we’re not very good at finding it. It might turn out that in the long run if there is some kind of ultimate sensible moral – we’re going to find it – but that might take a very long time and might take brains much more powerful than ours – it might turn out that all sufficiently advanced alien civilizations eventually figure out the right thing to do – and do it. But it could also turn out actually when we meet real advanced aliens they’re going to be as confused about philosophy as we are – that’s one of the interesting things to find out about the universe. – Anders Sandberg

Points covered:
– Technologies of the Future
– Efficient sustainability, in-vitro meat
– Living in an era of awareness of change
– Values have changed over time
– Will our morals be weird to future generations?
– Where is ethics going?
– Does moral relativism adequately explain reductions in violence?
– Is there an ideal ‘best moral system’? and if so, how do we find it?

Transcript

I grew up reading C.S. Lewis and his Narnia stories. And at that time I didn’t get what was going on – I think it was when I was finally reading one that I started thinking ‘this seems like an allegory’, and then sort of realizing ‘a Christian allegory’, and then I felt ‘oh dear!’. I had of course to read all of them. In the end I was quite cross at Lewis for trying to foist that kind of stuff on children. He of course was unashamed – he was arguing in his letters ‘of course, if you are a Christian you should make Christian stories and try to tell them’ – but then of course he hides everything – so instead of having Jesus he turns him into a lion and so on.
But there’s an interesting problem in general of course ‘where do we get our ideas from?’. I grew up in boring Sweden in the 70’s so I had to read a lot of science fiction in order to get excited. That science fiction story reading made me interested in the technology & science and made it real – but it also gave me a sort of libertarian outlook accidentally. I realised that well, maybe our current rules for society are arbitrary – we could change them into something better. And aliens are people too, as well as robots. So in the end that kind of education also set me on my path.
So in general what we read as children affects us in sometimes very subtle ways – I was reading one book about technologies of the future by a German researcher – today of course it is very laughably 60ish – very much thinking about cybernetics and the big technologies, fusion reactors and rockets – but it also got me thinking ‘we can change the world completely’ – there is no reason to think that it works out that only 700 billion people can live on earth – we could rebuild it to house trillions – it wouldn’t be a particularly nice world, it would be nightmarish by our current standards – but it would actually be possible to do. It’s rather that we have a choice of saying ‘maybe we want to keep our world rather small scale with just a few billion people on it’. Others would say ‘we can’t even sustain a few billion people on the planet – we’re wearing out the biosphere’ – but again that’s based on a certain assumption about how the biosphere functions – we can produce food more efficiently than we currently do. If we went back to being primitive hunter-gatherers we would need several hundred earths to sustain us all, simply because hunter-gatherers need enormous areas of land in order to get enough prey to hunt down in order to survive. Agriculture is much more effective – and we can go far beyond that – things like hydroponics and in-vitro meat might actually mean that in the future we would say it’s absolutely disgusting, or rather weird, to cultivate farmland or eat animals! ‘Why would you actually eat animals? Well, only disgusting people back in the stone age did that.’ In that stone age they were using silicon, of course.
Dividing history into ages is very fraught, because when you declare that ‘this is the atomic age’ you make certain assumptions – so the atomic age didn’t turn out so well because people lost their faith in their friend the atom – the space age didn’t turn out to be a space age because people found better ways of using the money – in a sense we went out into space prematurely, before there was a good business case for it. The computer age on the other hand – well, now computers are so everywhere that we could just as well call it the air age – it’s everywhere. Similarly the internet – that’s just the latest innovation – probably as people in the future look back they’re going to call it something completely different – just like we want to divide history into things like the Medieval age, or the Renaissance, which are not always more than just labels. What I think is unique about our era in history is that we’re very aware that we are living in a changing world; one that is not going to be the same in 100 years, that is going to be utterly, utterly different from what it was 100 years back. In so many historical eras people have been thinking ‘oh, we’re on the cusp of greatness or a great disaster’. But we actually have objectively good reasons for thinking things cannot remain as they were. There are too many people, too many brains, too much technology – and a lot of these technologies are very dangerous and very transformative – so if we can get through this without too much damage to ourselves and the planet, I think we are going to have a very interesting future. But it’s also probably going to be a future that is somewhat alien from what we can foresee.
If we took an ancient Roman and put him into modern society he would be absolutely shocked – not just by our technology, but by our values. We are very clear that compassion is a good virtue, and he would say the opposite: ‘compassion is for old ladies’ – and of course a medieval knight would say ‘you have no honor in the 21st century’, and we’d say ‘oh yes, honor killings and all that – that’s bad; actually a lot of those medieval honorable ideals are immoral by our standards’. So we should probably take it that our moral standards are going to be regarded by the future as equally weird and immoral – and this is of course a rather chilling thought, because our personal information is going to be available in the future to our descendants, or even ourselves as older people with different values – a lot of the advanced technologies we are worrying about are going to be wielded by our children, or by older versions of ourselves, in ways we might not approve of – but they’re going to say ‘yes, but we’ve actually figured out the ethics now’.
Where ethics is going is of course a really interesting question in itself – people say ‘oh yes, it’s just relative, it’s just societies making up rules to live by’ – but I do think we have learned a few things – the reduction in violence over historical eras shows that we are getting something right. I don’t think relativists could just say that ‘violence is arbitrarily sometimes good and sometimes bad’ – I think it’s very clearly a bad thing. So I think we are making moral progress in some sense – we are figuring out better ways of thinking about morality. One of the interesting things about our current world is that we are aware that a lot of ideas about morality are things going on in our culture and in our heads – and are not just the laws of nature – that’s very useful. Some people of course think that there is some ideal or best moral system – and maybe there is – but we’re not very good at finding it. It might turn out that in the long run, if there is some kind of ultimate sensible moral, we’re going to find it – but that might take a very long time and might take brains much more powerful than ours – it might turn out that all sufficiently advanced alien civilizations eventually figure out the right thing to do – and do it. But it could also turn out that when we actually meet real advanced aliens they’re going to be as confused about philosophy as we are – that’s one of the interesting things to find out about the universe.


Suffering, and Progress in Ethics – Peter Singer

Suffering is generally bad – Peter Singer (who is a Hedonistic Utilitarian) and most Effective Altruists would agree with this. Though in addressing the need for suffering today, Peter acknowledges that, as we are presently constituted, suffering is useful as a warning sign (e.g. against further injury). But what about the future?
What if we could eliminate suffering?
Perhaps in the future we will have advanced technological interventions to warn us of danger that will be functionally similar to suffering, but without the nasty raw feels.
Peter Singer, like David Pearce, suggests that if we could eliminate the suffering of non-human animals capable of suffering – perhaps in some way that is difficult to imagine now – this would be a good thing.

Video Interview:

I would see no reason to regret the absence of suffering. – Peter Singer
Peter can’t see any reason to lament the disappearance of suffering, though perhaps people may say it would be useful for understanding the literature of the past. Perhaps there are some indirect uses for suffering – but on balance Peter thinks that the elimination of suffering would be an amazingly good thing to do.

Singer thinks it is interesting to speculate about what might be possible for the future of human beings, if we do survive over the longer term. To what extent are we going to be able to enhance ourselves? In particular, to what extent are we going to become more ethical human beings – which raises the question of ‘Moral Enhancement’.

Have we made progress in ethics? Peter argues that our species has expanded the circle of its ethical concern, in his book ‘The Expanding Circle‘ – an idea Steven Pinker more recently took up in ‘The Better Angels of Our Nature’. This expansion has happened over the millennia: beyond the initial tribal group, then to a national level, beyond ethnic groups to all human beings, and now we are starting to extend moral concern to non-human sentient beings as well.

Steven Pinker thinks that the increase in our ethical consideration is bound up with increases in our intelligence and in our ability to reason abstractly (as proposed by James Flynn – the Flynn Effect – though this research is controversial: the gains could reflect actual increases in intelligence or just the ability to do more abstract reasoning).

As mentioned earlier, there are other ways in which we may increase our ability and tendency to be more moral (see Moral Enhancement), and in the future we may discover genes that influence us to think more about others and to dwell less on negative emotions like anger or rage. It is hard to say whether people will use these kinds of moral enhancers voluntarily, or whether we will need state policies to encourage their use in order to produce better communities – and there are a lot of concerns here that people may legitimately have about how the moral enhancement project takes place. Peter sees this as a fascinating prospect and thinks it would be great to be around to see how things develop over the next couple of centuries.

Note Steven Pinker said of Peter’s book:

Singer’s theory of the expanding circle remains an enormously insightful concept, which reconciles the existence of human nature with political and moral progress. It was also way ahead of its time. . . . It’s wonderful to see this insightful book made available to a new generation of readers and scholars. – Steven Pinker

The Expanding Circle

Abstract: What is ethics? Where do moral standards come from? Are they based on emotions, reason, or some innate sense of right and wrong? For many scientists, the key lies entirely in biology–especially in Darwinian theories of evolution and self-preservation. But if evolution is a struggle for survival, why are we still capable of altruism?

In his classic study The Expanding Circle, Peter Singer argues that altruism began as a genetically based drive to protect one’s kin and community members but has developed into a consciously chosen ethic with an expanding circle of moral concern. Drawing on philosophy and evolutionary psychology, he demonstrates that human ethics cannot be explained by biology alone. Rather, it is our capacity for reasoning that makes moral progress possible. In a new afterword, Singer takes stock of his argument in light of recent research on the evolution of morality.

References:
The Expanding Circle book page at Princeton University: http://press.princeton.edu/titles/9434.html

The Flynn Effect: http://en.wikipedia.org/wiki/Flynn_effect

Peter Singer – Ethics, Evolution & Moral Progress – https://www.youtube.com/watch?v=91UQAptxDn8

For more on Moral Enhancement see Julian Savulescu’s and others writings on the subject.

Subscribe to this Channel: http://youtube.com/subscription_center?add_user=TheRationalFuture

Science, Technology & the Future: http://scifuture.org

Should We Re-Engineer Ourselves to Phase Out our Violent Nature?

David Pearce reflects on the motivation for human enhancement to phase out our violent nature. Do we want to perpetuate the states of experience which are beholden to our violent default biological imperatives… or re-engineer ourselves?

Crudely speaking – and inevitably this is very crudely speaking – nature designed men, males, to be hunters and warriors – and we still have to a very large degree a hunter/warrior psychology. This is why men are fascinated by conflict & violence – why we enjoy watching competitive sports.
Now although ordinary everyday life for many of us in the world no longer involves the kind of endemic violence that was once the case (goodness knows how many deaths one will witness on screen in the course of a lifetime), one still enjoys violence and quite frequently watches men being very nasty towards each other – competing against each other.
Do we want to perpetuate these states of mind indefinitely? Or do we want to re-engineer ourselves? – David Pearce


Peter Singer & David Pearce on Utilitarianism, Bliss & Suffering

Moral philosophers Peter Singer & David Pearce discuss some of the long term issues with various forms of utilitarianism, the future of predation and utilitronium shockwaves.

Topics Covered

Peter Singer

– long term impacts of various forms of utilitarianism
– Consciousness
– Artificial Intelligence
– Reducing suffering in the long run and in the short term
– Practical ethics
– Pre-implantation genetic screening to reduce disease and low mood
– Lives today are worth the same as lives in the future – though uncertainty must be brought to bear in deciding how one weighs up the importance of life
– The Hedonistic Imperative and how people react to it
– Correlation of high hedonic set-points with productivity
– Existential risks and global catastrophic risks
– Closing factory farms

David Pearce

– Veganism and reducitarianism
– Red meat vs white meat – many more chickens than cattle are killed per ton of meat
– Valence research
– Should one eliminate suffering? And should we eliminate emotions of happiness?
– How can we answer the question of how far suffering is present in different life forms (like insects)?

Talk of moral progress can make one sound naive. But even the darkest cynic should salute the extraordinary work of Peter Singer to promote the interests of all sentient beings. – David Pearce

Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_cente…
– Science, Technology & the Future website: http://scifuture.org

Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I had a discussion with John Danaher on the Hedonistic Imperative & Superintelligence. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their futures, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford writing on Grounds for Moral Status).

As time goes on, the notion of strong artificial intelligence leading to Superintelligence (which may herald in something like an Intelligence Explosion), and ideas like the Hedonistic Imperative, become less like sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints brings me a paradoxical feeling of triumph and disempowerment.

John’s concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act – and that there is a danger that the outcome of HI or an Intelligence Explosion may be sentient life made very happy forever, but unable to make choices: a future entirely based on bliss, ignoring other aspects of what makes for a valuable or worthwhile existence.

So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.

If the argument for moral agency is completely toppled by the argument against free will then I can see why there would be no reason for it – and that bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.

Also, the idea that moral agency and novelty should be ranked as auxiliary aspects to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that they (agency/novelty) could easily be swapped out for most non-optimal moral agents in the quest for less suffering and more bliss, which is troublesome.
The idea that, upon evaluating grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, when the time comes, should we be committed to ceding all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have a meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?

There probably should be some more discussion on trade-offs between moral agency, peak experience and novelty.

Discussion in this video here starts at 24:02

Below is the whole interview with John Danaher:

Wireheading with David Pearce

Is the Hedonistic Imperative equivalent to wire-heading?
People are often concerned about the future being a cyberpunk dystopia where people are hard-wired into pleasure centers, smacked out like lotus-eating milksops devoid of meaningful existence. Does David Pearce’s Hedonistic Imperative entail a future where we are all in thrall to permanent experiential orgasms – intravenously hotwired into our pleasure centers via some kind of soma-like drug, turning us into blissful idiots?

Adam Ford: I think some people often conflate or distill the Hedonistic Imperative to mean ‘wireheading’ – what do you (think)?

David Pearce: Yes, I mean, clearly if one does argue that we’re going to phase out the biology of suffering and live out lives of perpetual bliss then it’s very natural to assimilate this to something like ‘wireheading’ – but for all sorts of reasons I don’t think wireheading (i.e. intracranial self-stimulation of the reward centers and its pharmacological equivalent) is a plausible scenario for our future. Not least, there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children.
I think a much more credible scenario is the idea that we’re going to re-calibrate the hedonic treadmill and allow ourselves and our future children to enjoy lives based on gradients of intelligent bliss. And one of the advantages of re-calibration rather than straightforward hedonic maximization is that by urging recalibration one isn’t telling people they ought to be giving up their existing preferences or values: if your hedonic set-point (i.e. your average state of wellbeing) is much higher than it is now, your quality of life will really be much higher – but it doesn’t involve any sacrifice of the values you hold most dear.
As a rather simplistic way of putting it – clearly where one lies on the hedonic axis will impose serious cognitive biases: someone who is, let’s say, depressive or prone to low mood will have a very different set of biases from someone who is naturally cheerful. But nonetheless, so long as we aim for a motivational architecture of gradients of bliss, it doesn’t entail giving up anything you want to hold onto. I think that’s really important because a lot of people will be worried that if, yes, we do enter into some kind of secular paradise, it will involve giving up their normal relationships, their ordinary values and what they hold most dear. Re-calibration does not entail this (wireheading).

Adam Ford: That’s interesting – people think that, you know, as soon as you turn on the Hedonistic Imperative you are destined for a very narrow set of values – that could be just one peak experience being replayed over and over again – in some narrow local maximum.

David Pearce: Yes – I suppose one thinks of (kind of) crazed wirehead rats – in fairness, if one does imagine orgasmic bliss most people don’t complain that their orgasms are too long (and I’m not convinced that there is something desperately wrong with orgasmic bliss that lasts weeks, months, years or even centuries) but one needs to examine the wider sociological picture – and ask ‘is it really sustainable for us to become blissed out as distinct from blissful’.

Adam Ford: Right – and by blissed out you mean something like the lotus eaters encountered by Odysseus?

David Pearce: Yes, I mean clearly it is one version of paradise and bliss – they call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. It seems that, crudely speaking, motivation (which is mediated by the mesolimbic dopamine system) and raw bliss (which is associated with mu-opioid activation of our twin hedonic hotspots) lie on orthogonal axes. Now they’re very closely interrelated (thanks to natural selection) – but in principle we can amplify one or damp down the other. Empirically, at any rate, it seems to be the case today that the happiest people are also the most motivated – they have the greatest desires – I mean, this runs counter to the old Buddhist notion that desire is suffering – but if you actually look at people who are depressive or chronically depressed, quite frequently they have an absence of desire or motivation. But the point is we should be free to choose – yes, it is potentially hugely liberating, this control over our reward architecture, our pleasure circuitry, that biotechnology offers – but let’s get things right. We don’t want to mess things up and produce the equivalent of large numbers of people on heroin – and this is why I so strenuously urge the case for re-calibration – in the long run genetically, in the short run by various non-recreational drugs.

Clearly it is one version of paradise and bliss – they call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. – David Pearce

Adam Ford: Ok… People may be worried that re-calibrating someone is akin to disrupting the continuum of self (or this enduring metaphysical ego) – so that the person at the other end wouldn’t really be a continuation of the person at the beginning. What do you think? How would you respond to that sort of criticism?

David Pearce: It depends on how strict one's conception of personal identity is. Would you be worried to learn tomorrow that you had won the national lottery (for example)? It would transform your lifestyle, your circle of friends – would this trigger the anxiety that the person living the existence of a multi-millionaire wasn't really you? Perhaps you should be worried about this – but on the whole most people would be relatively relaxed at the prospect. I would see this more as akin to a small child growing up – yes, in one sense, as one becomes a mature adult one has killed the toddler, or lost the essence of what it was to be a toddler, but only in a very benign sense. And by aiming for re-calibration and hedonic enrichment rather than maximization, there is much less of a risk of losing anything that you think is really valuable or important.

Adam Ford: Okay – well, that's interesting – we'll talk about value. In order not to lose forms of value – even if you don't use them much – you might have some values that you leave up in the attic to gather dust, like toys that you don't play with anymore but might want to pick up once in a thousand years or so. How do you then preserve complexity of value while also achieving high hedonic states – do you think they can go hand in hand? Or does preserving complexity of value reduce the likelihood that you will be able to achieve optimal hedonic states?

David Pearce: As an empirical matter – and I stress empirical here – it seems to be the case that the happiest people are responsive to the broadest possible range of rewarding stimuli; it tends to be depressives who get stuck in a rut. So, other things being equal, by re-calibrating ourselves – becoming happy and then superhappy – we can potentially, at any rate, enrich the complexity of our lives with a range of rewarding stimuli. That makes getting stuck in a rut less likely, both for the individual and for civilization as a whole.
I think one of the reasons we are afraid of some kind of loss of complexity is that the idea of heaven – including the traditional Christian heaven – can sound a bit monotonous, and for happy people at least, one of the experiences they find most unpleasant is boredom. But essentially it should be a matter of choice: someone who is very happy to, let's say, listen to a piece of music or contemplate art should be free to do so, and not be forced into leading a very complex or complicated life – but equally, folk who want to do a diverse range of things – well, that's feasible too.

For all sorts of reasons I don't think wireheading… is a plausible scenario for our future. Not least, there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children. – David Pearce

– the video/audio interview continues past 10:00

The Knowledge Argument Applied to Ethics

A group of interested AI enthusiasts have been discussing Engineering Machine Consciousness in Melbourne for over a decade. In a recent interview with Jamais Cascio on Engineering Happy People & Global Catastrophic Risks, we discussed the benefits of amplifying empathy without the nasty side effects (possibly through cultural progress or technological intervention – a form of moral enhancement). I have been thinking further about how an agent might think and act differently if it had no 'raw feels' – no self-intimating conscious experience at all.

I posted to the Hedonistic Imperative Facebook group:

Are the limitations of empathy in humans distracting us from the in-principle benefits of empathy?
The side effects of empathy in humans include increased distrust of the outgroup, and limits on the number of people we humans can feel strong empathy for – though in principle the experience of understanding another person's condition from their perspective seems quite useful, at least while we are still motivated by our experience.
But what of the future? Are our posthuman descendants likely to be motivated by their 'experiences of' as well as their 'knowledge about' in making choices regarding others and about the trajectories of civilizational progress?

I wonder whether all 'experience of' can be understood in terms of 'knowledge about' – can the whole of ethics be explained, without being experienced, through knowledge about without any experience of? This reminds me of the Mary's Room/Knowledge Argument* thought experiment. I leaned towards the position that Mary, armed with a fully working knowledge of the visual system and the relevant neuroscience, wouldn't 'learn' anything new on walking out of the grey-scale room and into the colourful world outside.
Imagine an adaptation of the Mary's Room thought experiment – for the time being, let's call it Autistic Savant Angela's Condition – which comes in three classes:

class 1

Angela is a brilliant ethicist and neuroscientist (an expert in bioethics, neuroethics etc.) who (for whatever reason) is an autistic savant with congenital insensitivity to pain and pleasure – she cannot feel pain, pleasure or suffering at all, nor experience what it is like to be someone else who does – she has no intuition of ethics. Throughout her whole life she has been forced to investigate the field of ethics and the concepts of pleasure, bliss, pain and suffering through theory alone. She has a complete mechanical understanding of empathy and of the brain states of subjects participating in various trolley thought experiments and hundreds of permutations of Milgram experiments, and is an expert in philosophies of ethics from Aristotle to Hume to Sidgwick etc. Suddenly there is a medical breakthrough in gene therapy that would guarantee normal human capacity to feel without impairing cognitive ability at all. If Angela were to undergo this gene therapy, would she learn anything more about ethics?

class 2

Same as class 1, except Angela has no concept of other agents.

class 3

Same as class 2, except Angela is a superintelligent AI, and instead of gene therapy the AI receives a software/hardware upgrade that gives it access to the 'fire in the equations' – to experience. Would the AI learn anything more about ethics? Would it act in a more ethical way? Would it produce more ethical outcomes?

 

Implications

Should an effective altruist support a completely dispassionate approach to cause prioritization?

If we were to build an ethical superintelligence, would having access to visceral experiences (i.e. pain/pleasure) change its ethical outcomes?
If a superintelligence were to perform Coherent Extrapolated Volition or Coherent Aggregated Volition, would the kind of future it produced differ if it could experience? Would the likelihoods of various ethical outcomes change?

Is experience required to fully understand ethics? Is experience required to effectively implement ethics?

 

Footnotes

The Knowledge Argument Thought Experiment

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like 'red', 'blue', and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence 'The sky is blue'. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?