The Antispeciesist Revolution – read by David Pearce

The Antispeciesist Revolution


When is it ethically acceptable to harm another sentient being? On some fairly modest(1) assumptions, to harm or kill someone simply on the grounds they belong to a different gender, sexual orientation or ethnic group is unjustified. Such distinctions are real but ethically irrelevant. On the other hand, species membership is normally reckoned an ethically relevant criterion. Fundamental to our conceptual scheme is the pre-Darwinian distinction between “humans” and “animals”. In law, nonhuman animals share with inanimate objects the status of property. As property, nonhuman animals can be bought, sold, killed or otherwise harmed as humans see fit. In consequence, humans treat nonhuman animals in ways that would earn a life-time prison sentence without parole if our victims were human. From an evolutionary perspective, this contrast in status isn’t surprising. In our ancestral environment of adaptedness, the human capacity to hunt, kill and exploit sentient beings of other species was fitness-enhancing(2). Our moral intuitions have been shaped accordingly. Yet can we ethically justify such behaviour today?

Naively, one reason for disregarding the interests of nonhumans is the dimmer-switch model of consciousness. Humans matter more than nonhuman animals because (most) humans are more intelligent. Intuitively, more intelligent beings are more conscious than less intelligent beings; consciousness is the touchstone of moral status.

The problem with the dimmer-switch model is that it’s empirically unsupported, among vertebrates with central nervous systems at least. Microelectrode studies of the brains of awake human subjects suggest that the most intense forms of experience, for example agony, terror and orgasmic bliss, are mediated by the limbic system, not the prefrontal cortex. Our core emotions are evolutionarily ancient and strongly conserved. Humans share the anatomical and molecular substrates of our core emotions with the nonhuman animals whom we factory-farm and kill. By contrast, distinctively human cognitive capacities such as generative syntax, or the ability to do higher mathematics, are either phenomenologically subtle or impenetrable to introspection. To be sure, genetic and epigenetic differences exist between, say, a pig and a human being that explain our adult behavioural differences, e.g. the allele of the FOXP2(3) gene implicated in the human capacity for recursive syntax. Such mutations have little to do with raw sentience(4).

So what is the alternative to traditional anthropocentric ethics? Antispeciesism is not the claim that “All Animals Are Equal”, or that all species are of equal value, or that a human or a pig is equivalent to a mosquito. Rather the antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect. A pig, for example, is of comparable sentience to a prelinguistic human toddler. As it happens, a pig is of comparable (or superior) intelligence to a toddler as well(5). However, such cognitive prowess is ethically incidental. If ethical status is a function of sentience, then to factory-farm and slaughter a pig is as ethically abhorrent as to factory-farm and slaughter a human baby. To exploit one and nurture the other expresses an irrational but genetically adaptive prejudice.

On the face of it, this antispeciesist claim isn’t just wrong-headed; it’s absurd. Philosopher Jonathan Haidt speaks of “moral dumbfounding”(6), where we just know something is wrong but can’t articulate precisely why. Haidt offers the example of consensual incest between an adult brother and sister who use birth control. For evolutionary reasons, we “just know” such an incestuous relationship is immoral. In the case of any comparison of pigs with human infants and toddlers, we “just know” at some deep level that any alleged equivalence in status is unfounded. After all, if there were no ethically relevant distinction between a pig and a toddler, or between a battery-farmed chicken and a human infant, then the daily behaviour of ordinary meat-eating humans would be sociopathic – which is crazy. In fact, unless the psychiatrists’ bible, the Diagnostic and Statistical Manual of Mental Disorders, is modified explicitly to exclude behaviour towards nonhumans, most of us do risk satisfying its diagnostic criteria for the disorder. Even so, humans often conceive of ourselves as animal lovers. Despite the horrors of factory-farming, most consumers of meat and animal products are clearly not sociopaths in the normal usage of the term; most factory-farm managers are not wantonly cruel; and the majority of slaughterhouse workers are not sadists who delight in suffering. Serial killers of nonhuman animals are just ordinary men doing a distasteful job – “obeying orders” – on pain of losing their livelihoods.

Should we expect anything different? Jewish political theorist Hannah Arendt spoke famously of the “banality of evil”(7). If twenty-first century humans are collectively doing something posthuman superintelligence will reckon monstrous, akin to the [human] Holocaust or Atlantic slave trade, it’s tempting to assume our moral intuitions would disclose this to us. Our intuitions don’t disclose anything of the kind; so we sleep easy. But both natural selection and the historical record offer powerful reasons for doubting the trustworthiness of our naive moral intuitions. So the possibility that human civilisation might be founded upon some monstrous evil should be taken seriously – even if the possibility seems transparently absurd today.

One possible speciesist response is to raise the question of “potential”. Even if a pig is as sentient as a human toddler, there is a fundamental distinction between human toddlers and pigs. Only a toddler has the potential to mature into a rational adult human being.

The problem with this response is that it contradicts our treatment of humans who lack “potential”. Thus we recognise that a toddler with a progressive disorder who will never live to celebrate his third birthday deserves at least as much love, care and respect as his normally developing peers – not to be packed off to a factory-farm on the grounds it’s a shame to let good food go to waste. We recognise a similar duty of care for mentally handicapped adult humans and cognitively frail old people. For sure, historical exceptions exist to this perceived duty of care for vulnerable humans, e.g. the Nazi “euthanasia” program, with its eugenicist conception of “life unworthy of life”. But by common consent, we value young children and cognitively challenged adults for who they are, not simply for who they may – or may not – one day become. On occasion, there may controversially be instrumental reasons for allocating more care and resources to a potential genius or exceptionally gifted child than to a normal human. Yet disproportionate intraspecies resource allocation may be justified, not because high IQ humans are more sentient, but because of the anticipated benefits to society as a whole.

Practical Implications.
1. Invitrotarianism.

The greatest source of severe, chronic and readily avoidable suffering in the world today is man-made: factory farming. Humans currently slaughter over fifty billion sentient beings each year. One implication of an antispeciesist ethic is that factory farms should be shut and their surviving victims rehabilitated.

In common with most ethical revolutions in history, the prospect of humanity switching to a cruelty-free diet initially strikes most practically-minded folk as utopian dreaming. “Realists” certainly have plenty of hard evidence to bolster their case. As English essayist William Hazlitt observed, “The least pain in our little finger gives us more concern and uneasiness than the destruction of millions of our fellow-beings.” Without the aid of twenty-first century technology, the mass slaughter and abuse of our fellow animals might continue indefinitely. Yet tissue science technology promises to allow consumers to become moral agents without the slightest hint of personal inconvenience. Lab-grown in vitro meat produced in cell culture rather than a live animal has long been a staple of science fiction. But global veganism – or its ethical invitrotarian equivalent – is no longer a futuristic fantasy. Rapid advances in tissue engineering mean that in vitro meat will shortly be developed and commercialised. Today’s experimental cultured mincemeat can be supplanted by mass-manufactured gourmet steaks for the consumer market. Perhaps critically for its rapid public acceptance, in vitro meat does not need to be genetically modified – thereby spiking the guns of techno-luddites who might otherwise worry about “FrankenBurgers”. Indeed, cultured meat products will be more “natural” in some ways than their antibiotic-laced counterparts derived from factory-farmed animals.

Momentum for commercialisation is growing. Non-profit research organisations like New Harvest(8), working to develop alternatives to conventionally-produced meat, have been joined by hard-headed businessmen. Visionary entrepreneur and Stanford academic Peter Thiel has just funnelled $350,000 into Modern Meadow, a start-up that aims to combine 3D printing with in vitro meat cultivation(9). Within the next decade or so, gourmet steaks could be printed out from biological materials. In principle, the technology should be scalable.

Tragically, billions of nonhuman animals will grievously suffer and die this century at human hands before the dietary transition is complete. Humans are not obligate carnivores; eating meat and animal products is a lifestyle choice. “But I like the taste!” is not a morally compelling argument. Vegans and animal advocates ask whether we are ethically entitled to wait for a technological fix. The antispeciesist answer is clear: no.

2. Compassionate Biology.
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on YouTube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover, even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants(10), for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming”(11) carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions(12).

Speciesism and Superintelligence.
Why should transhumanists care about the suffering of nonhuman animals? This is not a “feel-good” issue. One reason we should care cuts to the heart of the future of life in the universe. Transhumanists differ over whether our posthuman successors will most likely be nonbiological artificial superintelligence; or cyborgs who effectively merge with our hyperintelligent machines; or our own recursively self-improving biological descendants who modify their own genetic source code and bootstrap their way to full-spectrum superintelligence(13). Regardless of the dominant lifeform of the posthuman era, biological humans have a vested interest in the behaviour of intellectually advanced beings towards cognitively humble creatures – if we survive at all. Compared to posthuman superintelligence, archaic humans may be no smarter than pigs or chickens – or perhaps worms. This does not augur well for Homo sapiens. Western-educated humans tend to view Jains as faintly ridiculous for practising ahimsa, or harmlessness, sweeping the ground in front of them to avoid inadvertently treading on insects. How quixotic! Yet the fate of sentient but cognitively humble lifeforms in relation to vastly superior intelligence is precisely the issue at stake as we confront the prospect of posthuman superintelligence. How can we ensure a Jain-like concern for comparatively simple-minded creatures such as ourselves? Why should superintelligences care any more than humans about the well-being of their intellectual inferiors? Might distinctively human-friendly superintelligence turn out to be as intellectually incoherent as, say, Aryan-friendly superintelligence? If human primitives are to prove worthy of conservation, how can we implement technologies of impartial friendliness towards other sentients? And if posthumans do care, how do we know that a truly benevolent superintelligence wouldn’t turn Darwinian life into utilitronium with a communal hug?

Viewed in such a light, biological humanity’s prospects in a future world of superintelligence might seem dire. However, this worry expresses a one-dimensional conception of general intelligence. No doubt the nature of mature superintelligence is humanly unknowable. But presumably full-spectrum(14) superintelligence entails, at the very least, a capacity to investigate, understand and manipulate both the formal and the subjective properties of mind. Modern science aspires to an idealised “view from nowhere”(15), an impartial, God-like understanding of the natural universe, stripped of any bias in perspective and expressed in the language of mathematical physics. By the same token, a God-like superintelligence must also be endowed with the capacity impartially to grasp all possible first-person perspectives – not a partial and primitive Machiavellian cunning of the kind adaptive on the African savannah, but an unimaginably radical expansion of our own fitfully growing circle of empathy.

What such superhuman perspective-taking ability might entail is unclear. We are familiar with people who display abnormally advanced forms of “mind-blind”(16), autistic intelligence in higher mathematics and theoretical physics. Less well known are hyper-empathisers who display unusually sophisticated social intelligence. Perhaps the most advanced naturally occurring hyper-empathisers exhibit mirror-touch synaesthesia(17). A mirror-touch synaesthete cannot be unfriendly towards you because she feels your pain and pleasure as if it were her own. In principle, such unusual perspective-taking capacity could be generalised and extended with reciprocal neuroscanning technology and telemetry into a kind of naturalised telepathy, both between and within species. Interpersonal and cross-species mind-reading could in theory break down hitherto invincible barriers of ignorance between different skull-bound subjects of experience, thereby eroding the anthropocentric, ethnocentric and egocentric bias that has plagued life on Earth to date. Today, the intelligence-testing community tends to treat facility at empathetic understanding as if it were a mere personality variable, or at best some sort of second-rate cognition for people who can’t do IQ tests. But “mind-reading” can be a highly sophisticated, cognitively demanding ability. Compare, say, the sixth-order intentionality manifested by Shakespeare. Thus we shouldn’t conceive of superintelligence as akin to a God imagined by someone with an autism spectrum disorder. Rather, full-spectrum superintelligence entails a God’s-eye capacity to understand the rich multi-faceted first-person perspectives of diverse lifeforms whose mind-spaces humans would find incomprehensibly alien.

An obvious objection arises. Just because ultra-intelligent posthumans may be capable of displaying empathetic superintelligence, how do we know such intelligence will be exercised? The short answer is that we don’t: by analogy, today’s mirror-touch synaesthetes might one day neurosurgically opt to become mind-blind. But then equally we don’t know whether posthumans will renounce their advanced logico-mathematical prowess in favour of the functional equivalent of wireheading. If they do so, then they won’t be superintelligent. The existence of diverse first-person perspectives is a fundamental feature of the natural world, as fundamental as the second law of thermodynamics or the Higgs boson. To be ignorant of fundamental features of the world is to be an idiot savant: a super-Watson(18) perhaps, but not a superintelligence(19).

High-Tech Jainism?
Jules Renard once remarked, “I don’t know if God exists, but it would be better for His reputation if He didn’t.” God’s conspicuous absence from the natural world needn’t deter us from asking what an omniscient, omnipotent, all-merciful deity would want humans to do with our imminent God-like powers. For we’re on the brink of a momentous evolutionary transition in the history of life on Earth. Physicist Freeman Dyson predicts we’ll soon “be writing genomes as fluently as Blake and Byron wrote verses”(20). The ethical risks and opportunities for apprentice deities are huge.

On the one hand, Karl Popper warns, “Those who promise us paradise on earth never produced anything but a hell”(21). Twentieth-century history bears out such pessimism. Yet for billions of sentient beings from less powerful species, existing life on Earth is hell. They end their miserable lives on our dinner plates: “for the animals it is an eternal Treblinka”, writes Jewish Nobel laureate Isaac Bashevis Singer(22).

In a more utopian vein, some utterly sublime scenarios are technically feasible later this century and beyond. It’s not clear whether experience below Sidgwick’s(23) “hedonic zero” has any long-term future. Thanks to molecular neuroscience, mastery of the brain’s reward circuitry could make everyday life wonderful beyond the bounds of normal human experience. There is no technical reason why the pitiless Darwinian struggle of the past half billion years can’t be replaced by an earthly paradise for all creatures great and small. Genetic engineering could allow “the lion to lie down with the lamb.” Enhancement technologies could transform killer apes into saintly smart angels. Biotechnology could abolish suffering throughout the living world. Artificial intelligence could secure the well-being of all sentience in our forward light-cone. Our quasi-immortal descendants may be animated by gradients of intelligent bliss orders of magnitude richer than anything physiologically feasible today.

Such fantastical-sounding scenarios may never come to pass. Yet if so, this won’t be because the technical challenges prove too daunting, but because intelligent agents choose to forgo the molecular keys to paradise for something else. Critically, the substrates of bliss don’t need to be species-specific or rationed. Transhumanists believe the well-being of all sentience(24) is the bedrock of any civilisation worthy of the name.

Also see this related interview with David Pearce on ‘Antispeciesism & Compassionate Stewardship’:

* * *

1. How modest? A venerable tradition in philosophical meta-ethics is anti-realism. The meta-ethical anti-realist proposes that claims such as it’s wrong to rape women, kill Jews, torture babies (etc) lack truth value – or are simply false. (cf. JL Mackie, Ethics: Inventing Right and Wrong, Viking Press, 1977.) Here I shall assume that, for reasons we simply don’t understand, the pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Meta-ethical anti-realists may instead wish to interpret this critique of speciesism merely as casting doubt on its internal coherence rather than a substantive claim that a non-speciesist ethic is objectively true.

2. Extreme violence towards members of other tribes and races can be fitness-enhancing too. See, e.g. Richard Wrangham & Dale Peterson, Demonic Males: Apes and the Origins of Human Violence, Houghton Mifflin, 1997.

3. Fisher SE, Scharff C (2009). “FOXP2 as a molecular window into speech and language”. Trends Genet. 25 (4): 166–77. doi:10.1016/j.tig.2009.03.002. PMID 19304338.

4. Interpersonal and interspecies comparisons of sentience are of course fraught with problems. Comparative studies of how hard a human or nonhuman animal will work to avoid or obtain a particular stimulus give one crude behavioural indication. Yet we can go right down to the genetic and molecular level, e.g. interspecies comparisons of SCN9A genotype. (cf. content/early/2010/02/23/?0913181107.full.pdf) We know that in humans the SCN9A gene modulates pain-sensitivity. Some alleles of SCN9A give rise to hypoalgesia, other alleles to hyperalgesia. Nonsense mutations yield congenital insensitivity to pain. So we could systematically compare the SCN9A gene and its homologues in nonhuman animals. Neocortical chauvinists will still be sceptical of non-mammalian sentience, pointing to the extensive role of cortical processing in higher vertebrates. But recall how neuroscanning techniques reveal that during orgasm, for example, much of the neocortex effectively shuts down. Intensity of experience is scarcely diminished.

5. Held S, Mendl M, Devereux C, and Byrne RW. 2001. “Studies in social cognition: from primates to pigs”. Animal Welfare 10:S209-17.

6. Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion, Pantheon Books, 2012.

7. Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Viking Press, 1963.


9. “PayPal Founder Backs Synthetic Meat Printing Company”, Wired, August 16 2012.



12. The scholarly literature on the problem of wild animal suffering is still sparse. But perhaps see Arne Naess, “Should We Try To Relieve Clear Cases of Suffering in Nature?”, published in The Selected Works of Arne Naess, Springer, 2005; Oscar Horta, “The Ethics of the Ecology of Fear against the Nonspeciesist Paradigm: A Shift in the Aims of Intervention in Nature”, Between the Species, Issue X, August 2010; Brian Tomasik, “The Importance of Wild-Animal Suffering”; and the first print-published plea for phasing out carnivorism in Nature, Jeff McMahan’s “The Meat Eaters”, The New York Times, September 19, 2010.

13. Eden, A.H.; Moor, J.H.; Søraker, J.H.; Steinhart, E. (Eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment, Springer, 2013.

14. David Pearce, The Biointelligence Explosion. (preprint), 2012.

15. Thomas Nagel, The View From Nowhere, OUP, 1989.

16. Simon Baron-Cohen (2009). “Autism: the empathizing–systemizing (E-S) theory” (PDF). Ann N Y Acad Sci 1156: 68–80. doi:10.1111/j.1749-6632.2009.04467.x. PMID 19338503.

17. Banissy, M. J. & Ward, J. (2007). Mirror-touch synesthesia is linked with empathy. Nature Neurosci. doi: 10.1038/nn1926.

18. Stephen Baker. Final Jeopardy: Man vs. Machine and the Quest to Know Everything. Houghton Mifflin Harcourt. 2011.

19. Orthogonality or convergence? For an alternative to the convergence thesis, see Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, 2012; and Eliezer Yudkowsky, Carl Shulman, Anna Salamon, Rolf Nelson, Steven Kaas, Steve Rayhawk, Zack Davis, and Tom McCabe, “Reducing Long-Term Catastrophic Risks from Artificial Intelligence”, 2010.

20. Freeman Dyson, “When Science & Poetry Were Friends”, New York Review of Books, August 13, 2009.

21. As quoted in Jon Winokur, In Passing: Condolences and Complaints on Death, Dying, and Related Disappointments, Sasquatch Books, 2005.

22. Isaac Bashevis Singer, The Letter Writer, 1964.

23. Henry Sidgwick, The Methods of Ethics. London, 1874, 7th ed. 1907.

24. The Transhumanist Declaration (1998, 2009).

David Pearce
September 2012


Moral Enhancement – Are we morally equipped to deal with humanity’s grand challenges? Anders Sandberg

The topic of Moral Enhancement is controversial (and often misrepresented); it is considered by many to be repugnant – provocative questions arise like “whose morals?”, “who are the ones to be morally enhanced?”, “will it be compulsory?”, “won’t taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?”, “shouldn’t people be concerned that enhancements which alter character traits might undermine consumers’ authenticity?”

Humans have a built-in capacity for learning moral systems from their parents and other people. We are not born with any particular moral [code] – but with the ability to learn one, just as we learn languages. The problem is of course that this built-in facility might have worked quite well back in the Stone Age, when we were evolving in small tribal communities – but it doesn’t work that well when we are surrounded by a high-tech civilization, millions of other people and potentially very dangerous technology. So we might need to update our moral systems, and that is the interesting question of moral enhancement: can we make ourselves more fit for the current world?

Anders Sandberg – Are we morally equipped for the future?
Humans have an evolved capacity to learn moral systems – we became more adept at learning moral systems that aided our survival in the ancestral environment – but are our moral instincts fit for the future?

Illustration by Daniel Gray

Let’s build some context. For millennia humans have lived in complex social structures that constrain and encourage certain types of behaviour. More recently, for similar reasons, people go through years of education, at the end of which they are (for the most part) more able to function morally in the modern world – though this world is very different from that of our ancestors, and when considering the possibilities for vastly radical change at breakneck speed in the future, it’s hard to know how humans will keep up both intellectually and ethically. This is important to consider, as the degree to which we shape the future for the good depends both on how well and how ethically we solve the problems needed to achieve change that on balance benefits humanity (and arguably all morally relevant life-forms).

Can we engineer ourselves to be more ethically fit for the future?

Peter Singer discussed how our circles of care and compassion have expanded over the years – through reason we have been able to expand our natural propensity to act morally and the circumstances in which we act morally.

We may need to expand our circle of ethical consideration to include artificial life – considering certain types of software as moral patients.

So, if we think we could use a boost in our propensity for ethical progress,

How do we actually achieve ideal Moral Enhancement?

That’s a big topic (see a list of papers on the subject of ME here) – the answers may depend on what our goals and preferences are. One idea (among many others) is to regulate the level of oxytocin (the “cuddle hormone”) – though this may come with the drawback of increasing distrust of the out-group.
Since morality depends on us being able to make accurate predictions and solve complex ethical problems, ‘Intelligence Enhancement‘ could be an effective aspect of moral enhancement. 

Morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – to allow our higher-order values to govern our lower-order values – is also important; that might actually require us to literally rewire our brains, or have biochips that help us do it.

Anders Sandberg – Are we morally equipped for the future?

How we decide whether to use Moral Enhancement Therapy will be interesting – it may be needed to help solve global coordination problems; to increase the likelihood that we will, as a civilization, cooperate and cope with many known and as yet to be realised complex ethical quandaries as we move through times of unprecedented social and technological change.

This interview is part of a larger series that was completed in Oxford, UK late 2012.

Interview Transcript

Anders Sandberg

So humans have a kind of built-in capacity for learning moral systems from their parents and other people. We’re not born with any particular moral [code], but with the ability to learn it just like we can learn languages. The problem is of course that this built-in facility might have worked quite well back in the Stone Age, when we were evolving in small tribal communities – but it doesn’t work that well when we are surrounded by a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems. And that is the interesting question of moral enhancement:

  • can we make ourselves more fit for the current world?
  • And what kind of fitness should we be talking about?

For example, we might want to improve on altruism – to be more welcoming to strangers. But in a big society, in a big town, there are of course going to be some strangers that you shouldn’t trust. So it’s not just blind trust you want to enhance – you actually want to enhance the ability to make careful judgements; to figure out what’s going to happen, and whom you can trust. So maybe you want to enhance some other aspect – maybe the care, the circle of care, is what you want to expand.

Peter Singer pointed out that our circles of care and compassion have been slowly expanding – from our own tribe and our own gender, to other genders, to other peoples, and eventually maybe to other species. But a lot of this is still biologically based – it’s going on here in the brain and might be modified. Maybe we should artificially extend these circles of care, to make sure that we actually do care about those entities we ought to be caring about. This might be a problem, of course, because some of these agents might be extremely different from what we’re used to.

For example, machine intelligence might produce machines or software that are ‘moral patients’ – software whose suffering we actually ought to care about. That might be very tricky, because the pattern receptors up in our brains are not very tuned for that – we tend to think that if it’s got a face and it speaks, then it’s human and we can care about it. But who thinks about Google? Maybe we could get super-intelligences that we actually ought to care a lot about, but we can’t recognize them at all because they’re so utterly different from ourselves.

So there are some easy ways of modifying how we think and react – for example, by taking a drug. The hormone oxytocin is sometimes called ‘the cuddle hormone’ – it’s released when breastfeeding and when having bodily contact with your loved one, and it generally seems to make us more altruistic, more willing to trust strangers. You can sniff it, run an economic game, and immediately see a change in response. It might also make you a bit more ego-centric: it does enlarge feelings of comfort and family friendliness – except that it’s only within what you consider to be your family. So we might want to tweak that.

Similarly, we might think about adding links to our brains that allow us to think in better ways. After all, morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – that allows our higher-order values to control our lower-order values – is also important; that might actually require us to literally rewire, or have biochips that help us do it.

But most important is that we get the information we need to retrain the subtle networks in the brain, in order to think better. And that’s going to require something akin to therapy – though it might not necessarily be about lying on a sofa and telling your psychologist about your mother. It might very well be a bit of training, a bit of cognitive enhancement, maybe a bit of brain scanning – to figure out what actually ails you. It’s probably going to look very, very different from anything Freud or anybody else envisioned for the future.

But I think in the future we’re actually going to try to modify ourselves – to be extra certain, maybe even extra moral – so we can function in a complex, big world.


Related Papers

Neuroenhancement of Love and Marriage: The Chemicals Between Us

Anders contributed to this paper, ‘Neuroenhancement of Love and Marriage: The Chemicals Between Us‘, which reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment – so-called neuroenhancement of love – and the arguments for and against these biological interventions to influence love. The authors argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.

Human Engineering and Climate Change

Anders also contributed to the paper “Human Engineering and Climate Change” which argues that cognitive, moral and biological enhancement could increase human ecological sustainability.

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel:
b) Donating via Patreon: and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future:

Amazing Progress in Artificial Intelligence – Ben Goertzel

At a recent conference in Beijing (the Global Innovators Conference) I did yet another video interview with the legendary AGI guru Ben Goertzel. This is the first part of the interview, where he talks about some of the ‘amazing’ progress in AI over recent years – including DeepMind’s AlphaGo sealing a 4–1 victory over Go grandmaster Lee Sedol, progress in hybrid architectures in AI (deep learning, reinforcement learning, etc.), and interesting academic research in AI being taken up by tech giants – before finally providing some sobering remarks on the limitations of deep neural networks.

Can we build AI without losing control over it? – Sam Harris

Sam Harris (author of The Moral Landscape and host of the Waking Up podcast) discusses the need for AI safety – while fun to think about, we are unable to “marshal an appropriate emotional response” to improvements in AI and automation and the prospect of dangerous AI; it’s a failure of intuition that we respond to it as we would a far-fetched sci-fi doom scenario.

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

The Simulation Argument – How likely is it that we are living in a simulation?

The simulation hypothesis doesn’t seem to be a terse, parsimonious explanation for the universe we live in. If what is most important is to simulate ancestors, what’s the motivation for all the hugely detailed rendering of space? Why not just simulate Earth, or our solar system, or our galaxy?

People often jump to conclusions and assume* that the great simulators have infinite computing power. Infinity – another thing we have never been able to measure 🙂 Max Tegmark wrote an interesting piece about why infinity is probably not real. Until we have evidence of infinities in the real world, I believe we should treat all thought experiments that rely on infinities as mere intuition pumps.

If we drop the assumption that potential simulators have infinite computing power, and assume instead that they have a finite amount, it seems logical that there would be a cost/benefit trade-off – between computation spent and the detail and number of simulations – that would need to be taken into account. Limits to available computation would decrease the motivation for building huge numbers of simulations and/or highly detailed simulations.
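As a toy illustration of that trade-off (with entirely made-up numbers – nothing here estimates real simulator economics), a fixed compute budget can fund many coarse simulations or only a few detailed ones:

```python
# Illustrative toy model: a simulator with a finite compute budget trades
# off per-simulation detail against the number of simulations it can run.

def affordable_sims(budget: float, cost_per_sim: float) -> int:
    """How many simulations a finite budget can sustain at a given cost."""
    return int(budget // cost_per_sim)

budget = 1e12            # hypothetical total compute units (invented)
low_detail_cost = 1e6    # a cheap, coarse-grained simulation (invented)
high_detail_cost = 1e10  # an expensive, physics-accurate one (invented)

print(affordable_sims(budget, low_detail_cost))   # → 1000000
print(affordable_sims(budget, high_detail_cost))  # → 100
```

The point is only qualitative: the more detail each simulation demands, the fewer simulations a finite civilization is motivated (or able) to run.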

People think their way around the astronomical computational waste by adding yet another assumption* – that the simulation may grow to fill all the spaces we probe and interact with – though this would still increase the computational requirements to run the simulation. With this assumption, we should believe that if we are in a simulation, it is costing the simulators a whole lot more to run now – when we can stare into the depths of physics and peer about the universe – than it did just 500 years ago. It has been argued that we should avoid building big computers or performing certain experiments, because the simulators may decide to turn off our simulation once it begins costing them too much to run.

If we are in a simulation – many argue that, for the most part, it probably doesn’t matter. Based on Newcomb’s problem, even if we are in an elegant simulation, the simulated laws of physics will behave just as they would if they were actual laws.
If we feel compelled to put an estimate on it – the more we develop empirically informed naturalistic explanations for the universe, the lower our estimates should be that we are in a simulation.

If there are considerable costs to creating simulations with the detail of our universe – why simulate ancestors if it costs so much?
What is so important about ancestor simulations to justify the expense?

* The more assumptions we add to a hypothesis, the less certain we should be about it.

The Seminal Nick Bostrom Interview

Here is the interview I did with Bostrom in 2012:

Why so much confidence that we are in a simulation?

I hear reports that Bostrom’s confidence that we are in a simulation has decreased over the years (less than 10% I heard recently – I can’t find a direct reference right now) – while others, after he wrote the seminal paper, have increased their confidence quite dramatically. Based on various article headlines, I am fairly certain that many latch onto a surface-level understanding of the arguments that supports their existing biases. So it’s probably best to read the paper and understand the Simulation Hypothesis and the Simulation Argument before hand-waving about what Bostrom thinks.
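For readers who want the crux of the paper rather than the headlines: the argument turns on a simple fraction (reconstructed here from Bostrom’s 2003 paper; the notation is a simplified paraphrase) – the proportion of all observers with human-type experiences who are simulated:

```latex
% f_P     : fraction of human-level civilizations that reach a posthuman
%           stage and choose to run ancestor simulations
% \bar{N} : average number of ancestor simulations such a civilization runs
% f_sim   : fraction of observers with human-type experiences that are
%           simulated
f_{\mathrm{sim}} = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
```

Unless the product f_P·N̄ is very small, f_sim is close to 1 – which is why the argument concludes that either almost no civilizations run ancestor simulations, or almost all human-type observers are simulated.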

How much credence should we give sound arguments that are empirically unfalsifiable?

I’d say some – not everything can be falsified – but generally I rank arguments backed by empirical evidence higher than those that aren’t.

I wonder what the Intelligent Design movement thinks of this?

Some atheists may be worried by such philosophical implications – but most seem to think the Simulation Argument is cool.



Various links on the simulation argument and hypothesis curated by Bostrom – including the original paper:

All Aboard The Ship of Theseus with Keith Wiley

An exploration of the philosophical concept of metaphysical identity, using numerous variations on the infamous Ship of Theseus thought experiment.

Video interview with Keith Wiley

Note: a separate text interview is below.



Keith Wiley is the author of A Taxonomy and Metaphysics of Mind-Uploading, available on Amazon.

The ship of Theseus, also known as Theseus’ paradox, is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object. The paradox is most notably recorded by Plutarch in Life of Theseus from the late first century. Plutarch asked whether a ship that had been restored by replacing every single wooden part remained the same ship.

The paradox had been discussed by other ancient philosophers such as Heraclitus and Plato prior to Plutarch’s writings, and more recently by Thomas Hobbes and John Locke. Several variants are known, including the grandfather’s axe, which has had both head and handle replaced.
See more at Wikipedia…
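The piecemeal replacement at the heart of the paradox can be made concrete with a tiny toy model (purely illustrative; the plank names and counts are invented): swap one part per step and watch the overlap with the original material fall to zero, even though each successive ship differs from its predecessor by only a single part.

```python
# Toy Ship of Theseus: replace one plank per step. Material overlap with
# the original drops to nothing, yet continuity is preserved at every
# step, since adjacent states differ by only one part.

original = {f"plank_{i}" for i in range(10)}
ship = set(original)

for i in range(10):
    ship.remove(f"plank_{i}")     # take away an old timber...
    ship.add(f"new_plank_{i}")    # ...and put a new and sound one in its place

# After full replacement, no original material remains:
print(ship & original)  # → set()
```

The model captures exactly the tension Plutarch poses: by material composition the final ship shares nothing with the original, yet by step-by-step continuity it was never anything other than “the ship”.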

Text Interview

Note this is not a transcription of the video/audio interview.

The Ship of Theseus Metaphor

Adam Ford: Firstly, what is the story or metaphor of the Ship of Theseus intended to convey?

Keith Wiley: Around the first century AD, Plutarch wrote several biographies, including one of the king Theseus entitled Life of Theseus, in which he wrote the following passage:

The ship on which Theseus sailed with the youths and returned in safety, the thirty-oared galley, was preserved by the Athenians down to the time of Demetrius Phalereus. They took away the old timbers from time to time, and put new and sound ones in their places, so that the vessel became a standing illustration for the philosophers in the mooted question of growth, some declaring that it remained the same, others that it was not the same vessel.
— Plutarch

People sometimes erroneously believe that Plutarch presents the scenario (replacing a ship piecemeal-style until all original material is absent) with a conclusion or judgment – i.e., that it prescribes the “correct” way to interpret the scenario (as to whether, yes or no, the ship’s identity is preserved). However, as you can see from the passage above, this is not the case. Plutarch left the question open. He merely poses the question and leaves it to the reader to ruminate on an actual answer.

The specific questions in that scenario are:

  • Does identity require maintaining the same material components? That is, is identity tied to, and indicated by, a specific set of atoms?
  • If not, then does preservation of identity require some sort of temporally overlapping sequence of closely connected parts?

The more general question being asked is: What is the nature of identity? What are its properties? What are its requirements (to claim preservation under various circumstances)? What traits specify identity and indicate the transformations under which identity may be preserved and under which it is necessarily lost?

Here is a video explainer by Keith Wiley (intended to inspire viewers to think about identity preservation)

Adam Ford: How does this story relate to mind uploading?

Keith Wiley: The identity of relatively static objects, and of objects not possessing minds or consciousness, is an introduction to the thornier question of metaphysical personal identity, i.e., the identity of persons. The goal in considering how various theories of identity describe what is happening in the Ship of Theseus is to prime our thinking about what happens to personal identity of people in analogous scenarios. For example, in a most straightforward manner, the Ship of Theseus asks us to consider how our identity would be affected if we replaced, piecemeal style, all the material in our own bodies. The funny thing is, this is already the case! It is colloquially estimated that our bodies turn over their material components approximately every seven years (whether this is precisely accurate is beside the point). The intent is not that a conclusion drawn from the Ship of Theseus definitively resolves the question concerning personal identity, because the former is a much simpler scenario. The critical distinction is that people are more obviously dynamic across time than static physical objects because our minds undergo constant psychological change. This raises the question of whether some sort of “temporal continuity” is at play in people that does not take effect in ships. There is also the question of whether consciousness somehow changes the discussion in radical ways. So the Ship of Theseus is not conclusive on personal identity. It is just a way to get us started in thinking about such issues.

Adam Ford: Fishing for clarification on how you use the term ‘identity’ – Robin Hanson (who considers a scenario of future uploads in Age of Em) enquired about what kind of identity concept you are interested in. That is, what function do you intend this concept to serve?

Keith Wiley: Sure. First, and this might not be what Robin meant, there are different fundamental kinds of identity, two big ones being quantitative and numerical. Two things quantitatively identical possess the same properties, but are not necessarily “the same entity”. Two things numerically identical are somehow “the same thing” – which is problematic in its phrasing, since they were admitted to be “two things” to begin with. The crucial distinction is whether numerical identity makes any difference, or whether quantitative identity is all that fundamentally matters.
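A loose programming analogy (my addition, not Keith’s terminology): quantitative identity resembles value equality, while numerical identity resembles reference identity – two objects can have all the same properties without being one and the same object.

```python
# Two lists with identical contents: quantitatively "identical"...
a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)   # → True  (same properties: value equality)
print(a is b)   # → False (not the same object: reference identity)

# One object under two names: numerically identical.
c = a
print(a is c)   # → True
```

The analogy is imperfect for minds, of course, but it shows why “are they the same?” needs disambiguation before it can be answered.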

For me, I phrase the crucial question of personal identity relative to mind uploading in the following way: do we grant equal primacy of claim to the original single identity to all minds (people) who psychologically descend from that common ancestral mind (person)? I always phrase it this way: granting primacy in claims to a historical identity. Do we tolerate the metaphysical interpretation that all descendant minds are equal in the primacy of their claim to the identity they perceive themselves to be? Alternatively, do we disregard such claims, dictating to others that they are not, in fact, who they believe themselves to be, and that they are not entitled to the rights of the people they claim to be? My concerns are:

  • bias (differing assignments of traits to various people),
  • prejudice (differing assignments of values, claims, or rights resulting from bias),
  • and discrimination (actions favoring and dismissing various people, resulting from prejudices).

Adam Ford: Is ‘identity’ the most appropriate word to be using here?

Keith Wiley: Well, identity certainly doesn’t seem to fully “work”. There’s always some boundary case or exception that undermines any identity theory we attempt to assign. My interest, such as it is at this point in history (when mind uploading isn’t remotely possible yet and this is all abstract philosophical musing), is only secondarily the nature of identity. The primary concern, informed by those secondary aspects of identity, is whether we should regard uploads in some denigrated fashion. Should we dismiss their claims that they are the original person, that they should be perceived as the original person, that they should be treated and entitled and “enrighted” as the original person? I don’t just mean from a legal standpoint. We can pass all sorts of laws that force people to be respectful, but that’s an uninteresting question to me. I’m asking if it is fundamentally right or wrong to regard an upload in a denigrated way when judging its identity claims.

Ontology, Classification & Reality

Adam Ford: As we move forward the classification of identity will likely be fraught with struggle. We might need to invent new words to clarify the difference between distinct concepts. Do you have any ideas for new words?

Keith Wiley: The terminology I generally use is that of mind descendants and mind ancestors. In this way we can ask whether all minds descending from a common ancestral mind should be afforded equal primacy in their claim to the ancestral identity, or alternatively, whether there is a reasonable justification for exhibiting biases, prejudices, and discriminations against some minds over such questions. Personally, I don’t believe any such asymmetry in our judgment of persons and their identity claims can be grounded on physical or material traits (such as whose brain is composed of more matter from the ancestral brain – which comes up when debating nondestructive uploading scenarios).

Adam Ford: An appropriate definition for legal reasons?

Keith Wiley: I find legal distinctions to be uninteresting. It used to be illegal for whites and blacks to marry. Who cares what the law says from a moral, much less metaphysical, perspective? I’m interested in finding the most consistent, least arbitrary, and least paradoxical way to comprehend reality, including the aspect of reality that describes how minds relate to their mental ancestors.

Adam Ford: For scientific reasons?

Keith Wiley: I don’t believe this is a scientific question. How to procedurally accomplish uploading is a scientific question. Whether it can be done in a nondestructive way, leaving the original body and brain unharmed, is a scientific question. Whether multi-uploading (producing multiple uploads at once) is technically possible is a scientific question, say via an initial scan that can be multi-instantiated. I think those are crucial scientific endeavors that will be pursued in the future, and I participate in some of the discussions around that research. But at this point in history, when nothing like mind uploading is possible yet, I am pursuing other aspects, nonscientific aspects, namely the philosophical question of whether we have the correct metaphysical notion of identity in the first place, and whether we are applying identity theories in an irrational, or even discriminatory, fashion.

Implications for Brain Preservation

Adam Ford: Potential brain preservation (including cryonics) customers may be interested in knowing the likely science of reanimation (which, it has been suggested, includes mind uploading) – and the type of preservation that will most likely achieve the best results. Even though we don’t have mind uploading yet, people are committing their brains to preservation strategies that are to some degree based on strategies for revival. Mummification? No – that probably won’t work. Immersion in a saline-based solution? Yes – for short periods of time. Plastination? Yes, but only if it’s the connectome we are after… And then there are different methods of cryonic suspension that may be tailored to different intended outcomes – do you want to destructively scan the brain layer by layer and be uploaded in the future? Do you want to be able to fully revive the actual brain in the (potentially longer-term) future?

Keith Wiley: People closer to the cryonics community than myself, such as some of my fellow BPF board members, claim that most current cryonics enthusiasts (and paying members or current subjects) are not of the mind uploading persuasion, preferring biological revival instead. Perhaps because they tend to be older (baby boomer generation) they have not bought into computerization of brains and minds. Their passion for cryonics is far more aligned with the prospect of future biological revival. I suspect there will be a shift toward those of a mind uploading persuasion as the newer generations, more comfortable with computers, enter the cryonics community.

As you described above, there are a few categories of preservation and a few paths of potential revival. Preservation is primarily of two sorts: cryogenic (and at least conceivably reversible), and room-temperature (and inconceivably reversible). The former is amenable to both biological revival and mind uploading. The latter is exclusively amenable to mind uploading. Why would one ever choose the latter option, then? Simple: it might be the better method of preservation! It might preserve the connectome in greater detail, for longer periods of time, with lower rates of decay — or it might simply be cheaper or otherwise easier to maintain over the long term. After all, cryonic storage requires cryonic facilities and constant nitrogen reintroduction as it boils off. Room-temperature storage can be put on the shelf and forgotten about for millennia.

Adam Ford: What about for social (family) reasons?

Keith Wiley: This is closer to the area where I think and write, although not necessarily in a family-oriented way – but social in terms of whether our social contracts with one another should justify treating certain people in a discriminatory fashion, and whether there is a rational basis for such prejudices. Not that any of this will be a real-world issue to tackle for quite some time. But perhaps some day…

Adam Ford: What if the intended outcomes of brain preservation are for subjective personal reasons?

Keith Wiley: I would admit that much of my personal interest here is to try to grind out the absolutely most logical way to comprehend minds and identity relative to brains, especially under the sorts of physical transformations that brains could hypothetically experience (Parfit’s hemispherical fission, teleportation, gradual nanobot replacement, freeze-slice-scan-and-emulate, etc.).


Adam Ford: In relation to appropriate definitions of ‘identity’ for scientific reasons – what are your thoughts on the whole map/territory ‘is science real’ debate? Where do you sit – scientific realism, anti-realism, or structural realism (epistemic or ontic)? What’s your favorite?

Keith Wiley: I suppose I lean toward scientific realism (to my understanding: scientific claims and truths hold real truth value, not just current societal “perspective”, and furthermore they can be applied to yet-to-be-observed phenomena), although antirealism is a nifty idea (scientific truths are essentially those we have yet to disprove but expect to overturn eventually; or, furthermore, unobserved phenomena are not reasonable subjects of scientific inquiry). The reason I don’t like the latter is that it leads to anti-intellectualism, which is a huge problem for our society. Rather than overturning or disregarding scientific theories, I prefer to interpret it as our refining them – saying that new theories apply in corners where the old ones didn’t fit well (Newton’s laws are fine in many circumstances, but are best supplemented by quantum mechanics at the boundaries of their applicability). Structural and ontic realism are currently vague to me. I’ve read about them but haven’t really ground through their implications yet.

Adam Ford: If we are concerned about our future and the future of things we value we perhaps should ask a fundamental question: How do things actually persist? (Whether you’re a perdurantist or an endurantist – this is still a relevant question – see 5.2 ‘How Things Persist?’ in ‘Endurantism and Perdurantism’)

Keith Wiley: Perdurantism and endurantism are not terms I have come across before. I do like the idea of conceptualizing objects as 4D temporal “worms”. I describe brains that way in my book, for example. If this is the “right” way (or at least a good way) to conceive of the existence of physical objects, then it partially solves the persistence or preservation-of-identity problem: preservation of identity is the temporal stream of physical continuity. The problem is, I reject any physical requirement for explicitly *personal* identity of minds, because there appears to be no associated physical trait — plus that would leave open how to handle brain fission, à la Parfit. So worms just *can’t* solve the problem of personal identity, only of physical objects.

Adam Ford: Cybernetics – signal is more important than substrate – has cybernetics influenced your thinking? If so, how?

Keith Wiley: If by signal, you mean function, then I’ve always held that the functional traits of the brain are far more important (if not entirely more important) than mere material components.

Adam Ford: “Signal is more important than substrate” – yet the signal quality depends on the substrate. Surely a ship’s substrate is not as tightly coupled to its function of moving across a body of water (wood, fiberglass, even steel will work) as a conscious human mind is to its biological brain. In terms of the granularity of replacement parts – how much is needed?

Keith Wiley: Good question. I have no idea. I tend to presume the requisite level is action potential processing and generation, which is a pretty popular assumption I think. We should be open on this question at this time in history and current state of scientific knowledge.

Adam Ford: What level of functional representation is needed in order to preserve ‘selfhood’?

Keith Wiley: Short answer: We don’t know yet. Long answer, it is widely presumed that the action-potential patterns of the connectome are where the crucial stuff is happening, but this is a supposition. We don’t know for sure.

Adam Ford: A Trolley Problem applied to Mind-Uploaded Clones: As in the classic trolley problem, a trolley is hurtling down a track towards 5 people. As in the classic case, you can divert it onto a separate track by pulling a nearby lever. However, suddenly 5 functionally equivalent carbon copies* of the original 5 people appear on the separate track. Would you pull the lever to save the originals but kill the copies? Or leave the originals to die, saving the copies? (*assume you just know the copies are functionally equivalent to the originals)

Keith Wiley: Much of my writing focuses on mind uploading and the related question of what minds are and what personal identity is. My primary claim is that uploads are wholly human in their psychological traits and human rights, and furthermore that they have equal primacy in their claim to the identity of the person who preceded an uploading procedure — even if the bio-original body and brain survive! The upload is still no less “the original person” than the person housed in the materially original body, precisely because bodies and material preservation are irrelevant to who we are, by my reckoning. If this is not the case, then how can we solve the fission paradox? Who gets to “be the original” if we split someone in two? The best solution is that only psychological traits matter and material traits are simply irrelevant.

So, for those reasons, I would rephrase your trolley scenario thusly: track one has five people, track two has five other people. Coincidentally, pairs of people from each track have very recently diverging memories, but the scenario is psychologically symmetrical between the two tracks even if there is some physical asymmetry in terms of how old the various material compositions (bodies) are. So we can disregard notions of asymmetry for the purpose of analyzing the moral or identity-preserving-killing implications of the trolley problem. It is simply “Five people on one track, five on another. Should you pull the lever, killing those on the diverted track to save those on the initial track?” That’s how I rephrase it.

Adam Ford: I wonder if the experiment would yield different results if there were 5 individuals on one track and 6 copies of 1 person on the other? (As some people suggest that copies are actually identical to the original – eg for voting purposes)

Keith Wiley: But they clearly aren’t identical in the scenario you described. The classic trolley problem has always implied that the subjects are reasonably alert and mentally dynamic (thinking). It isn’t carefully described so as to imply that the people involved are explicitly unconscious, to say nothing of the complexities involved in rendering them as physically static objects (preserved brains undergoing essentially no metabolic or signal-processing (action-potential) activity). The problem is never posed that way. Consequently, they are all awake and therefore divergent from one another – distinct individuals with all the rights of individual personhood. So it’s just five against six in your example. That’s all there is to it. People might suggest, as you said above, that copies are identical to each other (or to the original), but those people are just wrong.

So an interesting question, then, is what if the various subjects involved actually are unconscious or even rigidly preserved? Can we say their psychological sequences have not diverged and that they therefore represent redundant physical instantiations of a given mind? I explore this exact question in my book, by the way. I think a case could be made that until psychological divergence (until the brains are rolling forward through time, accumulating experiences and memories) we can say they are redundant in terms of identity and associated person-value. But to be clear: if the bio-original was statically preserved, then uploaded or duplicated, and then both people were put on the train tracks in their preserved state – physically identical, frozen with no ongoing psychological experience – then I would be clear to state that while it might not matter if we kill the upload, it *also* doesn’t matter if we choose the other way and kill the bio-original! That is the obvious implication of my reasoning here. And in your case above, if we have five distinct people on one track (let’s say everyone involved is statically preserved) and six uploads of one of those people on the other track, then we could recast the problem as “five on one track and one on the other”. The funny thing is, if we save the six and revive them, then, after the fact, we have granted life to six distinct individuals – but we can only say that after we revive them, not at the time of the trolley experiment when they are statically preserved. So now we are speculating on the “tentative” split personhood of a set of identical but static minds, based on a later time when they might be revived. Does that tentative individuality grant them individuality while they are still preserved? Does the mere potential to diverge and individualize grant them full-blown distinct identity before the divergence has occurred? I don’t know. Fascinating question.
I guess the anti-abortion-choice and pro-abortion-choice debate has been trying to sort out the personhood of tentative, potential, or possible persons for a long time (and by extension, whether contraception is acceptable hits the same question). We don’t seem to have all agreed on a solution there yet, so we probably won’t agree in this case either.

Philosophy of identity

Adam Ford: Retention of structure across atomic change – is identity the structure, the atomic composition, the atomic or structural continuum through change, or a mixture?

Keith Wiley: Depends on one’s chosen theory of identity of course. Body theory, psychological theory, psychological branching theory, closest continuer theory, 4D spacetime “worm” theory. There are several to choose from, but I find some of them more paradox-prone than others, and I generally take that as an indication of a weak theory. I’m a branchist, although on some accounts the distinction from worm theory is vanishingly small.

Adam Ford: Leibniz thought about the Identity of indiscernibles (principle in ontology that no two things can have all properties the same) – if objX and objY share all the same properties, are they the same thing? If KeithX and KeithY share the same functional characteristics are they the same person?

Keith Wiley: But do they really share the same properties to begin with, or is the premise unfounded? When people casually analyze these sorts of scenarios, the two people are standing there, conscious, wondering if someone is about to pass judgment on them and kill them. They are experiencing the world from slightly different sensorial vantage points (vision, sound, etc.). Their minds have almost certainly diverged in their psychological state within mere fractions of a second of regaining consciousness. So they aren’t functionally identical in the first place. Thus the question is flawed, right? The question can only be applied if they are unconscious and rigidly preserved (frozen perhaps). Although I believe a case could be made that mere lack of consciousness is sufficient to designate them *psychologically* identical even if they are not necessarily physically identical due to microscopic metabolic variations — but I leave that subtlety as an open question for the time being.

Adam Ford: Here is a Symmetric Universe counterexample – Max Black – two distinct perfect spheres (or two Ship of Theseuses) are two separate objects even though they share all the same properties – but don’t share the same space-time. What are your thoughts?

Keith Wiley: This is very close to worm theory. It distinguishes seemingly identical entities by considering their spacetime worms, which squiggle their way through different spacetime paths and are therefore not identical in the first place. They never were. The reason they appeared identical is that we only considered the 3D spatial projection of their truly 4D spacetime structure. You can easily alias pairs of distinct higher-dimensional entities by looking only at their projections onto lower dimensions and thereby wrongly conclude that they are identical when, in fact, they never were in their true higher-dimensional structure. For example, consider two volumes, a sphere and a cylinder. They are 3D. But project them onto a 2D plane (at the right angle) and you get two circles. You might wrongly conclude they are identical, but they weren’t to begin with! You simply ignored an entire dimension of their nature. That’s what the 4D spacetime worm says about the identity of physical objects.
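Wiley’s sphere-and-cylinder example can be checked numerically. Here is a minimal Python sketch (the particular solids, the sampling scheme, and all function names are my own illustrative choices, not anything from the interview): two different 3D solids cast exactly the same 2D shadow.

```python
import random

# Two distinct 3D solids: a unit sphere and a unit-radius cylinder of height 2.
def in_sphere(x, y, z):
    return x * x + y * y + z * z <= 1.0

def in_cylinder(x, y, z):
    return x * x + y * y <= 1.0 and abs(z) <= 1.0

def shadow(solid, x, y, samples=200):
    # (x, y) lies in the 2D projection if some sampled z puts (x, y, z) inside.
    return any(solid(x, y, -1.0 + 2.0 * k / samples) for k in range(samples + 1))

# The solids differ, yet every point we test lands in both shadows or neither:
# both projections are the same unit disk.
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-1.5, 1.5), random.uniform(-1.5, 1.5)
    assert shadow(in_sphere, x, y) == shadow(in_cylinder, x, y)
print("distinct 3D solids, identical 2D projections")
```

Two different volumes, one shadow: exactly the aliasing Wiley describes when a 4D spacetime worm is viewed only through its 3D projection.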

However, once we dismiss any relevance or importance of physical traits anyway (because I reject body identity on the matter of personal identity, favoring psychological identity), then the 4D worm becomes more convoluted. The question then becomes, what sort of “time worm” describes psychological changes over time instead of physical, structure, and material changes over time? I think it’s as simple as: take an information pattern instantiated in a physical system (a brain), produce a second physical instantiation, and now readily conclude that the psychological temporal worm (just a temporal sequence of psychological states frankly) has diverged.

Adam Ford: Nice answer! – I’m certainly interested in hearing more about worm theory – I think this wikipedia source is about the same thing:
Do you have any personal writings I can point at in the text form of the interview?

Keith Wiley: Ah, I hadn’t heard that term before. Thanks for the reference. Well, I always refer to my book of course, and more recently Randal Koene and I published a paper in the Journal of Consciousness Studies this past March.

(See the free near-final version on arXiv.)

Adam Ford: David Pearce is skeptical that we, as subjects of experience, are actually enduring metaphysical egos – he seems more of a stage theorist – holding that each moment of subjective experience is fleeting, persisting only through one cycle of quantum coherence delimited by decoherence.

Keith Wiley: Hmmm, I see the distinction in the link to stage theorist you provided above, and I do not believe I am committed to a position on that question. I go both ways in my own writing, sometimes describing things as true 4D entities (I describe brains that way in my book) but also writing quite frequently in terms of “mind descendants of mind ancestors”. That phrasing admits that perhaps identity does not span time in a temporal worm, but rather that it consists of instantaneous time slices of momentary identity connected in a temporal sequence. Like I said, I am uncommitted on this distinction, at least for now.

Identity: Accidental properties vs Essential properties

Adam Ford: Is the sense of an enduring metaphysical ego really an ‘accidental property’ (based on our intuitions of self) rather than an ‘essential property’ of identity?

Keith Wiley: It is possible we don’t yet know what a mind is in sufficient detail to answer such a question. I confess to not being entirely sure what the question is asking. That said, it is possible that conscious and cognitively rich aliens have come up with a fairly different way of comprehending what their minds actually are, and consequently may also have rather bizarre notions of what personal identity is.

Note that in the video, I sometimes offer an answer to the question “Did we preserve the ship in this scenario?” and I sometimes don’t, simply asking the viewer “So did we preserve it or not? What do you think?” This is because I’m certainly not sure of all the answers to this question in all the myriad scenarios yet.

Adam Ford: This argument is criticized by some modern philosophers on the grounds that it allegedly derives a conclusion about what is true from a premise about what people know. What people know or believe about an entity, they argue, is not really a characteristic of that entity.
There may be a problem in that what is true about a phenomenon or object (like identity) shouldn’t be derived from how we label it or what we know about it – the label or description isn’t a characteristic of the identity itself (the map is not the territory, etc.).

Keith Wiley: I would essentially agree that identity shouldn’t merely be a convention of how we arbitrarily label things (i.e., that labeling grants or determines identity), but rather the reverse, that we are likely to label things so as to indicate how we perceive their identity. The question is, does our perception of identity indicate truth, which we then label, or does our perception determine or choose identity, which we then label? I would like to think reality is more objective than that, that there are at least some aspects of identity that aren’t merely our choices, but rather traits of the world that we discover, observe, and finally label.




A Taxonomy and Metaphysics of Mind-Uploading
The Fallacy of Favouring Gradual Replacement Mind Uploading Over Scan-and-Copy (ResearchGate):

The Endurance/Perdurance Distinction by Neil McKinnon
Endurantism and Perdurantism – for a discussion of three different things these terms have been taken to mean:


Perdure – remain in existence throughout a substantial period of time; persisting in virtue of having both temporal and spatial parts (alternatively the thesis that objects are four dimensional and have temporal parts)
Endure – being wholly present at all times at which it exists (endurance is distinct from perdurance in that endurance involves strict identity, while perdurance involves a looser unity relation (genidentity))
Genidentity – is an existential relationship underlying the genesis of an object from one moment to the next.
Gunk – In mereology, an area of philosophical logic, the term gunk applies to any whole whose parts all have further proper parts. That is, a gunky object is not made of indivisible atoms or simples. Because parthood is transitive, any part of gunk is itself gunk.


Keith Wiley has a Ph.D. in Computer Science from the University of New Mexico and was one of the original members of MURG, the Mind Uploading Research Group, an online community dating to the mid-90s that discussed issues of consciousness with an aim toward mind-uploading. He has written multiple book chapters, peer-reviewed journal articles, and magazine articles, in addition to several essays on a broad array of topics, available on his website. Keith is also an avid rock-climber and a prolific classical piano composer.

Also see Jennifer Wang’s (Stanford University) video as she introduces us to the Ship of Theseus puzzle that has bedeviled philosophy since the ancient Greeks. She tells the Ship of Theseus story, and draws out the more general question behind it: what does it take for an object to persist over time? She then breaks this ancient problem down with modern clarity and rigor.

Longevity Day with Aubrey de Grey!

“Longevity Day” (based on the UN International Day of Older Persons – October 1) is a day of support for biomedical aging and longevity research. This has been a worldwide international campaign successfully adopted by many longevity activist groups. In this interview Aubrey de Grey lends support to Longevity Day and covers a variety of points, including:
– Updates: on progress at SENS (achievements, and predictions based on current support), funding campaigns, the recent Rejuvenation Biotechnology conference, and exciting news in health and medicine as it applies to longevity
– Advocacy: What advocates for longevity research need to know
– Effective Altruism and Science Philanthropy – giving with impact – cause prioritization and uncertainty – how to go about measuring estimates on impacts of dollars or units of effort given to research organizations
– Action: High impact areas, including more obvious steps to take, and some perhaps less obvious/underpopulated areas
– Leveraging Longevity Day: What to do in preparation to leverage Longevity Day? Once one has celebrated Longevity Day, what to do next?


Here is the Longevity Day Facebook Page.


Anders Sandberg -The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (as would be the case for AI or brain emulation), extremely rapid growth also becomes likely.
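The point about small increasing returns can be illustrated with a toy growth model (my own illustrative sketch, not a model taken from the paper): with dT/dt = T^(1+ε), any ε > 0, however small, turns ordinary exponential growth into a finite-time singularity.

```python
# Toy endogenous-growth sketch (an illustrative assumption, not from the paper):
# dT/dt = T**(1 + eps). With eps = 0 growth is merely exponential, but for any
# eps > 0 the analytic solution T(t) = (T0**(-eps) - eps*t)**(-1/eps) diverges
# at the finite time t* = T0**(-eps) / eps.
def blowup_time(T0, eps):
    """Time at which the solution starting from T(0) = T0 goes to infinity."""
    return T0 ** (-eps) / eps

# Even tiny increasing returns (small eps) yield a finite-time singularity;
# shrinking eps pushes the crisis point further out but never to infinity.
print(blowup_time(1.0, 0.10))  # 10.0
print(blowup_time(1.0, 0.01))  # 100.0
```

Shrinking ε delays but never removes the blow-up, matching the paper’s observation that even small increasing returns generically produce radical growth.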

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical.” – Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economical growth and social change) (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind such as humanity being succeeded by posthuman or artificial intelligences,
a punctuated equilibrium transition or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness cause increasing payoffs but increase instability. Eventually this produces a crisis, beyond which point the dynamics must be different.
(Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))
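The inflexion-point reading can be made concrete with a quick numerical check (a generic logistic curve with parameters of my own choosing, not ones from Modis or the Extropian FAQ): the logistic’s rate of change peaks exactly at its midpoint, after which growth decelerates.

```python
import math

# Logistic curve x(t) = K / (1 + exp(-r*(t - t0))). Its inflexion point is at
# t0, where the curve reaches half its carrying capacity K and the rate of
# change switches from accelerating to decelerating.
def logistic(t, K=1.0, r=1.0, t0=0.0):
    return K / (1.0 + math.exp(-r * (t - t0)))

def growth_rate(t, dt=1e-6):
    # Central finite-difference estimate of dx/dt.
    return (logistic(t + dt) - logistic(t - dt)) / (2.0 * dt)

# The rate of change peaks at the inflexion point t0 = 0 ...
assert growth_rate(0.0) > growth_rate(-1.0)
assert growth_rate(0.0) > growth_rate(1.0)
# ... where the curve sits at exactly half its carrying capacity.
assert abs(logistic(0.0) - 0.5) < 1e-12
```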

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future:

Zombie Rights

Andrew Dun provides an interesting discussion on the rights of sentient entities. Drawing inspiration from quantum complementarity, he defends a complementary notion of ontological dualism, countering zombie hypotheses. Sans zombie concerns, ethical discussions should therefore focus on assessing consciousness purely in terms of the physical-functional properties of any putatively conscious entity.

Below is the video of the presentation:

At the 12:17 mark, Andrew introduces the notion of supervenience (where high-level properties supervene on low-level properties) – do zombies have supervenience? Is consciousness merely a supervenient property that supervenes on characteristics of brain states? If so, we should be able to compute whether a system is conscious (if we know its full physical characterization). The zombie hypothesis suggests that consciousness does not logically supervene on the physical.

Slides for the presentation can be found on SlideShare!

Andrew Dun spoke at the Singularity Summit. Talk title : “Zombie Rights”.

Andrew’s research interest relates to both the ontology and ethics of consciousness. Andrew is interested in the ethical significance of consciousness, including the way in which our understanding of consciousness impacts our treatment of other humans, non-human animals, and artifacts. Andrew defends the view that the relationship between physical and conscious properties is one of symmetrical representation, rather than supervenience. Andrew argues that on this basis we can confidently approach ethical questions about consciousness from the perspective of ‘common-sense’ materialism.

Andrew also composes and performs original music.

Extending Life is Not Enough

Dr Randal Koene covers the motivation for human technological augmentation and reasons to go beyond biological life extension.

“Competition is an inescapable occurrence in the animate and even in the inanimate universe. To give our minds the flexibility to transfer and to operate in different substrates bestows upon our species the most important competitive advantage.” I am a neuroscientist and neuroengineer who is currently the Science Director at Foundation 2045 and the Lead Scientist at Kernel. I head the organization which serves as the outreach and roadmapping organization for the development of substrate-independent minds (SIM), and I previously participated in the ambitious and fascinating efforts of the nanotechnology startup Halcyon Molecular in Silicon Valley.

Slides of talk online here
Video of Talk:

Points discussed in the talk:
1. Biological Life-Extension is Not Enough – Randal A. Koene
3. No one wants to live longer just to live longer. Motivation informs Method.
4. Having an Objective, a Goal, requires that you have some notion of success.
5. Creating (intelligent) machines that have the capabilities we do not — is not as good as being able to experience them ourselves… Imagine… creating/playing music. Imagine… being the kayak. Imagine… perceiving the background radiation of the universe.
6. Is being out of the loop really your goal?
7. Near-term goals: Extended lives without expanded minds are in conflict with creative development.
8. Social
9. Gene survival is extremely dependent on an environment — it is unlikely to survive many changes. Worse… gene replication does not sustain that which we care most about!
10. Is CTGGAGTAC better than GTTGACTGAC? We are vessels for that game — but for the last 10,000 years something has been happening!
11. Certain future experiences are desirable, others are not — these are your perspectives, the memes you champion… Death keeps stealing our champions, our experts.
12. Too early to do uploading? – No! The big perspective is relevant now. We don’t like myopic thinking in our politicians, let’s not be myopic about world issues ourselves.
14. Life-extension in biology may increase the fragility of our species & civilization… More people? – Resources. Fewer births? – Fewer novel perspectives. Expansion? – Environmental limitation.
15. Biological life-extension within the same evolutionary niche = further specialization to the same performance – “over-training” in conflict with generalization
16. Aubrey de Grey: Ultimately, desires “uploading”
18. Significant biological life-extension is incredibly difficult and beset by threats. Reality vs. popular perception.
19. Life-extension and Substrate-Independence are two different objectives
20. Developing out of a “catchment area” (S. Gildert) may demand iterations of exploration — and exploration involves risk. Hard-wired delusions and drives. What would an AGI do? Which types of AGI would exist in the long run?
21. “Uploading” is just one step of many — but a necessary step — for a truly advanced species
22. Thank You

There is a short promo-interview for the Singularity Summit AU 2012 conference that Adam Ford did with Dr. Koene, though unfortunately the connection was a bit unreliable, which is noticeable in the video:

Most of those videos are available through the SciFuture YouTube channel: