
Moral Enhancement – Are we morally equipped to deal with humanity’s grand challenges? Anders Sandberg

The topic of Moral Enhancement is controversial (and often misrepresented); many consider it repugnant, and provocative questions arise: “Whose morals?”, “Who are the ones to be morally enhanced?”, “Will it be compulsory?”, “Won’t taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?”, “Shouldn’t people be concerned that enhancements which alter character traits might compromise their authenticity?”

Humans have a built-in capacity for learning moral systems from their parents and other people. We are not born with any particular moral [code] – but with the ability to learn it, just like we learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but it doesn’t work that well when surrounded by a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems, and that is the interesting question of moral enhancement: can we make ourselves more fit for our current world?
Anders Sandberg – Are we morally equipped for the future?
Humans have an evolved capacity to learn moral systems – we became adept at learning those that aided our survival in the ancestral environment – but are our moral instincts fit for the future?

Illustration by Daniel Gray

Let’s build some context. For millennia humans have lived in complex social structures that constrain and encourage certain types of behaviour. More recently, and for similar reasons, people go through years of education, at the end of which they are (for the most part) better able to function morally in the modern world – though this world is very different from that of our ancestors, and when considering the possibility of radical change at breakneck speed in the future, it is hard to know how humans will keep up both intellectually and ethically. This is important to consider, as the degree to which we shape the future for the good depends on how well, and how ethically, we solve the problems required to achieve change that on balance benefits humanity (and arguably all morally relevant life-forms).

Can we engineer ourselves to be more ethically fit for the future?

Peter Singer discussed how our circles of care and compassion have expanded over the years – through reason we have been able to expand our natural propensity to act morally and the circumstances in which we act morally.

We may need to expand our circle of ethical consideration to include artificial life – considering certain types of software as moral patients.

So, if we think we could use a boost in our propensity for ethical progress…

How do we actually achieve ideal Moral Enhancement?

That’s a big topic (see a list of papers on the subject of ME here) – the answers may depend on what our goals and preferences are. One idea (among many others) is to regulate the level of Oxytocin (the ‘cuddle hormone’) – though this may come with the drawback of increasing distrust of the out-group.
Since morality depends on us being able to make accurate predictions and solve complex ethical problems, ‘Intelligence Enhancement‘ could be an effective aspect of moral enhancement. 

Morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – allowing our higher-order values to govern our lower-order values – is also important; that might actually require us to literally rewire ourselves or have biochips that help us do it.
Anders Sandberg – Are we morally equipped for the future?

How we decide whether to use Moral Enhancement Therapy will be interesting – it may be needed to help solve global coordination problems; to increase the likelihood that we will, as a civilization, cooperate and cope with many known and as-yet-unrealised complex ethical quandaries as we move through times of unprecedented social and technological change.

This interview is part of a larger series that was completed in Oxford, UK late 2012.

Interview Transcript

Anders Sandberg

So humans have a kind of built-in capacity for learning moral systems from their parents and other people – we’re not born with any particular moral [code] but with the ability to learn it, just like we can learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but it doesn’t work that well when surrounded by a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems. And that is the interesting question of moral enhancement:

  • can we make ourselves more fit for our current world?
  • And what kind of fitness should we be talking about?

For example we might want to improve on altruism – that we should be welcoming to strangers. But in a big society, in a big town – of course there are going to be some strangers that you shouldn’t trust. So it’s not just blind trust you want to enhance – you actually want to enhance the ability to make careful judgements; to figure out what’s going to happen and whom you can trust. So maybe you want to enhance some other aspect – maybe the care – the circle of care – is what you want to expand.

Peter Singer pointed out that our circles of care and compassion have been slowly expanding – from our own tribe and our own gender, to other genders, to other people and eventually maybe to other species. But this is still biologically based – a lot of it is going on here in the brain and might be modified. Maybe we should artificially extend these circles of care to make sure that we actually do care about those entities we ought to be caring about. This might be a problem of course, because some of these agents might be extremely different from what we are used to.

For example machine intelligence might produce machines or software that are ‘moral patients’ – we actually ought to be caring about the suffering of software. That might be very tricky because our pattern receptors up in the brain are not very tuned for that – we tend to think that if it’s got a face and it speaks then it’s human, and then we can care about it. But who thinks about Google? Maybe we could get super-intelligences that we actually ought to care a lot about, but we can’t recognize them at all because they’re so utterly different from ourselves.

So there are some easy ways of modifying how we think and react – for example by taking a drug. The hormone oxytocin is sometimes called ‘the cuddle hormone’ – it’s released when breastfeeding and when having bodily contact with your loved one, and it generally seems to make us more altruistic; more willing to trust strangers. You can sniff it, run an economic game, and immediately see a change in response. It might also make you a bit more egocentric. It does enlarge feelings of comfort and family friendliness – except that it’s only within what you consider to be your family. So we might want to tweak that.

Similarly we might think about adding links to our brains that allow us to think in better ways. After all, morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – allowing our higher-order values to govern our lower-order values – is also important; that might actually require us to literally rewire ourselves or have biochips that help us do it.

But most important is that we get the information we need to retrain the subtle networks in the brain in order to think better. And that’s going to require something akin to therapy – it might not necessarily be about lying on a sofa and telling your psychologist about your mother. It might very well be a bit of training, a bit of cognitive enhancement, maybe a bit of brain scanning – to figure out what actually ails you. It’s probably going to look very, very different from anything Freud or anybody else envisioned for the future.

But I think in the future we’re actually going to try to modify ourselves so we’re going to be extra certain, maybe even extra moral, so we can function in a complex big world.


Related Papers

Neuroenhancement of Love and Marriage: The Chemicals Between Us

Anders contributed to the paper ‘Neuroenhancement of Love and Marriage: The Chemicals Between Us‘. The paper reviews the evolutionary history and biology of love and marriage, and examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment – so-called neuroenhancement of love. The authors examine the arguments for and against these biological interventions to influence love, and argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.

Human Engineering and Climate Change

Anders also contributed to the paper “Human Engineering and Climate Change” which argues that cognitive, moral and biological enhancement could increase human ecological sustainability.

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org

Peter Singer – Ethics, Utilitarianism & Effective Altruism

Peter Singer discusses Effective Altruism, including Utilitarianism as a branch of Ethics. The talk was held as a joint event between the University of Melbourne Secular Society and the Melbourne University Philosophy Community.

Is philosophy, as grounds to help decide how good an action is, something you spend time thinking about?

Audio of Peter’s talk can be found here at the Internet Archive.

In his 2009 book ‘The Life You Can Save’, Singer presented the thought experiment of a child drowning in a pond before our eyes, something we would all readily intervene to prevent, even if it meant ruining an expensive pair of shoes we were wearing. He argued that, in fact, we are in a very similar ethical situation with respect to many people in the developing world: there are life-saving interventions, such as vaccinations and clean water, that can be provided at only a relatively small cost to ourselves. Given this, Singer argues that we in the west should give up some of our luxuries to help those in the world who are most in need.

If you want to do good, and want to be effective at doing good, how do you go about getting better at it?


Nick, James, and Peter Singer during Q&A

Around this central idea a new movement has emerged over the past few years known as Effective Altruism, which seeks to use the best evidence available in order to help the most people and do the most good with the limited resources that we have available. Associated with this movement are organisations such as GiveWell, which evaluates the relative effectiveness of different charities, and Giving What We Can, which encourages members to pledge to donate 10% or more of their income to effective poverty relief programs.

I was happy to get a photo with Peter Singer on the day – we organised to do an interview, and for Peter to come and speak at the Effective Altruism Global conference later in 2015.
Here you can find a number of videos I have taken at various events where Peter Singer has addressed Effective Altruism and associated philosophical angles.

New Book ‘The Point of View of the Universe – Sidgwick and Contemporary Ethics‘ – by Katarzyna de Lazari-Radek and Peter Singer

Subscribe to the Science, Technology & the Future YouTube Channel

My students often ask me if I think their parents did wrong to pay the $44,000 per year that it costs to send them to Princeton. I respond that paying that much for a place at an elite university is not justified unless it is seen as an investment in the future that will benefit not only one’s child, but others as well. An outstanding education provides students with the skills, qualifications, and understanding to do more for the world than would otherwise be the case. It is good for the world as a whole if there are more people with these qualities. Even if going to Princeton does no more than open doors to jobs with higher salaries, that, too, is a benefit that can be spread to others, as long as after graduating you remain firm in the resolve to contribute a percentage of that salary to organizations working for the poor, and spread this idea among your highly paid colleagues. The danger, of course, is that your colleagues will instead persuade you that you can’t possibly drive anything less expensive than a BMW and that you absolutely must live in an impressively large apartment in one of the most expensive parts of town.
Peter Singer, The Life You Can Save: Acting Now to End World Poverty, London, 2009, pp. 138-139


Playlist of video interviews and talks by Peter Singer:


Science, Technology & the Future


What is Technoprogressivism?

Rejecting the two extremes of bioconservatism and libertarian transhumanism, Hughes argues for a third way, “democratic transhumanism,” a radical form of techno-progressivism which asserts that the best possible “posthuman future” is achievable only by ensuring that human enhancement technologies are safe, made available to everyone, and respect the right of individuals to control their own bodies.

Appearing several times in Hughes’ work, the term “radical” (from Latin rādīx, rādīc-, root) is used as an adjective meaning of or pertaining to the root, or going to the root. His central thesis is that emerging technologies and radical democracy can help citizens overcome some of the root causes of inequalities of power.

The following video interview defines and describes the technoprogressive stance in biopolitics. It addresses the questions: a) What is Technoprogressivism? b) What is the history of the idea? c) What does the word mean when broken down into its parts, ‘Techno’ & ‘Progressive’? d) How does it relate to Transhumanism? e) What are the benefits (emancipatory uses etc.)? f) Who should we trust to regulate? g) What accounts for progress according to Technoprogressives? h) What are some contrasting ideological stances (e.g. Bioconservatism & Libertarian Transhumanism)?

A Definition of Technoprogressivism

James Hughes

  • Technoprogressivism is an ideological stance with roots in Enlightenment thought which focuses on how human flourishing is advanced by the convergence of technological progress and democratic social change. Technoprogressives argue that technological innovations can be profoundly empowering and emancipatory when they are democratically and transparently regulated for safety and efficacy, and then made universally and equitably available.
  • Technoprogressives maintain that accounts of “progress” should focus on ethical and social as well as scientific and technical dimensions. For most technoprogressives, then, the growth of scientific knowledge or the accumulation of technological powers will not represent the achievement of proper progress unless and until it is accompanied by a just distribution of the costs, risks, and benefits of these new knowledges and capacities. At the same time, for most technoprogressives the achievement of better democracy, greater fairness, less violence, and a wider rights culture are all desirable, but inadequate in themselves to confront the quandaries of contemporary technological societies unless and until they are accompanied by progress in science and technology to support and implement these values.
  • Technoprogressives support the rights of persons to either maintain or modify their own minds and bodies, on their own terms, through informed, consensual recourse to, or refusal of, available therapeutic or enabling biomedical technology. Technoprogressivism extends beyond cognitive liberty and morphological rights to views on safe, accountable and liberatory uses of emerging technologies such as genomic choice in reproduction, GMOs, nanotechnology, artificial intelligence, surveillance and geoengineering.


Another Interview with James Hughes – Director of IEET

In this video interview, James discusses biopolitics with a focus on Technoprogressivism and how he came to this political stance (among other things).

James J. Hughes Ph.D. is a sociologist and bioethicist teaching health policy at Trinity College in Hartford, Connecticut in the United States.
http://internet2.trincoll.edu/facProfiles/Default.aspx?fid=1004332
http://ieet.org/index.php/IEET/bio/hughes
Hughes holds a doctorate in sociology from the University of Chicago, where he served as the assistant director of research for the MacLean Center for Clinical Medical Ethics. Before graduate school he was temporarily ordained as a Buddhist monk in 1984 while working as a volunteer in Sri Lanka for the development organization Sarvodaya from 1983 to 1985.
Hughes served as the executive director of the World Transhumanist Association (which has since changed its name to Humanity+) from 2004 to 2006, and currently serves as the executive director of the Institute for Ethics and Emerging Technologies, which he founded with Nick Bostrom. He also produces the syndicated weekly public affairs radio talk show program Changesurfer Radio and contributed to the Cyborg Democracy blog. Hughes’ book Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future was published by Westview Press in November 2004.

The emergence of biotechnological controversies, however, is giving rise to a new axis, not entirely orthogonal to the previous dimensions but certainly distinct and independent of them. I call this new axis biopolitics, and the ends of its spectrum are transhumanists (the progressives) and, at the other end, the bio-Luddites or bio-fundamentalists. Transhumanists welcome the new biotechnologies, and the choices and challenges they offer, believing the benefits can outweigh the costs. In particular, they believe that human beings can and should take control of their own biological destiny, individually and collectively enhancing our abilities and expanding the diversity of intelligent life. Bio-fundamentalists, however, reject genetic choice technologies and “designer babies,” “unnatural” extensions of the life span, genetically modified animals and food, and other forms of hubristic violations of the natural order. While transhumanists assert that all intelligent “persons” are deserving of rights, whether they are human or not, the biofundamentalists insist that only “humanness,” the possession of human DNA and a beating heart, is a marker of citizenship and rights.
James Hughes, Democratic Transhumanism 2.0, 2002

Other Resources

An Overview of Biopolitics (inc Libertarian Transhumanists, Technoprogressives & Left-wing Bioconservatives): http://ieet.org/index.php/IEET/biopolitics

Wikipedia Entry: http://en.wikipedia.org/wiki/Techno-progressivism “Techno-progressivism, technoprogressivism, tech-progressivism or techprogressivism (a portmanteau combining “technoscience-focused” and “progressivism”) is a stance of active support for the convergence of technological change and social change. Techno-progressives argue that technological developments can be profoundly empowering and emancipatory when they are regulated by legitimate democratic and accountable authorities to ensure that their costs, risks and benefits are all fairly shared by the actual stakeholders to those developments”

Article on ‘What is Technoprogressive?’ by Mike Treder (March 2009): http://ieet.org/index.php/IEET/more/treder20090321
Treder says – A slightly different way to look at the word is to regard it as a portmanteau of “technology aware” and “politically progressive.” Consider these definitions:

  • Technology Aware—Follows trends in emerging technologies; often eager to acquire and master newest gadgets; knows history of technology development and cultural integration; recognizes necessity for caution and responsibility.
  • Politically Progressive—Follows trends in emerging politics, both national and global; supports better democracy, greater fairness, less violence, and wider rights; enjoys learning about and sometimes participating in political action; knows history of political development and cultural integration; recognizes necessity for caution and responsibility.

And let’s add one more definition that will help sort things out:

  • Transhumanist—Supports the use of science and technology to improve human physical and mental characteristics and capacities; regards aspects of the human condition, such as disability, suffering, disease, aging, and involuntary death as unnecessary and undesirable; looks to biotechnologies and other emerging technologies for these purposes; may believe that humans eventually will be able to transform themselves into beings with such greatly expanded abilities as to merit the label “posthuman.”

I originally posted this article on H+ Magazine in 2014.