Posts

Australian Humanist Convention 2017

Ethics In An Uncertain World

After an incredibly successful convention in Brisbane in May 2016, the Humanist Society of Victoria, together with the Council of Australian Humanist Societies, will host Australian Humanists at the start of April to discuss and learn about some of the most pressing issues facing society today, and how Humanists and the worldview we hold can help shape a better future for all of society.

Official Conference Link | Get Tickets Here | Gala Dinner | FAQs | Meetup Link | Google Map Link

Lineup

AC Grayling – Humanism, the individual and society
Peter Singer – Public Ethics in the Trump Era
Clive Hamilton – Humanism and the Anthropocene
Meredith Doig – Interbelief presentations in schools
Monica Bini – World-views in the school curriculum
James Fodor – ???
Adam Ford – Humanism & Population Axiology

SciFuture supports and endorses the Humanist Convention 2017 in its efforts to explore ethics founded in enlightenment values, march against prejudice, and help make sense of the world. SciFuture affirms that human beings (and indeed many other nonhuman animals) have the right to flourish, be happy, and give meaning and shape to their own lives.

Peter Singer wrote about Taking Humanism Beyond Speciesism – Free Inquiry, 24, no. 6 (Oct/Nov 2004), pp. 19-21

AC Grayling’s talk on Humanism at the British Humanist Association:

 

Narratives, Values & Progress – Anders Sandberg

Anders Sandberg discusses where our ideas and values come from, mindsets for progress, and the fact that we are living in a unique era of technological change – one in which, importantly, we are aware that we are living through great change. Is there a direction in ethics? Is morality real? If so, how do we find it? What will our descendants think of our morals today – will they be weird to future generations?

One of the interesting things about our current world is that we are aware that a lot of ideas about morality are things going on in our culture and in our heads – and are not just the laws of nature – that’s very useful. Some people of course think that there is some ideal or best moral system – and maybe there is – but we’re not very good at finding it. It might turn out that in the long run if there is some kind of ultimate sensible moral – we’re going to find it – but that might take a very long time and might take brains much more powerful than ours – it might turn out that all sufficiently advanced alien civilizations eventually figure out the right thing to do – and do it. But it could also turn out actually when we meet real advanced aliens they’re going to be as confused about philosophy as we are – that’s one of the interesting things to find out about the universe. – Anders Sandberg

Points covered:
– Technologies of the Future
– Efficient sustainability, in-vitro meat
– Living in an era of awareness of change
– Values have changed over time
– Will our morals be weird to future generations?
– Where is ethics going?
– Does moral relativism adequately explain reductions in violence?
– Is there an ideal ‘best moral system’? and if so, how do we find it?

Transcript

I grew up reading C.S. Lewis and his Narnia stories. And at that time I didn’t get what was going on – I think it was when I was finally reading one of them that I started thinking ‘this seems like an allegory’, and then sort of realising ‘a Christian allegory’, and then I felt ‘oh dear!’. I had of course to read all of them. In the end I was quite cross at Lewis for trying to foist that kind of stuff on children. He of course was unashamed – he argued in his letters that of course, if you are a Christian you should make Christian stories and try to tell them – but then of course he hides everything – so instead of having Jesus he turns him into a lion and so on.
But there’s an interesting problem in general of course: ‘where do we get our ideas from?’. I grew up in boring Sweden in the 70s, so I had to read a lot of science fiction in order to get excited. That science fiction reading made me interested in technology & science and made it real – but it also, accidentally, gave me a sort of libertarian outlook. I realised that maybe our current rules for society are arbitrary – we could change them into something better. And aliens are people too, as well as robots. So in the end that kind of education also set me on my path.
So in general what we read as children affects us in sometimes very subtle ways – I was reading one book about technologies of the future by a German researcher – today of course it is very laughably 60s-ish – very much thinking about cybernetics and the big technologies, fusion reactions and rockets – but it also got me thinking ‘we can change the world completely’ – there is no reason to think that it works out that only 700 billion people can live on earth – we could rebuild it to house trillions – it wouldn’t be a particularly nice world, it would be nightmarish by our current standards – but it would actually be possible to do. It’s rather that we have a choice of saying ‘maybe we want to keep our world rather small scale with just a few billion people on it’. Others would say ‘we can’t even sustain a few billion people on the planet – we’re wearing out the biosphere’ – but again that’s based on a certain assumption about how the biosphere functions – we can produce food more efficiently than we currently do. If we went back to being primitive hunter-gatherers we would need several hundred earths to sustain us all, simply because hunter-gatherers need enormous areas of land in order to get enough prey to hunt down in order to survive. Agriculture is much more effective – and we can go far beyond that – things like hydroponics and in-vitro meat might actually in the future mean that we would say it’s absolutely disgusting, or rather weird, to cultivate farmland or eat animals! ‘Why would you actually eat animals? Well, only disgusting people back in the stone age did that.’ In that stone age they were using silicon, of course.
Dividing history into ages is very fraught, because when you declare that ‘this is the atomic age’ you make certain assumptions – so the atomic age didn’t turn out so well because people lost their faith in their friend the atom – the space age didn’t turn out to be a space age because people found better ways of using the money – in a sense we went out into space prematurely, before there was a good business case for it. The computer age on the other hand – well, now computers are so everywhere that we could just as well call it the air age – it’s everywhere. Similarly the internet – that’s just the latest innovation – probably as people in the future look back we’re going to call it something completely different – just like we want to divide history into things like the Medieval age, or the Renaissance, which are not always more than just labels. What I think is unique about our era in history is that we’re very aware that we are living in a changing world; that it is not going to be the same in 100 years, that it is going to be utterly, utterly different from what it was 100 years back. In so many historical eras people have been thinking ‘oh, we’re on the cusp of greatness or a great disaster’. But we actually have objectively good reasons for thinking things cannot remain as they were. There are too many people, too many brains, too much technology – and a lot of these technologies are very dangerous and very transformative – so if we can get through this without too much damage to ourselves and the planet, I think we are going to have a very interesting future. But it’s also probably going to be a future that is somewhat alien from what we can foresee.
If we took an ancient Roman and put him into modern society he would be absolutely shocked – not just by our technology, but by our values. We are very clear that compassion is a good virtue, and he would say the opposite and say ‘compassion is for old ladies’ – and of course a medieval knight would say ‘you have no honor in the 21st century’ and we’d say ‘oh yes, honor killings and all that – that’s bad; yeah, actually a lot of those medieval honorable ideals are actually immoral by our standards’. So we should probably expect that our moral standards are going to be regarded by the future as equally weird and immoral – and this is of course a rather chilling thought, because our personal information is going to be available in the future to our descendants, or even to ourselves as older people with different values – a lot of the advanced technologies we are worrying about are going to be wielded by our children, or by an older version of ourselves, in ways we might not approve of – but they’re going to say ‘yes, but we’ve actually figured out the ethics now’.
The question of where ethics is going is of course a really interesting one in itself – people say ‘oh yes, it’s just relative, it’s just societies making up rules to live by’ – but I do think we have learned a few things – the reduction in violence over historical eras shows that we are getting something right. I don’t think that moral relativists could just say that ‘violence is arbitrarily sometimes good and sometimes bad’ – I think it’s very clearly a bad thing. So I think we are making moral progress in some sense – we are figuring out better ways of thinking about morality. One of the interesting things about our current world is that we are aware that a lot of ideas about morality are things going on in our culture and in our heads – and are not just the laws of nature – that’s very useful. Some people of course think that there is some ideal or best moral system – and maybe there is – but we’re not very good at finding it. It might turn out that in the long run if there is some kind of ultimate sensible moral – we’re going to find it – but that might take a very long time and might take brains much more powerful than ours – it might turn out that all sufficiently advanced alien civilizations eventually figure out the right thing to do – and do it. But it could also turn out actually when we meet real advanced aliens they’re going to be as confused about philosophy as we are – that’s one of the interesting things to find out about the universe.


Suffering, and Progress in Ethics – Peter Singer

Suffering is generally bad – Peter Singer (a hedonistic utilitarian) and most Effective Altruists would agree with this. Though in addressing the need for suffering today, Peter acknowledges that, as we are presently constituted, suffering is useful as a warning sign (e.g. against further injury). But what about the future?
What if we could eliminate suffering?
Perhaps in the future we will have advanced technological interventions to warn us of danger that will be functionally similar to suffering, but without the nasty raw feels.
Peter Singer, like David Pearce, suggests that if we could eliminate the suffering of non-human animals capable of suffering – perhaps in some way that is difficult to imagine now – this would be a good thing.

Video Interview:

I would see no reason to regret the absence of suffering. – Peter Singer
Peter sees no reason to lament the disappearance of suffering, though perhaps people may say it would be useful for understanding the literature of the past. Perhaps there are some indirect uses for suffering – but on balance Peter thinks that the elimination of suffering would be an amazingly good thing to do.

Singer thinks it is interesting to speculate what might be possible for the future of human beings, if we do survive over the longer term. To what extent are we going to be able to enhance ourselves? In particular to what extent are we going to be more ethical human beings – which brings to question ‘Moral Enhancement’.

Have we made progress in ethics? Peter argues in his book ‘The Expanding Circle‘ that our species has expanded the circle of its ethical concern, and more recently Steven Pinker took up this idea in ‘The Better Angels of Our Nature’ – this expansion has happened over the millennia, beyond the initial tribal group, then to a national level, beyond ethnic groups to all human beings, and now we are starting to extend moral concern to non-human sentient beings as well.

Steven Pinker thinks that increases in our ethical consideration are bound up with increases in our intelligence (as proposed by James Flynn – the Flynn Effect – though this research is controversial: the gains could reflect actual increases in intelligence or just a greater ability to do abstract reasoning) and with increases in our ability to reason abstractly.

As mentioned earlier, there are other ways in which we may increase our ability and tendency to be more moral (see Moral Enhancement), and in the future we may discover genes that influence us to think more about others and to dwell less on negative emotions like anger or rage. It is hard to say whether people will use these kinds of moral enhancers voluntarily, or whether we will need state policies to encourage people to use moral enhancers in order to produce better communities – and there are a lot of concerns here that people may legitimately have about how the moral enhancement project takes place. Peter sees this as a fascinating prospect and thinks it would be great to be around to see how things develop over the next couple of centuries.

Note that Steven Pinker said of Peter’s book:

Singer’s theory of the expanding circle remains an enormously insightful concept, which reconciles the existence of human nature with political and moral progress. It was also way ahead of its time. . . . It’s wonderful to see this insightful book made available to a new generation of readers and scholars. – Steven Pinker

The Expanding Circle

Abstract: What is ethics? Where do moral standards come from? Are they based on emotions, reason, or some innate sense of right and wrong? For many scientists, the key lies entirely in biology–especially in Darwinian theories of evolution and self-preservation. But if evolution is a struggle for survival, why are we still capable of altruism?

In his classic study The Expanding Circle, Peter Singer argues that altruism began as a genetically based drive to protect one’s kin and community members but has developed into a consciously chosen ethic with an expanding circle of moral concern. Drawing on philosophy and evolutionary psychology, he demonstrates that human ethics cannot be explained by biology alone. Rather, it is our capacity for reasoning that makes moral progress possible. In a new afterword, Singer takes stock of his argument in light of recent research on the evolution of morality.

References:
The Expanding Circle book page at Princeton University: http://press.princeton.edu/titles/9434.html

The Flynn Effect: http://en.wikipedia.org/wiki/Flynn_effect

Peter Singer – Ethics, Evolution & Moral Progress – https://www.youtube.com/watch?v=91UQAptxDn8

For more on Moral Enhancement see Julian Savulescu’s and others writings on the subject.

Subscribe to this Channel: http://youtube.com/subscription_center?add_user=TheRationalFuture

Science, Technology & the Future: http://scifuture.org

Should We Re-Engineer Ourselves to Phase Out our Violent Nature?

David Pearce reflects on the motivation for human enhancement to phase out our violent nature. Do we want to perpetuate the states of experience which are beholden to our violent default biological imperatives… or re-engineer ourselves?

Crudely speaking – and inevitably this is very crudely speaking – nature designed men, males, to be hunters and warriors – and we still have, to a very large degree, a hunter/warrior psychology. This is why men are fascinated by conflict & violence – why we enjoy watching competitive sports.
Now although ordinary everyday life for many of us in the world no longer involves the kind of endemic violence that was once the case (goodness knows how many deaths one will witness on screen in the course of a lifetime), one still enjoys violence and quite frequently watches men being very nasty towards each other – competing against each other.
Do we want to perpetuate these states of mind indefinitely? Or do we want to re-engineer ourselves? – David Pearce


Peter Singer & David Pearce on Utilitarianism, Bliss & Suffering

Moral philosophers Peter Singer & David Pearce discuss some of the long term issues with various forms of utilitarianism, the future of predation and utilitronium shockwaves.

Topics Covered

Peter Singer

– long term impacts of various forms of utilitarianism
– Consciousness
– Artificial Intelligence
– Reducing suffering in the long run and in the short term
– Practical ethics
– Pre-implantation genetic screening to reduce disease and low mood
– Lives today are worth the same as lives in the future – though uncertainty must be brought to bear in deciding how one weighs up the importance of life
– The Hedonistic Imperative and how people react to it
– Correlation of high hedonic set points with productivity
– Existential risks and global catastrophic risks
– Closing factory farms

David Pearce

– Veganism and reducetarianism
– Red meat vs white meat – many more chickens are killed per ton of meat than cattle
– Valence research
– Should one eliminate suffering? And should we eliminate emotions of happiness?
– How can we answer the question of how far suffering is present in different life forms (like insects)?

Talk of moral progress can make one sound naive. But even the darkest cynic should salute the extraordinary work of Peter Singer to promote the interests of all sentient beings. – David Pearce
 

 


Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I had a discussion with John Danaher on the Hedonistic Imperative & Superintelligence. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their futures, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford writing on Grounds for Moral Status).

As time goes on, the notion of strong artificial intelligence leading to Superintelligence (which may herald something like an Intelligence Explosion) and ideas like the Hedonistic Imperative become less like sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints gives me a paradoxical feeling of triumph and disempowerment.

John’s concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act. There is a danger that the outcome of HI or an Intelligence Explosion may result in sentient life being made very happy forever, but unable to make choices – a focus on a future based entirely on bliss whilst ignoring other aspects of what makes for a valuable or worthwhile existence.

So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.

If the argument for moral agency is completely toppled by the argument against free will then I can see why there would be no reason for it – and that bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.

Also, the idea that moral agency and novelty should be ranked as auxiliary to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that they (agency/novelty) could easily be swapped out for most non-optimal moral agents in the quest for less suffering and more bliss, which I find troublesome.
The idea that, upon evaluating grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, when the time comes should we be committed to resigning all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have a meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?

There probably should be some more discussion on trade-offs between moral agency, peak experience and novelty.

Discussion in this video here starts at 24:02

Below is the whole interview with John Danaher:

Wireheading with David Pearce

Is the Hedonistic Imperative equivalent to wire-heading?
People are often concerned about the future being a cyberpunk dystopia where people are hard-wired into pleasure centers, smacked out like lotus-eating milksops devoid of meaningful existence. Does David Pearce’s Hedonistic Imperative entail a future where we are all in thrall to permanent experiential orgasms – intravenously hotwired into our pleasure centers via some kind of soma-like drug, turning us into blissful idiots?

Adam Ford: I think some people often conflate or distill the Hedonistic Imperative to mean ‘wireheading’ – what do you (think)?

David Pearce: Yes, I mean, clearly if one does argue that we’re going to phase out the biology of suffering and live out lives of perpetual bliss then it’s very natural to assimilate this to something like ‘wireheading’ – but for all sorts of reasons I don’t think wireheading (i.e. intracranial self-stimulation of the reward centers and its pharmacological equivalents) is a plausible scenario for our future. Not least because there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children.
I think a much more credible scenario is the idea that we’re going to re-calibrate the hedonic treadmill and allow ourselves and our future children to enjoy lives based on gradients of intelligent bliss. And one of the advantages of re-calibration rather than straightforward hedonic maximization is that by urging recalibration one isn’t telling people they ought to give up their existing preferences or values: if your hedonic set-point (i.e. your average state of wellbeing) is much higher than it is now, your quality of life will really be much higher – but it doesn’t involve any sacrifice of the values you hold most dear.
To put it rather simplistically – clearly where one lies on the hedonic axis will impose serious cognitive biases: someone who is, let’s say, depressive or prone to low mood will have a very different set of biases from someone who is naturally cheerful. But nonetheless, so long as we aim for a motivational architecture of gradients of bliss, it doesn’t entail giving up anything you want to hold onto. I think that’s really important, because a lot of people will be worried that if, yes, we do enter into some kind of secular paradise, it will involve giving up their normal relationships, their ordinary values and what they hold most dear. Re-calibration does not entail this (wireheading).

Adam Ford: That’s interesting – people think that, you know, as soon as you turn on the Hedonistic Imperative you are destined for a very narrow set of values – that could be just one peak experience being replayed over and over again – in some narrow local maximum.

David Pearce: Yes – I suppose one thinks of (kind of) crazed wirehead rats – in fairness, if one does imagine orgasmic bliss, most people don’t complain that their orgasms are too long (and I’m not convinced that there is something desperately wrong with orgasmic bliss that lasts weeks, months, years or even centuries) – but one needs to examine the wider sociological picture – and ask ‘is it really sustainable for us to become blissed out as distinct from blissful’.

Adam Ford: Right – and by blissed out you mean something like the lotus eaters found in the Odyssey?

David Pearce: Yes, I mean clearly it is one version of paradise and bliss – they call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. It seems that, crudely speaking, motivation (which is mediated by the mesolimbic dopamine system) and raw bliss (which is associated with mu-opioid activation of our twin hedonic hotspots) – the axes are orthogonal. Now they’re very closely interrelated (thanks to natural selection) – but in principle we can amplify one or damp down the other. Empirically, at any rate, it seems to be the case today that the happiest people are also the most motivated – they have the greatest desires – I mean, this runs counter to the old Buddhist notion that desire is suffering – but if you actually look at people who are depressive or chronically depressed, quite frequently they have an absence of desire or motivation. But the point is we should be free to choose – yes, it is potentially hugely liberating – this control over our reward architecture, our pleasure circuitry, that biotechnology offers – but let’s get things right. We don’t want to mess things up and produce the equivalent of large numbers of people on heroin – and this is why I so strenuously urge the case for re-calibration – in the long run genetically, in the short run by various non-recreational drugs.

Clearly it is one version of paradise and bliss – they call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. – David Pearce

Adam Ford: Ok… People may be worried that re-calibrating someone is akin to disrupting the continuum of self (or this enduring metaphysical ego) – so the person at the other end wouldn’t really be a continuation of the person at the beginning. What do you think? How would you respond to that sort of criticism?

David Pearce: It depends how strict one’s conception of personal identity is. Now, would you be worried to learn tomorrow that you had won the national lottery (for example)? It would transform your lifestyle, your circle of friends – would this trigger the anxiety that the person who was living the existence of a multi-millionaire wasn’t really you? Well, perhaps you should be worried about this – but on the whole most people would be relatively relaxed at the prospect. I would see this more as akin to a small child growing up – yes, in one sense as one becomes a mature adult one has killed the toddler, or lost the essence of what it was to be a toddler – but only in a very benign sense. And by aiming for re-calibration and hedonic enrichment rather than maximization, there is much less of a risk of losing anything that you think is really valuable or important.

Adam Ford: Okay – well that’s interesting – we’ll talk about value. In order not to lose forms of value – even if you don’t use them (the values) much – you might have some values that you leave up in the attic to gather dust, like toys that you don’t play with anymore but might want to pick up once in a thousand years or what not. How do you then preserve complexity of value while also achieving high hedonic states – do you think they can go hand in hand? Or do you think preserving complexity of value reduces the likelihood that you will be able to achieve optimal hedonic states?

David Pearce: As an empirical matter – and I stress empirical here – it seems to be the case that the happiest people are responsive to the broadest possible range of rewarding stimuli – it tends to be depressives who get stuck in a rut. So other things being equal – by re-calibrating ourselves, becoming happy and then superhappy – we can potentially, at any rate, enrich the complexity of our lives with a range of rewarding stimuli – it makes getting stuck in a rut less likely, both for the individual and for civilization as a whole.
I think one of the reasons we are afraid of some kind of loss of complexity is that the idea of heaven – including the traditional Christian heaven – can sound a bit monotonous, and for happy people at least one of the experiences they find most unpleasant is boredom. But essentially it should be a matter of choice – yes, someone who is very happy to, let’s say, listen to a piece of music or contemplate art should be free to do so, and not be forced into leading a very complex or complicated life – but equally, folk who want to do a diverse range of things – well, that’s feasible too.

For all sorts of reasons I don’t think wireheading… is a plausible scenario for our future. Not least there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children. – David Pearce

– video/audio interview continues on past 10:00

The Knowledge Argument Applied to Ethics

A group of interested AI enthusiasts have been discussing Engineering Machine Consciousness in Melbourne for over a decade. In a recent interview with Jamais Cascio on Engineering Happy People & Global Catastrophic Risks, we discussed the benefits of amplifying empathy without the nasty side effects (possibly through cultural progress or technological intervention – a form of moral enhancement). I have been thinking further about how an agent might think and act differently if it had no ‘raw feels’ – no self-intimating conscious experience.

I posted to the Hedonistic Imperative Facebook group:

Are the limitations of empathy in humans distracting us from the in-principle benefits of empathy?
The side effects of empathy in humans include increased distrust of the outgroup – and limits to the number of people we humans can feel strong empathy for – though in principle the experience of understanding another person’s condition from their perspective seems quite useful, at least while we are still motivated by our experience.
But what of the future? Are our post-human descendants likely to be motivated by their ‘experiences of’ as well as their ‘knowledge about’ in making choices regarding others and about the trajectories of civilizational progress?

I wonder whether all ‘experience of’ can be understood in terms of ‘knowledge about’ – can the whole of ethics be explained without being experienced, through knowledge about without any experience of? This reminds me of the Mary’s Room/Knowledge Argument* thought experiment. I leaned towards the position that Mary, with a fully working knowledge of the visual system and the relevant neuroscience, wouldn’t ‘learn’ anything new when walking out of the grey-scale room and into the colourful world outside.
Imagine an adaptation of the Mary’s Room thought experiment – for the time being let’s call it Autistic Savant Angela’s Condition – in that:

class 1

Angela is a brilliant ethicist and neuroscientist (an expert in bioethics, neuroethics etc), who (for whatever reason) is an autistic savant with congenital insensitivity to pain and pleasure – she can’t feel pain or suffering at all, nor experience what it is like to be someone else who does experience pain or suffering – she has no intuition of ethics. Throughout her whole life she has been forced to investigate the field of ethics and the concepts of pleasure, bliss, pain and suffering through theory alone. She has a complete mechanical understanding of empathy and of the brain states of subjects participating in various trolley thought experiments and hundreds of permutations of Milgram experiments, and is an expert in philosophies of ethics from Aristotle to Hume to Sidgwick etc. Suddenly there is a medical breakthrough in gene-therapy that would guarantee normal human capacity to feel, without impairing cognitive ability at all. If Angela were to undergo this gene-therapy, would she learn anything more about ethics?

class 2

Same as above except Angela has no concept of other agents.

class 3

Same as class 2 except Angela is a superintelligent AI, and instead of gene-therapy the AI receives a software/hardware upgrade that allows it access to the ‘fire in the equations’, to experience. Would the AI learn anything more about ethics? Would it act in a more ethical way? Would it produce more ethical outcomes?

 

Implications

Should an effective altruist support a completely dispassionate approach to cause prioritization?

If we were to build an ethical superintelligence – would having access to visceral experiences (i.e. pain/pleasure) change its ethical outcomes?
If a superintelligence were to perform Coherent Extrapolated Volition, or Coherent Aggregated Volition, would the kind of future which it produced differ if it could experience? Would likelihoods of various ethical outcomes change?

Is experience required to fully understand ethics? Is experience required to effectively implement ethics?
Robo Brain

 

Footnotes

The Knowledge Argument Thought Experiment

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?

The Life You Can Save – Interview with Peter Singer by Adam Ford

Transcript of interview with Peter Singer on ‘The Life You Can Save‘:

I’ve been writing & thinking about the question of global poverty and what affluent people ought to do about it for more than forty years now. The first article I published on that, ‘Famine, Affluence and Morality’, came out in 1972.
But I’d never written a book on that topic until a few years ago when I was encouraged by greater interest in the question of global poverty and greater interest in the question of the obligations of the affluent, and some very positive responses I had to an article that appeared in the New York Times Sunday Magazine. So it was then that I decided that it was time for a book on this topic – I wrote “The Life You Can Save” which came out in 2009 – essentially to encourage people to think that – if you really want to think of yourself as ethical – you want to say “I’m an ethical person, I’m living an ethical life” – it’s not enough to say “I don’t kill.. I don’t steal.. I don’t cheat.. I don’t assault people..” and so on and all those sorts of ‘thou shalt not’ kind of commands – but if you are an affluent person, or even just a middle class person in an affluent country (that’s undoubtedly affluent by global standards) then you’re not really living an ethical life unless you’re prepared to share some of the good fortune that you have with people who are less fortunate – and you know, just the fact that you are able to live in an affluent country means that you are fortunate.
One of the points that I try to emphasize is that the difference in the amount of wealth that people who are middle class or above in affluent societies have, as compared to people in extreme poverty, is so great that things that are really pretty frivolous in our lives (things we spend money on that are not significant) could make a difference in the lives of other people. So, we spend money on all kinds of things, from going to a cafe and having coffee, or buying a bottle of water when we could drink water out of a tap, to going on vacations, making sure we have the latest iPhones – a whole lot of different things that really are not at all life changing to us. But the amount that we spend on those things could be life changing to people in extreme poverty who are not able to afford minimal health care for themselves or their family, who don’t get enough to eat, or perhaps are not able to send their children to school because they need them to work in the fields – a whole lot of different things that we can help people with.
I think we ought to be helping – I think to do nothing is wrong. Now, you can then debate “well, how much ought we to be doing?” which is something I do discuss in the book – but I think that it’s clear that we ought to be setting aside some of what we have to donate to organizations that are effective and that have been proven to be effective in helping the global poor.
So that’s really what the book is about – and perhaps the other thing I ought to say is that many people feel that somehow world poverty is like a bottomless pit that we are pouring money down into – but I think that’s a mistake – I think that it’s clear that we are making a difference – that we’ve dramatically reduced the number of people who die from preventable poverty-related illnesses. Even just since my book has been published (in the last 5 years for instance) the number of children who die each year (children under 5) has dropped from nearly 10 million a year to 6.6 million a year – so, you know, that’s real progress in just 5 years – and we can make progress with poverty if we are careful about the way in which we do it and fund organizations that are open to evidence about what works and what doesn’t.

Many people feel that somehow world poverty is like a bottomless pit that we are pouring money down into – but I think that’s a mistake – I think that it’s clear that we are making a difference.

Peter Singer – The Life You Can Save – Video interview by Adam Ford


The Life You Can Save

The Life You Can Save: Acting Now to End World Poverty is a 2009 book by Australian philosopher Peter Singer. The author argues that citizens of affluent nations are behaving immorally if they do not act to end the poverty they know to exist in developing nations.

The book is focused on giving to charity, and discusses philosophical considerations, describes practical and psychological obstacles to giving, and lists available resources for prospective donors (e.g. charity evaluators). Singer concludes the book by proposing a minimum ethical standard of giving.

Christian Barry and Gerhard Overland (both from the Centre for Applied Philosophy and Public Ethics) described the widespread acceptance of the notion that “the lives of all people everywhere are of equal fundamental worth when viewed impartially”. In their review of the book in the Journal of Bioethical Inquiry, they then wonder why “the affluent do so little, and demand so little of their governments, while remaining confident that they are morally decent people who generally fulfil their duties to others?” The reviewers agree with Singer, and say they see a conflict between the behaviours of the affluent and the claims of the affluent to being morally decent people. The reviewers also discuss other practical ways to fight poverty.

Support The Life You Can Save – Historically, every dollar donated to The Life You Can Save typically moves an additional three to four dollars to our effective charities. Our recommended charities provide services and support to men, women, and children in needy communities across the globe. Your support for our work means that you'll get the most out of your charitable giving!

Can I Really Save Lives?  The good news, in fact the great news, is that you can! While there are endless problems in the world that you as an individual cannot solve, you can actually save lives and reduce unnecessary suffering and premature death. Should you do it? Watch this video and decide for yourself. The information on our website can help you give most effectively to become a life saver.

Also see the Life You Can Save Infograph Video

Buy The Life You Can Save on Amazon

The Life You Can Save Facebook Page

Philosophy & Effective Altruism – Peter Singer, David Pearce, Justin Oakley, Hilary Greaves

Panelists ([from left to right] Hilary Greaves, Peter Singer, Justin Oakley & David Pearce) discuss what they believe are important philosophical aspects of the Effective Altruism movement – from practical philosophy we can use today to possible endpoints implied by various frameworks in applied ethics. The panelists navigate through a wide range of fascinating and important topics that aspiring effective altruists and anyone who is philosophically inclined will find both useful and enjoyable.

Panel moderated by Kerry Vaughan.

Panel Transcript

(in progress)

0:35 Question “What are the hot topics in philosophy that might change what effective altruists might focus on?”
Hilary Greaves – So, my answer to that – the one I’m most directly familiar with is the one I already mentioned in my talk earlier. I think that population ethics can make a massive difference to a significant proportion of the things we should worry about as EAs. In particular, the thing that gives rise to this is the situation where – at the moment we have lots of moral philosophers who really like their ivory tower abstract theorising – those people have done a lot of discussing this abstract question of ‘ok, what is my theory of population ethics’ – then at the real-world extreme we have lots of people engaging directly with real-world issues, thinking ‘ok, how should we do our cost-benefit analysis, for example for family planning’. We have a big gap in the middle – we don’t really have a well-developed community of people who are both in touch with the background moral philosophy and interested in applying it to the real world. So because there is that gap I think there’s a lot of low-hanging fruit at the moment for people who have a background in moral philosophy and who are also plugged into the EA community to build these bridges from theory to practice and see what it all means for the real world.

01:56 Peter Singer – I actually agree with that – that population ethics is an important area – and another place that connects to what Hilary was talking about earlier is the existential risk questions. Because we need to think about – suppose that the world were destroyed – is what’s so bad about that the fact that 7.5 billion people have lost their lives, or is it the loss of the untold billions of people that Nick Bostrom has estimated (10^56 or something, I don’t know – some vastly unimaginable number) of possible future lives that could have been good, and that would be lost? So that seems to me to be a real issue. If you want something that’s a little more nitty gritty, towards what we are talking about today – another issue is: how do we get a grip on the nature and extent of animal suffering? (something that we will be talking a bit about in a moment) It’s really just hard to say – David just talked about factory farming and the vast amount of billions of animals suffering in factory farms – and I totally agree that this is a top priority issue – but in terms of assessing priorities, how do we compare the suffering of a chicken in a factory farm to, let’s say, a mother who has to watch her child dying of malaria? Is there some way we can get a better handle on that?

03:23 Justin Oakley – For me, I think one of the key issues in ethics at the moment that bears on Effective Altruism is what’s known as the ‘situationist critique of virtue ethics’ – trying to understand not only how having a better character helps people to act well but also what environment they are in. Subtle environmental influences might either support or subvert a person acting well – in particular having the virtue, perhaps, of liberality – so there is lots of interesting work being done on that – some people think that debate is beginning to die down – but it seems to be just starting up again with a couple of new books that are coming out looking at a new twist on that. So for me, I’m keen to do that – in my own work at Monash I teach a lot of health professionals, so I’m keen to look at what environmental influences there are on doctors that impede them from having a therapeutic relationship with patients – not only thinking about how to help them be more virtuous – which is not the only thing I aim to do with the doctors that I teach, but I hope to have that influence to some extent.


The Utilitarianism at the End of the Universe – Panelists Hilary Greaves, Peter Singer, Justin Oakley & David Pearce laugh about the possible ‘end games’ of classical utilitarianism.

04:25 David Pearce – Yes, well I’m perhaps slightly out of touch now with analytic philosophy – but one issue that I haven’t really seen tackled by analytic philosophy is this disguised implication of classical utilitarianism for what we ought to be doing, which is essentially optimising matter and energy throughout the world – and perhaps the accessible universe – for maximum bliss. A questioner earlier was asking ‘Well, as a negative utilitarian, do you accept this apparently counter-intuitive consequence that one ought to wipe out the world to prevent the suffering of one person?’ But if one is a classical utilitarian then it seems to be a disguised consequence that it’s not good enough to aim merely for a civilization in which all sentient beings could flourish and enjoy gradients of intelligent bliss – instead one must go on remorselessly until matter and energy is nothing but pure orgasmic bliss.
05:35 Peter Singer – I find ‘remorselessly’ an unusual term to describe it
[laughter…] 5:40 David Pearce – Well, I think this is actually rather an appealing idea to me but I know not everyone shares this intuition
[laughter…] 5:50 Question “So Peter I’d be interested to know if you have thoughts on whether you think that’s an implication of classical utilitarianism” – Peter Singer – If I accept that implication? Well – David and I talked about this a little bit earlier over lunch – I sort of, I guess, maybe I accept it, but I have difficulty in grasping what it is to talk about converting matter and energy into bliss – unless we assume that there are going to be conscious minds that are going to be experiencing this bliss. And of course David would then very strongly agree that conscious minds not only have to experience bliss but also not experience any suffering, certainly, and presumably minimize anything that they experience other than bliss (because that’s not converting matter and energy into bliss) – so if what I’m being asked to imagine is a universe with a vast number of conscious minds that are experiencing bliss – yeah, maybe I do accept that implication.

06:43 Question “So this is a question mostly for Justin – effective altruists often talk about doing the ‘most good’ – should EAs be committed to doing ‘enough good’ instead of the ‘most good’?”

Justin Oakley – Yeah, that’s a good question to ask – one of the things I didn’t emphasize in my talk on virtue ethics is that standardly virtue ethics holds that we should strive to be an excellent human being, which can fall a little way short of producing the maximum good. So if you produce an excellent level of liberality, or perhaps of good or benefit to someone else, then that’s enough for virtue. I guess in some of the examples I was giving in my talk you might choose a partner who – although you’re not the ultimate power couple (you are the sub-optimal power couple) – you are nonetheless attracted to very strongly – from the perspective of effective altruism it might sound like you are doing the wrong thing – but intuitively it doesn’t seem to be wrong. That’s one example.

07:51 Hilary Greaves – Surely – I mean, I can say a bit about what a similar issue looks like from a more consequentialist perspective – when people think of consequentialism they sometimes assume that consequentialists think there is a moral imperative to maximize: you absolutely have to do the most good and anything less than that is wrong. But it’s worth emphasising that not all consequentialists think that at all – not all consequentialists think that it’s even helpful to buy into this language of right and wrong. So you don’t have to be a virtue ethicist to feel somewhat alienated from a moral imperative to do the absolute most good – you could just think something like the following: never mind right and wrong, never mind what I should do vs what I’m not allowed to do. I might just want to make the world better – I might just think I could order all the things I could possibly do in terms of better or worse. And then, you know, if I give away 1% of my income, that’s better than giving away nothing – if I give 5% that’s better than giving away 1% – if I give away 50% that’s better than anything less – but I don’t have to impose some sharp cutoff and say that I’m doing something morally wrong if I give less than that. I think if we think in this way then we tend to alienate both ourselves and other people less – there’s something very alienating about holding up a very high standard and saying that anybody, including ourselves, who falls short of this very high standard is doing something wrong with a capital ‘R’.

Panel including Peter Singer

09:14 Peter Singer – So, in a way I agree – you are talking about a spectrum view (where we have a spectrum from black to white – or maybe we don’t want to use those terms for it) from one end to the other, and you’re somewhere on the spectrum and you try and work your way further up the spectrum perhaps – which, I’m reasonably comfortable with that. Another way of looking at it (and this goes back to something that Sidgwick also said) is that we ought to be clearer about distinguishing when we regard the act as the right act or the wrong act, and when we regard it appropriate to praise or blame people for doing it. And these are really separate things – especially if you are a consequentialist, because praising or blaming someone is an act – and you ought to only do that if it will have good consequences. So suppose that we think that somebody in particular personal circumstances ought to be giving 50% of his earnings away – but he is only giving 10% – but he is living in a society like ours in which, by giving 10%, he is giving more than 99.99% of people are giving. Well, to blame him saying ‘oh well, you’re only giving 10% – you should be giving more’ looks like it’s going to be very counterproductive – you really want to praise him in front of other people so that more people will give 10%. So I think if we understand it that way, that’s another way of looking at it – I’m not sure if it’s necessarily better than the spectrum view that you [Hilary] were suggesting – but it is another way of, if you like, softening this black-and-white morality idea that it is either right or wrong.

10:54 Question “A question for Hilary – You mention that one might find the ‘uncertainty’ that your talk generates kind of paralyzing – but you mention that wasn’t your conclusion – can you expand on why this (paralysis) is not your conclusion?”
Hilary Greaves

 

  • Transcribed by Adam Ford

Biographies

Hilary Greaves is an Associate Professor in Philosophy at Somerville College, University of Oxford. Her current research focusses on various issues in ethics. Hilary’s interests include: foundational issues in consequentialism (‘global’ and ‘two-level’ forms of consequentialism), the debate between consequentialists and contractualists, aggregation (utilitarianism, prioritarianism and egalitarianism), moral psychology and selective debunking arguments, population ethics, the interface between ethics and economics, the analogies between ethics and epistemology, and formal epistemology. Hilary currently (2014-17) directs the project Population Ethics: Theory and Practice, based at the Future of Humanity Institute, and funded by The Leverhulme Trust.

Peter Singer is an Australian moral philosopher. He is currently the Ira W. DeCamp Professor of Bioethics at Princeton University, and a Laureate Professor at the Centre for Applied Philosophy and Public Ethics at the University of Melbourne. He specializes in applied ethics and approaches ethical issues from a secular, utilitarian perspective. He is known in particular for his book, Animal Liberation (1975), a canonical text in animal rights/liberation theory. For most of his career, he supported preference utilitarianism, but in his later years became a classical or hedonistic utilitarian, when co-authoring The Point of View of the Universe with Katarzyna de Lazari-Radek.

Justin Oakley is an Associate Professor at Monash University – the School of Philosophical, Historical & International Studies, and Centre for Human Bioethics. Justin has been part of the revival of the ethical doctrine known as virtue ethics, an Aristotelian doctrine which has received renewed interest in the past few decades. Oakley is particularly well known for his work on professional ethics and also the so-called ‘problem’ of friendship. The problem of friendship looks at how a strict application of impartialist ethical doctrines, such as utilitarianism and Kantianism, conflicts with our notions of friendship or ‘true friendship’.

David Pearce is a British philosopher who promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”.