Philosophy & Effective Altruism – Peter Singer, David Pearce, Justin Oakley, Hilary Greaves
Panelists ([from left to right] Hilary Greaves, Peter Singer, Justin Oakley & David Pearce) discuss what they believe are important philosophical aspects of the Effective Altruism movement – from practical philosophy we can use today to possible endpoints implied by various frameworks in applied ethics. The panelists navigate a wide range of fascinating and important topics that aspiring effective altruists, and anyone who is philosophically inclined, will find both useful and enjoyable.
Panel moderated by Kerry Vaughan.
Panel Transcript
(in progress)
Hilary Greaves – So, my answer to that – the one I’m most directly familiar with is the one I already mentioned in my talk earlier. I think that population ethics can make a massive difference to a significant proportion of the things we should worry about as EAs. In particular, the thing that gives rise to this is the situation where, at the moment, we have lots of moral philosophers who really like their ivory-tower abstract theorising – those people have done a lot of discussing of the abstract question of ‘ok, what is my theory of population ethics’ – then at the real-world extreme we have lots of people engaging directly with real-world issues, thinking, ok, how should we do our cost-benefit analysis, for example on family planning. We have a big gap in the middle – we don’t really have a well-developed community of people who are both in touch with the background moral philosophy and interested in applying it to the real world. Because there is that gap, I think there’s a lot of low-hanging fruit at the moment for people who have a background in moral philosophy and who are also plugged into the EA community to build these bridges from theory to practice and see what it all means for the real world.
01:56 Peter Singer – I actually agree with that – that population ethics is an important area – and another place that connects to what Hilary was talking about earlier is the existential risk questions. Because we need to think about – suppose that the world were destroyed – is what’s so bad about that the fact that 7.5 billion people have lost their lives, or is it the loss of the untold billions of possible future lives that Nick Bostrom talks about (10^56 or something, I don’t know – some vastly unimaginable number) that could have been good, and that would be lost? So that seems to me to be a real issue. If you want something a little more nitty-gritty towards what we are talking about today, another issue is: how do we get a grip on the nature and extent of animal suffering? (something we will be talking a bit about in a moment) It’s really just hard to say – David just talked about factory farming and the billions of animals suffering in factory farms – and I totally agree that this is a top-priority issue – but in terms of assessing priorities, how do we compare the suffering of a chicken in a factory farm to, let’s say, a mother who has to watch her child dying of malaria? Is there some way we can get a better handle on that?
03:23 Justin Oakley – For me, I think one of the key issues in ethics at the moment that bears on Effective Altruism is what’s known as the ‘situationist critique of virtue ethics’ – trying to understand not only how having a better character helps people to act well but also what environment they are in. Subtle environmental influences might either support or subvert a person acting well – in particular having, perhaps, the virtue of liberality – so there is lots of interesting work being done on that. Some people think that debate is beginning to die down, but it seems to be just starting up again with a couple of new books coming out that take a new twist on it. So for me, I’m keen to pursue that – in my own work at Monash I teach a lot of health professionals, so I’m keen to look at what environmental influences there are on doctors that impede them from having a therapeutic relationship with patients – not only thinking about how to help them be more virtuous – which is not the only thing I aim to do with the doctors that I teach, but I hope to have that influence to some extent.
04:25 David Pearce – Yes, well, I’m perhaps slightly out of touch now with analytic philosophy – but one issue that I haven’t really seen tackled by analytic philosophy is a disguised implication of classical utilitarianism about what we ought to be doing, which is essentially optimising matter and energy throughout the world – and perhaps the accessible universe – for maximum bliss. A questioner earlier was asking, ‘Well, as a negative utilitarian, do you accept this apparently counter-intuitive consequence that one ought to wipe out the world to prevent the suffering of one person?’ But if one is a classical utilitarian then it seems to be a disguised consequence that it’s not good enough to aim merely for a civilisation in which all sentient beings could flourish and enjoy gradients of intelligent bliss – instead one must go on remorselessly until matter and energy are nothing but pure orgasmic bliss.
05:35 Peter Singer – I find ‘remorselessly’ a rather unusual term to describe it
[laughter…]
05:40 David Pearce – Well, I think this is actually rather an appealing idea to me, but I know not everyone shares this intuition
[laughter…]
05:50 Question “So Peter, I’d be interested to know if you have thoughts on whether you think that’s an implication of classical utilitarianism” – Peter Singer – If I accept that implication? Well, David and I talked about this a little bit earlier over lunch. I sort of, I guess, maybe I accept it, but I have difficulty grasping what it is to talk about converting matter and energy into bliss – unless we assume that there are going to be conscious minds that are going to be experiencing this bliss. And of course David would very strongly agree that conscious minds not only have to experience bliss but also certainly not experience any suffering, and presumably minimize anything that they experience other than bliss (because that’s not converting matter and energy into bliss) – so if what I’m being asked to imagine is a universe with a vast number of conscious minds that are experiencing bliss – yeah, maybe I do accept that implication.
06:43 Question “So this is a question mostly for Justin – effective altruists often talk about doing the ‘most good’ – should EAs be committed to doing ‘enough good’ instead of the ‘most good’?”
Justin Oakley – Yeah, that’s a good question to ask – one of the things I didn’t emphasize in my talk on virtue ethics is that, standardly, virtue ethics holds that we should strive to be an excellent human being, which can fall a little way short of producing the maximum good. So if you produce an excellent level of liberality, or of good or benefit to someone else, then that’s enough for virtue. In some of the examples I was giving in my talk, you might choose a partner with whom you’re not the ultimate power couple (you are the sub-optimal power couple) but to whom you are nonetheless very strongly attracted – from the perspective of effective altruism it might sound like you are doing the wrong thing, but intuitively it doesn’t seem to be wrong. That’s one example.
07:51 Hilary Greaves – Sure – I mean, something I can say a bit about is what a similar issue looks like from a more consequentialist perspective. When people think of consequentialism they sometimes assume that consequentialists think there is a moral imperative to maximize – you absolutely have to do the most good and anything less than that is wrong. But it’s worth emphasising that not all consequentialists think that at all – not all consequentialists think it’s even helpful to buy into this language of right and wrong. So you don’t have to be a virtue ethicist to feel somewhat alienated from a moral imperative to do the absolute most good – you could just think something like the following: never mind right and wrong, never mind what I should be doing versus what I’m not allowed to be doing. I might just want to make the world better – I might just think I could order all the things I could possibly do in terms of better or worse. And then, you know, if I give away 1% of my income, that’s better than giving away nothing – if I give away 5%, that’s better than giving away 1% – if I give away 50%, that’s better than anything less – but I don’t have to impose some sharp cutoff and say that I’m doing something morally wrong if I give less than that. I think if we think in this way then we tend to alienate both ourselves and other people less – there’s something very alienating about holding up a very high standard and saying that anybody, including ourselves, who falls short of this very high standard is doing something Wrong with a capital ‘W’.
09:14 Peter Singer – So, in a way I agree – you are talking about a spectrum view (where we have a spectrum from black to white – or maybe we don’t want to use those terms for it) from one end to the other, and you’re somewhere on the spectrum and you try to work your way further up it – I’m reasonably comfortable with that. Another way of looking at it (and this goes back to something that Sidgwick also said) is that we ought to be clearer about distinguishing when we regard an act as the right act or the wrong act and when we regard it as appropriate to praise or blame people for doing it. These are really separate things – especially if you are a consequentialist, because praising or blaming someone is itself an act, and you ought only to do that if it will have good consequences. So suppose we think that somebody in particular personal circumstances ought to be giving 50% of his earnings away, but he is only giving 10% – yet he is living in a society like ours in which, by giving 10%, he is giving more than 99.99% of people. Well, to blame him, saying ‘oh well, you’re only giving 10% – you should be giving more’, looks like it’s going to be very counterproductive – you really want to praise him in front of other people so that more people will give 10%. So I think if we understand it that way, that’s another way of looking at it – I’m not sure it’s necessarily better than the spectrum view that you [Hilary] were suggesting – but it is another way of, if you like, softening this black-and-white morality idea that an act is either right or wrong.
10:54 Question “A question for Hilary – You mentioned that one might find the ‘uncertainty’ that your talk generates kind of paralyzing – but you mentioned that wasn’t your conclusion – can you expand on why paralysis is not your conclusion?”
Hilary Greaves –
- Transcribed by Adam Ford
Biographies
Hilary Greaves is an Associate Professor in Philosophy at Somerville College, University of Oxford. Her current research focuses on various issues in ethics. Hilary’s interests include: foundational issues in consequentialism (‘global’ and ‘two-level’ forms of consequentialism), the debate between consequentialists and contractualists, aggregation (utilitarianism, prioritarianism and egalitarianism), moral psychology and selective debunking arguments, population ethics, the interface between ethics and economics, the analogies between ethics and epistemology, and formal epistemology. Hilary currently (2014-17) directs the project Population Ethics: Theory and Practice, based at the Future of Humanity Institute and funded by The Leverhulme Trust.
Peter Singer is an Australian moral philosopher. He is currently the Ira W. DeCamp Professor of Bioethics at Princeton University and a Laureate Professor at the Centre for Applied Philosophy and Public Ethics at the University of Melbourne. He specializes in applied ethics and approaches ethical issues from a secular, utilitarian perspective. He is known in particular for his book Animal Liberation (1975), a canonical text in animal rights/liberation theory. For most of his career he supported preference utilitarianism, but in later years he became a classical, or hedonistic, utilitarian, as set out in The Point of View of the Universe, co-authored with Katarzyna de Lazari-Radek.
Justin Oakley is an Associate Professor at Monash University, in the School of Philosophical, Historical & International Studies and the Centre for Human Bioethics. Justin has been part of the revival of virtue ethics, an Aristotelian approach which has received renewed interest in the past few decades. Oakley is particularly well known for his work on professional ethics and on the so-called ‘problem’ of friendship, which looks at how a strict application of impartialist ethical doctrines, such as utilitarianism and Kantianism, conflicts with our notions of friendship or ‘true friendship’.
David Pearce is a British philosopher who promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”.