Posts

The Point of View of the Universe – Peter Singer

Peter Singer discusses the new book ‘The Point of View of the Universe – Sidgwick and Contemporary Ethics’ (by Katarzyna de Lazari-Radek and Peter Singer). He also discusses his reasons for changing his mind about preference utilitarianism.


Buy the book here: http://ukcatalogue.oup.com/product/97…

Bart Schultz’s (University of Chicago) review of the book: http://ndpr.nd.edu/news/49215-he-poin…

“Restoring Sidgwick to his rightful place of philosophical honor and cogently defending his central positions are obviously no small tasks, but the authors are remarkably successful in pulling them off, in a defense that, in the case of Singer at least, means candidly acknowledging that previous defenses of Hare’s universal prescriptivism and of a desire or preference satisfaction theory of the good were not in the end advances on the hedonistic utilitarianism set out by Sidgwick. But if struggles with Singer’s earlier selves run throughout the book, they are intertwined with struggles to come to terms with the work of Derek Parfit, both Reasons and Persons (Oxford, 1984) and On What Matters (Oxford, 2011), works that have virtually defined the field of analytical rehabilitations of Sidgwick’s arguments. The real task of The Point of View of the Universe — the title being an expression that Sidgwick used to refer to the impartial moral point of view — is to defend the effort to be even more Sidgwickian than Parfit, and, intriguingly enough, even more Sidgwickian than Sidgwick himself.”

Nick Bostrom: Why Focus on Existential Risk related to Machine Intelligence?

One can think of existential risk as a subcategory of global catastrophic risk – while GCRs are very bad, civilization has the potential to recover from a global catastrophic disaster.
An existential risk, by contrast, is one from which there is no chance of recovery. An example of a disaster that fits this category is human extinction, which forecloses the possibility of future [human] lives worth living. The theories of value which imply that even relatively small reductions in net existential risk have enormous expected value mostly fall under population ethics – those that take an average or total utilitarian view of the well-being of future life in the universe.  Since we haven’t seen any convincing evidence of life outside Earth’s gravity well, it may be that there is no technologically capable life elsewhere in the observable universe.  If we value lives worth living – and lots of lives worth living – we might also value filling the uninhabited parts of the universe with lives worth living, and arguably we need an advanced, technologically capable civilization to achieve this.  Hence, if humans become extinct, it may be that evolution will never again produce a life form capable of escaping the gravity well and colonizing the universe with valuable life.
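The expected-value claim above can be illustrated with a toy calculation. All figures here are illustrative assumptions for the sake of the argument, not numbers from Bostrom's work:

```python
# Toy expected-value calculation for existential-risk reduction.
# Both figures below are illustrative assumptions, not established estimates.

future_lives = 10**16        # assumed number of potential future lives worth living
risk_reduction = 10**-6      # assumed (tiny) reduction in net existential risk

# Under a total utilitarian view, the expected value of the reduction is
# (probability of averting extinction) x (value of the future thereby saved).
expected_lives_saved = risk_reduction * future_lives

# Even a one-in-a-million risk reduction yields an expectation on the
# order of ten billion lives under these assumptions.
print(expected_lives_saved)
```

The point is not the particular numbers but the structure: because the value at stake is so large, even minuscule probability shifts dominate the expected-value calculation.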

Here we consider the reasons to focus on existential risk related to machine intelligence in particular.

Suppose machine intelligence is created with a theory of value outside of, contradictory to, or simply different enough from one that values human existence, or the existence of valuable life in the universe.  Also imagine that this machine intelligence could act on its values in an exacting manner – it might cause humanity to become extinct on purpose, or as a side effect of implementing its values.

The paper ‘Existential Risk Prevention as Global Priority‘ by Nick Bostrom clarifies the concept of existential risk further:

Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability. http://www.existential-risk.org

Interview with Nick Bostrom on Machine Intelligence and XRisk

I had the pleasure of doing an interview with Oxford philosopher Nick Bostrom on XRisk:

Transcription of interview:

In the recent couple of years we’ve been focusing quite heavily on machine intelligence, partly because it seems to raise some significant existential risks down the road, and partly because relatively little attention has been given to this risk. So when we are prioritizing what we want to spend our time researching, one variable that we take into account is: how important is this topic that we could research? But another is: how many other people are already studying it? Because the more people who are already studying it, the smaller the difference that having a few extra minds focusing on that topic will make.
So, say, the topic of peace and war – how you can try to avoid international conflict – is a very important topic, and many existential risks would be reduced if there were more global cooperation.
However, it is also hard to see how a very small group of people could make a substantial difference to today’s risk of arms races and wars. There are big interests involved, and so many people already working either on disarmament and peace and/or on military strength, that while it would be great to make a change in this area, it is hard to do so. Contrast this with something like the risk from machine intelligence and the risk of superintelligence.
Only a relatively small number of people have been thinking about this, and there might be some low-hanging fruit there – some insights that might make a big difference. So that’s one of the criteria.
Now, we are also looking at other existential risks, and we are also looking at things other than existential risk. We try to get a better understanding of what humanity’s situation is in the world, and so we have been thinking about the Fermi paradox, for example, and about some of the methodological tools that you need, like observation selection theory – how you can reason about these things. And to some extent also the more near-term impacts of technology, and of course the opportunities involved in all of this. It is always worth reminding oneself that although enormous technological powers will pose great new dangers, including existential risks, they also of course make it possible to achieve an enormous amount of good.
So one should bear in mind the opportunities as well that are unleashed with technological advance.

About Professor Nick Bostrom

Director & James Martin Research Fellow

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and a forthcoming book on Superintelligence. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

He is best known for his work in five areas: (i) the concept of existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) transhumanism, including related issues in bioethics and on consequences of future technologies; and (v) foundations and practical implications of consequentialism. He is currently working on a book on the possibility of an intelligence explosion and on the existential risks and strategic issues related to the prospect of machine superintelligence.

In 2009, he was awarded the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed in the FP 100 Global Thinkers list, the Foreign Policy Magazine’s list of the world’s top 100 minds. His writings have been translated into more than 21 languages, and there have been some 80 translations or reprints of his works. He has done more than 500 interviews for TV, film, radio, and print media, and he has addressed academic and popular audiences around the world.

CV: http://www.nickbostrom.com/cv.pdf

Personal Web: http://www.nickbostrom.com

FHI Bio: https://www.fhi.ox.ac.uk/about/the-team/

Also consider joining the Facebook Group on Existential Risk: https://www.facebook.com/groups/ExistentialRisk

Peter Singer & David Pearce on Utilitarianism, Bliss & Suffering

Moral philosophers Peter Singer & David Pearce discuss some of the long term issues with various forms of utilitarianism, the future of predation and utilitronium shockwaves.

Topics Covered

Peter Singer

– long term impacts of various forms of utilitarianism
– Consciousness
– Artificial Intelligence
– Reducing suffering in the long run and in the short term
– Practical ethics
– Pre-implantation genetic screening to reduce disease and low mood
– Lives today are worth the same as lives in the future – though uncertainty must be brought to bear in deciding how one weighs up the importance of life
– The Hedonistic Imperative and how people react to it
– Correlation of high hedonic set points with productivity
– Existential risks and global catastrophic risks
– Closing factory farms

David Pearce

– Veganism and reducitarianism
– Red meat vs white meat – many more chickens than cattle are killed per ton of meat produced
– Valence research
– Should one eliminate suffering? And should we eliminate emotions of happiness?
– How can we answer the question of how far suffering is present in different life forms (like insects)?

Talk of moral progress can make one sound naive. But even the darkest cynic should salute the extraordinary work of Peter Singer to promote the interests of all sentient beings.

– David Pearce

Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_cente…
– Science, Technology & the Future website: http://scifuture.org

Peter Singer – Ethics, Utilitarianism & Effective Altruism

Peter Singer at UMMS - Ethics Utilitarianism Effective Altruism
Peter Singer discusses Effective Altruism, including Utilitarianism as a branch of Ethics. Talk was held as a joint event between the University of Melbourne Secular Society and Melbourne University Philosophy Community.

Is philosophy, as a grounds to help decide how good an action is, something you spend time thinking about?

Audio of Peter’s talk can be found here at the Internet Archive.

In his 2009 book ‘The Life You Can Save’, Singer presented the thought experiment of a child drowning in a pond before our eyes, something we would all readily intervene to prevent, even if it meant ruining an expensive pair of shoes we were wearing. He argued that, in fact, we are in a very similar ethical situation with respect to many people in the developing world: there are life-saving interventions, such as vaccinations and clean water, that can be provided at only a relatively small cost to ourselves. Given this, Singer argues that we in the west should give up some of our luxuries to help those in the world who are most in need.

If you want to do good, and want to be effective at doing good, how do you go about getting better at it?


Nick, James, and Peter Singer during Q&A

Around this central idea a new movement has emerged over the past few years known as Effective Altruism, which seeks to use the best evidence available in order to help the most people and do the most good with the limited resources that we have available. Associated with this movement are organisations such as GiveWell, which evaluates the relative effectiveness of different charities, and Giving What We Can, which encourages members to pledge to donate 10% or more of their income to effective poverty relief programs.

I was happy to get a photo with Peter Singer on the day – we organised to do an interview, and for Peter to come and speak at the Effective Altruism Global conference later in 2015.
Here you can find a number of videos I have taken at various events where Peter Singer has addressed Effective Altruism and associated philosophical angles.

New Book ‘The Point of View of the Universe – Sidgwick and Contemporary Ethics’ – by Katarzyna de Lazari-Radek and Peter Singer

Subscribe to the Science, Technology & the Future YouTube Channel

My students often ask me if I think their parents did wrong to pay the $44,000 per year that it costs to send them to Princeton. I respond that paying that much for a place at an elite university is not justified unless it is seen as an investment in the future that will benefit not only one’s child, but others as well. An outstanding education provides students with the skills, qualifications, and understanding to do more for the world than would otherwise be the case. It is good for the world as a whole if there are more people with these qualities. Even if going to Princeton does no more than open doors to jobs with higher salaries, that, too, is a benefit that can be spread to others, as long as after graduating you remain firm in the resolve to contribute a percentage of that salary to organizations working for the poor, and spread this idea among your highly paid colleagues. The danger, of course, is that your colleagues will instead persuade you that you can’t possibly drive anything less expensive than a BMW and that you absolutely must live in an impressively large apartment in one of the most expensive parts of town.

– Peter Singer, The Life You Can Save: Acting Now to End World Poverty, London, 2009, pp. 138-139


Playlist of video interviews and talks by Peter Singer:


Science, Technology & the Future