Posts

Towards the Abolition of Suffering Through Science

An online panel focusing on reducing suffering & paradise engineering through the lens of science.

Panelists: Andrés Gómez Emilsson, David Pearce, Brian Tomasik and Mike Johnson

Note: consider skipping to 10:19 to bypass some audio problems at the beginning!


Topics

Andrés Gómez Emilsson: Qualia computing (how to use consciousness for information processing, and why that has ethical implications)

  • How do we know consciousness is causally efficacious? Because we are conscious and evolution can only recruit systems/properties when they do something (and they do it better than the available alternatives).
  • What is consciousness’ purpose in animals? (Information processing.)
  • What is consciousness’ comparative advantage? (Phenomenal binding.)
  • Why does this matter for suffering reduction? Suffering has functional properties that play a role in the inclusive fitness of organisms. If we figure out exactly what role they play (by reverse-engineering the computational properties of consciousness), we can replace them with equally (or better) functioning non-conscious or positive hedonic-tone analogues.
  • What is the focus of Qualia Computing? (It focuses on basic fundamental questions and simple experimental paradigms to get at them – e.g., the computational properties of visual qualia via psychedelic psychophysics.)

Brian Tomasik:

  • Space colonization “Colonization of space seems likely to increase suffering by creating (literally) astronomically more minds than exist on Earth, so we should push for policies that would make a colonization wave more humane, such as not propagating wild-animal suffering to other planets or in virtual worlds.”
  • AGI safety “It looks likely that artificial general intelligence (AGI) will be developed in the coming decades or centuries, and its initial conditions and control structures may make an enormous impact to the dynamics, values, and character of life in the cosmos.”
  • Animals and insects “Because most wild animals die, often painfully, shortly after birth, it’s plausible that suffering dominates happiness in nature. This is especially plausible if we extend moral considerations to smaller creatures like the ~10^19 insects on Earth, whose collective neural mass outweighs that of humanity by several orders of magnitude.”
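A rough back-of-envelope check of the neural-mass comparison above (all per-brain masses here are loose illustrative assumptions, not figures from the panel):

```python
# Back-of-envelope sanity check of the insect vs human neural-mass claim.
# All figures are rough assumptions for illustration only.

N_INSECTS = 1e19          # estimated number of insects on Earth (as quoted above)
INSECT_BRAIN_G = 1e-4     # ~0.1 mg average insect brain mass (assumed)
N_HUMANS = 7e9            # approximate human population
HUMAN_BRAIN_G = 1.4e3     # ~1.4 kg average human brain mass

insect_neural_mass = N_INSECTS * INSECT_BRAIN_G   # grams
human_neural_mass = N_HUMANS * HUMAN_BRAIN_G      # grams

ratio = insect_neural_mass / human_neural_mass
print(f"insects / humans neural mass ratio ≈ {ratio:.0f}x")  # → ≈ 102x
```

Even with these deliberately conservative per-insect numbers, the collective insect neural mass comes out around two orders of magnitude above humanity’s.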

Mike Johnson:

  • If we successfully “reverse-engineer” the patterns for pain and pleasure, what does ‘responsible disclosure’ look like? Potential benefits and potential for abuse both seem significant.
  • If we agree that valence is a pattern in a dataset, what’s a good approach to defining the dataset, and what’s a good heuristic for finding the pattern?
  • What order of magnitude is the theoretical potential of mood enhancement? E.g., 2x vs 10x vs 10^10x
  • What are your expectations of the distribution of suffering in the world? What proportion happens in nature vs within the boundaries of civilization? What are counter-intuitive sources of suffering? Do we know about ~90% of suffering on Earth, or ~0.001%?
  • Valence Research, The Mystery of Pain & Pleasure.
  • Why is it such an exciting time right now to be doing valence research? Are we at a sweet spot in history in this regard? What is hindering valence research? (Examples of muddled thinking, cultural barriers, etc.?)
  • How do we use the available science to improve the QALY? GiveDirectly has used change in cortisol levels to measure effectiveness, and the EU (the European Union) evidently does something similar involving cattle. It seems like a lot of the pieces for a more biologically grounded QALY – and maybe a SQALY (Species and Quality-Adjusted Life-Year) – are available; someone just needs to put them together. I suspect this is one of the lowest-hanging, highest-leverage research fruits.
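One way to picture the kind of biologically grounded QALY and “SQALY” gestured at above (a minimal sketch; the quality and species weights are hypothetical placeholders, not an established metric):

```python
# Minimal sketch of a QALY-style calculation, extended with a hypothetical
# species weight to give a "SQALY" (Species and Quality-Adjusted Life-Year).
# The weights here are illustrative placeholders, not measured values.

def qaly(years: float, quality: float) -> float:
    """Quality-adjusted life-years: years lived scaled by a 0..1 quality weight."""
    assert 0.0 <= quality <= 1.0
    return years * quality

def sqaly(years: float, quality: float, species_weight: float) -> float:
    """Hypothetical SQALY: a QALY further scaled by a species weight,
    which could in principle be grounded in biomarkers such as cortisol."""
    return qaly(years, quality) * species_weight

# Ten years at 0.8 quality for a human (species weight 1.0):
print(round(sqaly(10, 0.8, 1.0), 2))  # → 8.0
```

The hard research problem, of course, is not the arithmetic but grounding the weights in measurable biology.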

David Pearce: The ultimate scope of our moral responsibilities. Assume for a moment that our main or overriding goal should be to minimise and ideally abolish involuntary suffering. I typically assume that (a) only biological minds suffer and (b) we are probably alone within our cosmological horizon. If so, then our responsibility is “only” to phase out the biology of involuntary suffering here on Earth and make sure it doesn’t spread or propagate outside our solar system. But Brian, for instance, has quite a different metaphysics of mind, most famously that digital characters in video games can suffer (now only a little – but in future perhaps a lot). The ramifications here for abolitionist bioethics are far-reaching.

 

Other:
– Valence research, Qualia computing (how to use consciousness for information processing, and why that has ethical implications), animal suffering, insect suffering, developing an ethical Nozick’s Experience Machine, long term paradise engineering, complexity and valence
– Effective Altruism / cause prioritization applied directly to the abolition of suffering – what are the best projects suffering reducers can work on, and what should be worked on first? (including where to donate, what research topics to prioritize, what messages to spread)

Panelists

David Pearce: http://hedweb.com/
Mike Johnson: http://opentheory.net/
Andrés Gómez Emilsson: http://qualiacomputing.com/
Brian Tomasik: http://reducing-suffering.org/

 

#hedweb #EffectiveAltruism #HedonisticImperative #AbolitionistProject

The event was held on the 10th of August 2015. Venue: The Internet.

Towards the Abolition of Suffering Through Science was hosted by Adam Ford for Science, Technology and the Future.


The Point of View of the Universe – Peter Singer

Peter Singer discusses the new book ‘The Point Of View Of The Universe – Sidgwick & Contemporary Ethics’ (By Katarzyna de Lazari-Radek and Peter Singer) He also discusses his reasons for changing his mind about preference utilitarianism.

 

Buy the book here: http://ukcatalogue.oup.com/product/97…
Bart Schultz’s (University of Chicago) review of the book: http://ndpr.nd.edu/news/49215-he-poin…

“Restoring Sidgwick to his rightful place of philosophical honor and cogently defending his central positions are obviously no small tasks, but the authors are remarkably successful in pulling them off, in a defense that, in the case of Singer at least, means candidly acknowledging that previous defenses of Hare’s universal prescriptivism and of a desire or preference satisfaction theory of the good were not in the end advances on the hedonistic utilitarianism set out by Sidgwick. But if struggles with Singer’s earlier selves run throughout the book, they are intertwined with struggles to come to terms with the work of Derek Parfit, both Reasons and Persons (Oxford, 1984) and On What Matters (Oxford, 2011), works that have virtually defined the field of analytical rehabilitations of Sidgwick’s arguments. The real task of The Point of View of the Universe — the title being an expression that Sidgwick used to refer to the impartial moral point of view — is to defend the effort to be even more Sidgwickian than Parfit, and, intriguingly enough, even more Sidgwickian than Sidgwick himself.”

Nick Bostrom: Why Focus on Existential Risk related to Machine Intelligence?

One can think of existential risk as a subcategory of global catastrophic risk: while GCRs are really bad, civilization has the potential to recover from such a global catastrophic disaster.
An existential risk is one from which there is no chance of recovery. An example of a disaster that fits the category of existential risk is human extinction, which eliminates the possibility of future [human] lives worth living. Theories of value that imply even relatively small reductions in net existential risk have enormous expected value mostly fall under population ethics that take an average or total utilitarian view of the well-being of the future of life in the universe. Since we haven’t seen any convincing evidence of life outside Earth’s gravity well, it may be that there is no advanced, technologically capable life elsewhere in the observable universe. If we value lives worth living, and lots of lives worth living, we might also value filling the uninhabited parts of the universe with lives worth living – and arguably we need an advanced, technologically capable civilization to achieve this. Hence, if humans become extinct, it may be that evolution will never again produce a life form capable of escaping the gravity well and colonizing the universe with valuable life.
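The “enormous expected value” point can be made concrete with a toy calculation (the numbers below are illustrative assumptions, not figures from Bostrom):

```python
# Toy expected-value calculation for existential risk reduction.
# POTENTIAL_FUTURE_LIVES is an illustrative assumption, not a measured figure.

POTENTIAL_FUTURE_LIVES = 1e35  # rough order of magnitude sometimes cited for
                               # lives a space-colonizing civilization could support

def expected_lives_saved(risk_reduction: float) -> float:
    """Expected future lives saved by an absolute reduction
    `risk_reduction` in net existential risk."""
    return risk_reduction * POTENTIAL_FUTURE_LIVES

# Even a one-in-a-hundred-million absolute reduction in existential risk:
print(f"{expected_lives_saved(1e-8):.1e} expected lives")  # → 1.0e+27 expected lives
```

On total-utilitarian assumptions like these, even a minuscule shift in the probability of a flourishing future swamps most other interventions in expectation.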

Here we focus on the reasons to focus on Existential Risk related to machine intelligence.

Say machine intelligence is created with a theory of value outside of, contradictory to, or simply different enough from one that values human existence, or the existence of valuable life in the universe. Also imagine that this machine intelligence could act on its values in an exacting manner – it may cause humanity to become extinct on purpose, or as a side effect of implementing its values.

The paper ‘Existential Risk Prevention as Global Priority’ by Nick Bostrom clarifies the concept of existential risk further:

Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability. http://www.existential-risk.org

Interview with Nick Bostrom on Machine Intelligence and XRisk

I had the pleasure of doing an interview with Oxford philosopher Nick Bostrom on XRisk:

Transcription of interview:

In the last couple of years we’ve been focusing quite heavily on machine intelligence, partly because it seems to raise some significant existential risks down the road, and partly because relatively little attention has been given to this risk. So when we are prioritizing what we want to spend our time researching, one variable that we take into account is: how important is this topic that we could research? But another is: how many other people are there who are already studying it? Because the more people who are already studying it, the smaller the difference that having a few extra minds focusing on that topic will make.
So, say, the topic of peace and war and how you can try to avoid international conflict is a very important topic – and many existential risks would be reduced if there were more global cooperation.
However, it is also hard to see how a very small group of people could make a substantial difference to today’s risk of arms races and wars. There are big interests involved, and so many people are already working either on disarmament and peace or on military strength, that it’s an area where it would be great to make a change – but it’s hard to make a change if you are a small number of people. Contrast this with something like the risk from machine intelligence and the risk of superintelligence.
Only a relatively small number of people have been thinking about this, and there might be some low-hanging fruit there – some insights that might make a big difference. So that’s one of the criteria.
Now, we are also looking at other existential risks, and we are also looking at things other than existential risk – we try to get a better understanding of what humanity’s situation in the world is, and so we have been thinking some about the Fermi paradox, for example, and about methodological tools that you need, like observation selection theory, and how you can reason about these things. And to some extent also more near-term impacts of technology – and of course the opportunities involved in all of this. It is always worth reminding oneself that although enormous technological powers will pose great new dangers, including existential risks, they also of course make it possible to achieve an enormous amount of good.
So one should bear in mind the opportunities as well that are unleashed with technological advance.

About Professor Nick Bostrom

Director & James Martin Research Fellow

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and a forthcoming book on Superintelligence. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

He is best known for his work in five areas: (i) the concept of existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) transhumanism, including related issues in bioethics and on consequences of future technologies; and (v) foundations and practical implications of consequentialism. He is currently working on a book on the possibility of an intelligence explosion and on the existential risks and strategic issues related to the prospect of machine superintelligence.

In 2009, he was awarded the Eugene R. Gannon Award (one person is selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed in Foreign Policy Magazine’s FP 100 Global Thinkers list of the world’s top 100 minds. His writings have been translated into more than 21 languages, and there have been some 80 translations or reprints of his works. He has done more than 500 interviews for TV, film, radio, and print media, and he has addressed academic and popular audiences around the world.

CV: http://www.nickbostrom.com/cv.pdf

Personal Web: http://www.nickbostrom.com

FHI Bio: https://www.fhi.ox.ac.uk/about/the-team/

Also consider joining the Facebook Group on Existential Risk: https://www.facebook.com/groups/ExistentialRisk

Peter Singer & David Pearce on Utilitarianism, Bliss & Suffering

Moral philosophers Peter Singer & David Pearce discuss some of the long term issues with various forms of utilitarianism, the future of predation and utilitronium shockwaves.

Topics Covered

Peter Singer

– long term impacts of various forms of utilitarianism
– Consciousness
– Artificial Intelligence
– Reducing suffering in the long run and in the short term
– Practical ethics
– Pre-implantation genetic screening to reduce disease and low mood
– Lives today are worth the same as lives in the future – though uncertainty must be brought to bear in deciding how one weighs up the importance of life
– The Hedonistic Imperative and how people react to it
– Correlation of high hedonic set points with productivity
– Existential risks and global catastrophic risks
– Closing factory farms

David Pearce

– Veganism and reducetarianism
– Red meat vs white meat – many more chickens than cattle are killed per ton of meat
– Valence research
– Should one eliminate suffering? And should we eliminate emotions of happiness as well?
– How can we answer the question of how far suffering is present in different life forms (like insects)?

Talk of moral progress can make one sound naive. But even the darkest cynic should salute the extraordinary work of Peter Singer to promote the interests of all sentient beings. – David Pearce
 

 

Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_cente…
– Science, Technology & the Future website: http://scifuture.org

Peter Singer – Ethics, Utilitarianism & Effective Altruism

Peter Singer at UMMS - Ethics Utilitarianism Effective Altruism
Peter Singer discusses Effective Altruism, including Utilitarianism as a branch of Ethics. Talk was held as a joint event between the University of Melbourne Secular Society and Melbourne University Philosophy Community.

Is philosophy, as grounds to help decide how good an action is, something you spend time thinking about?

Audio of Peter’s talk can be found here at the Internet Archive.

In his 2009 book ‘The Life You Can Save’, Singer presented the thought experiment of a child drowning in a pond before our eyes, something we would all readily intervene to prevent, even if it meant ruining an expensive pair of shoes we were wearing. He argued that, in fact, we are in a very similar ethical situation with respect to many people in the developing world: there are life-saving interventions, such as vaccinations and clean water, that can be provided at only a relatively small cost to ourselves. Given this, Singer argues that we in the west should give up some of our luxuries to help those in the world who are most in need.

If you want to do good, and want to be effective at doing good, how do you go about getting better at it?


Nick, James, and Peter Singer during Q&A

Around this central idea a new movement has emerged over the past few years known as Effective Altruism, which seeks to use the best evidence available in order to help the most people and do the most good with the limited resources that we have available. Associated with this movement are organisations such as GiveWell, which evaluates the relative effectiveness of different charities, and Giving What We Can, which encourages members to pledge to donate 10% or more of their income to effective poverty relief programs.

I was happy to get a photo with Peter Singer on the day – we organised to do an interview, and for Peter to come and speak at the Effective Altruism Global conference later in 2015.
Here you can find a number of videos I have taken at various events where Peter Singer has addressed Effective Altruism and associated philosophical angles.

New Book ‘The Point of View of the Universe – Sidgwick and Contemporary Ethics’ – by Katarzyna de Lazari-Radek and Peter Singer

Subscribe to the Science, Technology & the Future YouTube Channel

My students often ask me if I think their parents did wrong to pay the $44,000 per year that it costs to send them to Princeton. I respond that paying that much for a place at an elite university is not justified unless it is seen as an investment in the future that will benefit not only one’s child, but others as well. An outstanding education provides students with the skills, qualifications, and understanding to do more for the world than would otherwise be the case. It is good for the world as a whole if there are more people with these qualities. Even if going to Princeton does no more than open doors to jobs with higher salaries, that, too, is a benefit that can be spread to others, as long as after graduating you remain firm in the resolve to contribute a percentage of that salary to organizations working for the poor, and spread this idea among your highly paid colleagues. The danger, of course, is that your colleagues will instead persuade you that you can’t possibly drive anything less expensive than a BMW and that you absolutely must live in an impressively large apartment in one of the most expensive parts of town. – Peter Singer, The Life You Can Save: Acting Now to End World Poverty, London, 2009, pp. 138–139

 

Playlist of video interviews and talks by Peter Singer:

 

Science, Technology & the Future