Nick Bostrom: Why Focus on Existential Risk related to Machine Intelligence?
One can think of existential risk as a subcategory of global catastrophic risk: while GCRs are very bad, civilization has the potential to recover from a global catastrophic disaster.
An existential risk is one from which there is no chance of recovery. The paradigm example is human extinction, which forecloses all future [human] lives worth living. The theories of value which imply that even relatively small reductions in net existential risk have enormous expected value mostly come from population ethics, taking an average or total utilitarian view of the well-being of future life in the universe. Since we have seen no convincing evidence of life beyond Earth's gravity well, it may be that there is no technologically capable life elsewhere in the observable universe. If we value lives worth living, and lots of them, we might also value filling the uninhabited parts of the universe with lives worth living, and arguably an advanced, technologically capable civilization is needed to achieve this. Hence, if humans become extinct, evolution may never again produce a life form capable of escaping the gravity well and filling the universe with valuable life.
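On a total utilitarian view, the expected-value claim can be made concrete with a back-of-the-envelope sketch (the figures below are illustrative assumptions, not estimates taken from Bostrom's paper):

```latex
% Back-of-the-envelope sketch; V and \Delta p are illustrative assumptions.
% Let V be the value of the whole future conditional on avoiding existential
% catastrophe, and \Delta p a small reduction in the probability of catastrophe.
\[
  \Delta \mathrm{EV} \;=\; \Delta p \cdot V
\]
% For example, with V \approx 10^{16} lives worth living (a deliberately
% conservative, Earth-bound figure) and \Delta p = 10^{-8}:
\[
  \Delta \mathrm{EV} \;\approx\; 10^{-8} \times 10^{16} \;=\; 10^{8}
  \ \text{expected lives worth living.}
\]
```

Even under much more modest assumptions about the size of the future, the same arithmetic makes tiny reductions in existential risk look very valuable in expectation.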
Here we focus on the reasons to pay particular attention to existential risk related to machine intelligence.
Suppose a machine intelligence is created with a theory of value that lies outside of, contradicts, or is simply different enough from one that values human existence, or the existence of valuable life in the universe. Imagine also that this machine intelligence could act on its values in an exacting manner: it might cause humanity to become extinct on purpose, or as a side effect of implementing its values.
The paper ‘Existential Risk Prevention as Global Priority’ by Nick Bostrom clarifies the concept of existential risk further:
Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.
Source: http://www.existential-risk.org
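The maxipok rule mentioned in the abstract can be glossed as: maximise the probability of an "OK outcome", where an OK outcome is any outcome that avoids existential catastrophe. A rough decision-rule sketch (my paraphrase, not the paper's formal statement):

```latex
% Informal sketch of maxipok (a paraphrase, not the paper's formal statement).
% Among available actions a, prefer the one maximising the probability of an
% "OK outcome" O, i.e. an outcome in which no existential catastrophe occurs:
\[
  a^{*} \;=\; \arg\max_{a}\; P(O \mid a)
\]
```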
Interview with Nick Bostrom on Machine Intelligence and XRisk
I had the pleasure of doing an interview with Oxford philosopher Nick Bostrom on XRisk:
Transcription of interview:
In the last couple of years we've been focusing quite heavily on machine intelligence, partly because it seems to raise some significant existential risks down the road, and partly because relatively little attention has been given to this risk. So when we are prioritizing what we want to spend our time researching, one variable that we take into account is how important the topic is that we could research. But another is how many other people are already studying it, because the more people who are already studying it, the smaller the difference that having a few extra minds focusing on that topic will make.
So, say, the topic of peace and war and how to avoid international conflict is a very important topic, and many existential risks would be reduced if there were more global cooperation.
However, it is also hard to see how a very small group of people could make a substantial difference to today's risk of arms races and wars. There are big interests involved, and so many people are already working on disarmament and peace, or on military strength, that it is an area where it would be great to make a change, but hard to do so with a small number of people. Contrast that with something like the risk from machine intelligence and the risk of superintelligence.
Only a relatively small number of people have been thinking about this, and there might be some low-hanging fruit there: some insights that could make a big difference. So that's one of the criteria.
Now, we are also looking at other existential risks, and at things other than existential risk. We try to get a better understanding of what humanity's situation in the world is, so we have been thinking about the Fermi Paradox, for example, and about the methodological tools you need, like observation selection theory and how you can reason about these things. And to some extent also the more near-term impacts of technology, and of course the opportunities involved in all of this. It is always worth reminding oneself that although enormous technological powers will pose great new dangers, including existential risks, they also make it possible to achieve an enormous amount of good.
So one should bear this in mind: the opportunities as well that are unleashed by technological advance.
About Professor Nick Bostrom
Director & James Martin Research Fellow
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and a forthcoming book on Superintelligence. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
He is best known for his work in five areas: (i) the concept of existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) transhumanism, including related issues in bioethics and on consequences of future technologies; and (v) foundations and practical implications of consequentialism. He is currently working on a book on the possibility of an intelligence explosion and on the existential risks and strategic issues related to the prospect of machine superintelligence.
In 2009, he was awarded the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed in the FP Top 100 Global Thinkers list, Foreign Policy magazine's list of the world's top 100 minds. His writings have been translated into more than 21 languages, and there have been some 80 translations or reprints of his works. He has done more than 500 interviews for TV, film, radio, and print media, and he has addressed academic and popular audiences around the world.
CV: http://www.nickbostrom.com/cv.pdf
Personal Web: http://www.nickbostrom.com
FHI Bio: https://www.fhi.ox.ac.uk/about/the-team/
Also consider joining the Facebook Group on Existential Risk: https://www.facebook.com/groups/ExistentialRisk