Posts

Narratives, Values & Progress – Anders Sandberg

Anders Sandberg discusses ideas and values and where we get them from, mindsets for progress, and the fact that we are not only living in a unique era of technological change but, importantly, are aware that we are living in an era of great change. Is there a direction in ethics? Is morality real? If so, how do we find it? What will our descendants think of our morals today – will they seem weird to future generations?

One of the interesting things about our current world is that we are aware that a lot of ideas about morality are things going on in our culture and in our heads – and are not just the laws of nature – that’s very useful. Some people of course think that there is some ideal or best moral system – and maybe there is – but we’re not very good at finding it. It might turn out that in the long run, if there is some kind of ultimate sensible morality, we’re going to find it – but that might take a very long time and might take brains much more powerful than ours. It might turn out that all sufficiently advanced alien civilizations eventually figure out the right thing to do – and do it. But it could also turn out that when we meet real advanced aliens they’re going to be as confused about philosophy as we are – that’s one of the interesting things to find out about the universe. – Anders Sandberg

Points covered:
– Technologies of the Future
– Efficient sustainability, in-vitro meat
– Living in an era of awareness of change
– Values have changed over time
– Will our morals be weird to future generations?
– Where is ethics going?
– Does moral relativism adequately explain reductions in violence?
– Is there an ideal ‘best moral system’? And if so, how do we find it?

Transcript

I grew up reading C.S. Lewis and his Narnia stories. At the time I didn’t get what was going on – it was when I was finally reading one that I started thinking ‘this seems like an allegory’, and then realised ‘a Christian allegory’, and then I felt ‘oh dear!’. I had, of course, to read all of them. In the end I was quite cross at Lewis for trying to foist that kind of stuff on children. He, of course, was unashamed – he argued in his letters that of course, if you are a Christian, you should make Christian stories and try to tell them – but then he hides everything, so instead of having Jesus he turns him into a lion, and so on.
But there’s an interesting general problem of course: where do we get our ideas from? I grew up in boring Sweden in the 70s, so I had to read a lot of science fiction in order to get excited. That science fiction reading made me interested in technology and science and made them real – but it also, accidentally, gave me a sort of libertarian outlook. I realised that maybe our current rules for society are arbitrary – we could change them into something better. And aliens are people too, as well as robots. So in the end that kind of education also set me on my path.
So in general, what we read as children affects us in sometimes very subtle ways. I was reading one book about technologies of the future by a German researcher – today of course it is laughably 60s-ish, very much thinking about cybernetics and the big technologies, fusion reactors and rockets – but it also got me thinking that we can change the world completely. There is no reason to think that only 700 billion people can live on Earth – we could rebuild it to house trillions. It wouldn’t be a particularly nice world – it would be nightmarish by our current standards – but it would actually be possible to do. It’s rather that we have a choice of saying ‘maybe we want to keep our world rather small scale, with just a few billion people on it’. Others would say ‘we can’t even sustain a few billion people on the planet – we’re wearing out the biosphere’ – but again, that’s based on certain assumptions about how the biosphere functions – we can produce food more efficiently than we currently do. If we went back to being primitive hunter-gatherers we would need several hundred Earths to sustain us all, simply because hunter-gatherers need enormous areas of land in order to hunt down enough prey to survive. Agriculture is much more effective – and we can go far beyond that. Things like hydroponics and in-vitro meat might mean that in the future we would say it’s absolutely disgusting, or rather weird, to cultivate farmland or eat animals: ‘Why would you actually eat animals? Only disgusting people back in the stone age did that.’ In that stone age they were using silicon, of course.
Dividing history into ages is very fraught, because when you declare ‘this is the atomic age’ you make certain assumptions. The atomic age didn’t turn out so well because people lost their faith in their friend the atom; the space age didn’t turn out to be a space age because people found better ways of using the money – in a sense we went out into space prematurely, before there was a good business case for it. The computer age, on the other hand – well, now computers are so ubiquitous that we could just as well call it the air age: they’re everywhere. Similarly the internet – that’s just the latest innovation – probably people in the future looking back are going to call it something completely different, just as we divide history into things like the Medieval era or the Renaissance, which are not always more than labels. What I think is unique about our era in history is that we’re very aware that we are living in a changing world: one that is not going to be the same in 100 years, and that is already utterly different from what it was 100 years back. In so many historical eras people have thought ‘oh, we’re on the cusp of greatness or a great disaster’. But we actually have objectively good reasons for thinking things cannot remain as they were. There are too many people, too many brains, too much technology – and a lot of these technologies are very dangerous and very transformative. So if we can get through this without too much damage to ourselves and the planet, I think we are going to have a very interesting future. But it’s also probably going to be a future that is somewhat alien from what we can foresee.
If we took an ancient Roman and put him into modern society he would be absolutely shocked – not just by our technology, but by our values. We are very clear that compassion is a good virtue; he would say the opposite – ‘compassion is for old ladies’. And of course a medieval knight would say ‘you have no honor in the 21st century’, and we’d say ‘oh yes, honor killings and all that – that’s bad; actually a lot of those medieval honorable ideals are immoral by our standards’. So we should probably expect that our moral standards are going to be regarded by the future as equally weird and immoral – and this is of course a rather chilling thought, because our personal information is going to be available in the future, to our descendants or even to ourselves as older people with different values. A lot of the advanced technologies we are worrying about are going to be wielded by our children, or by older versions of ourselves, in ways we might not approve of – but they’re going to say ‘yes, but we’ve actually figured out the ethics now’.
The question of where ethics is going is a really interesting one in itself. People say ‘oh, it’s just relative, it’s just societies making up rules to live by’ – but I do think we have learned a few things. The reduction in violence over historical eras shows that we are getting something right. I don’t think moral relativists can just say that violence is arbitrarily sometimes good and sometimes bad – I think it’s very clearly a bad thing. So I think we are making moral progress in some sense – we are figuring out better ways of thinking about morality. One of the interesting things about our current world is that we are aware that a lot of ideas about morality are things going on in our culture and in our heads – and are not just the laws of nature – that’s very useful. Some people of course think that there is some ideal or best moral system – and maybe there is – but we’re not very good at finding it. It might turn out that in the long run, if there is some kind of ultimate sensible morality, we’re going to find it – but that might take a very long time and might take brains much more powerful than ours. It might turn out that all sufficiently advanced alien civilizations eventually figure out the right thing to do – and do it. But it could also turn out that when we meet real advanced aliens they’re going to be as confused about philosophy as we are – that’s one of the interesting things to find out about the universe.


Nick Bostrom: Why Focus on Existential Risk related to Machine Intelligence?

One can think of existential risk as a subcategory of global catastrophic risk – while global catastrophic risks (GCRs) are really bad, civilization has the potential to recover from such a global catastrophic disaster.
An existential risk is one from which there is no chance of recovery. The paradigm example is human extinction, which forecloses all future [human] lives worth living. The theories of value on which even relatively small reductions in net existential risk have enormous expected value mostly fall under population ethics – views, such as average or total utilitarianism, concerned with the well-being of the future of life in the universe. Since we haven’t seen any convincing evidence of life beyond Earth’s gravity well, it may be that there is no advanced, technologically capable life elsewhere in the observable universe. If we value lives worth living – and lots of them – we might also value filling the uninhabited parts of the universe with lives worth living, and arguably we need an advanced, technologically capable civilization to achieve this. Hence, if humans become extinct, it may be that evolution will never again produce a life form capable of escaping the gravity well and colonizing the universe with valuable life.
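To make the ‘enormous expected value’ point concrete, here is a minimal worked sketch under a total utilitarian view. The symbols N and δ are illustrative assumptions, not from the interview; the 10^16 lower bound on future lives is the conservative, Earth-bound figure Bostrom gives in the paper cited below.

```latex
% Minimal expected-value sketch (illustrative symbols, not from the source):
%   N      = number of worthwhile lives the future holds, conditional on survival
%   \delta = reduction in extinction probability achieved by some intervention
\[
  \mathbb{E}[\text{value of intervention}] \;\approx\; \delta \cdot N
\]
% Using Bostrom's conservative Earth-bound estimate of N \geq 10^{16}
% future lives, even a tiny reduction such as \delta = 10^{-9} gives
\[
  \delta \cdot N \;=\; 10^{-9} \times 10^{16} \;=\; 10^{7} \ \text{expected lives.}
\]
```

The expected value scales linearly with N, so on any view that assigns the future an astronomically large N, even minuscule reductions in extinction probability can dominate ordinary-scale interventions.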

Here we focus on the reasons to prioritize existential risk related to machine intelligence.

Say a machine intelligence is created with a theory of value outside of, contradictory to, or simply different enough from one that values human existence, or the existence of valuable life in the universe. Also imagine that this machine intelligence could act on its values in an exacting manner – it might cause humanity to become extinct on purpose, or as a side effect of implementing its values.

The paper ‘Existential Risk Prevention as Global Priority’ by Nick Bostrom clarifies the concept of existential risk further:

Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability. http://www.existential-risk.org

Interview with Nick Bostrom on Machine Intelligence and XRisk

I had the pleasure of doing an interview with Oxford philosopher Nick Bostrom on XRisk:

Transcription of interview:

In the last couple of years we’ve been focusing quite heavily on machine intelligence, partly because it seems to raise some significant existential risks down the road, and partly because relatively little attention has been given to this risk. When we are prioritizing what we want to spend our time researching, one variable we take into account is: how important is this topic that we could research? But another is: how many other people are already studying it? Because the more people who are already studying it, the smaller the difference that having a few extra minds focusing on that topic will make.
So, say, the topic of peace and war and how you can try to avoid international conflict is a very important topic – and many existential risks would be reduced if there were more global cooperation.
However, it is also hard to see how a very small group of people could make a substantial difference to today’s risk of arms races and wars. There are big interests involved, and so many people are already working on disarmament and peace and/or military strength, that although it would be great to make a change in this area, it is hard for a small number of people to do so. Contrast that with something like the risk from machine intelligence and the risk of superintelligence.
Only a relatively small number of people have been thinking about this, and there might be some low-hanging fruit there – some insights that might make a big difference. So that’s one of the criteria.
Now, we are also looking at other existential risks, and at things other than existential risk: we try to get a better understanding of what humanity’s situation in the world is. So we have been thinking about the Fermi Paradox, for example, and about the methodological tools that you need, like observation selection theory – how you can reason about these things. And to some extent also the more near-term impacts of technology, and of course the opportunities involved in all of this. It is always worth reminding oneself that although enormous technological powers will pose great new dangers, including existential risks, they also make it possible to achieve an enormous amount of good.
So one should bear in mind the opportunities as well that are unleashed by technological advance.

About Professor Nick Bostrom

Director & James Martin Research Fellow

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and a forthcoming book on Superintelligence. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

He is best known for his work in five areas: (i) the concept of existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) transhumanism, including related issues in bioethics and on consequences of future technologies; and (v) foundations and practical implications of consequentialism. He is currently working on a book on the possibility of an intelligence explosion and on the existential risks and strategic issues related to the prospect of machine superintelligence.

In 2009, he was awarded the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed in the FP 100 Global Thinkers list, Foreign Policy Magazine’s list of the world’s top 100 minds. His writings have been translated into more than 21 languages, and there have been some 80 translations or reprints of his works. He has done more than 500 interviews for TV, film, radio, and print media, and he has addressed academic and popular audiences around the world.

CV: http://www.nickbostrom.com/cv.pdf

Personal Web: http://www.nickbostrom.com

FHI Bio: https://www.fhi.ox.ac.uk/about/the-team/

Also consider joining the Facebook Group on Existential Risk: https://www.facebook.com/groups/ExistentialRisk