Posts

Nick Bostrom: Why Focus on Existential Risk related to Machine Intelligence?

One can think of existential risk as a subcategory of global catastrophic risk – while GCRs are really bad, civilization has the potential to recover from a global catastrophic disaster.
An existential risk is one from which there is no chance of recovery. The paradigm example is human extinction, which forecloses all future [human] lives worth living. Theories of value which imply that even relatively small reductions in net existential risk have enormous expected value mostly fall under population ethics, taking an average or total utilitarian view of the well-being of the future of life in the universe. Since we haven’t seen any convincing evidence of life outside Earth’s gravity well, it may be that there is no other technologically capable life in the observable universe. If we value lives worth living – and lots of lives worth living – we might also value filling the uninhabited parts of the universe with lives worth living, and arguably we need an advanced, technologically able civilization to achieve this. Hence, if humans become extinct, it may be that evolution will never again produce a life form capable of escaping the gravity well and colonizing the universe with valuable life.
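
To make the expected-value point concrete, here is a rough back-of-the-envelope illustration (the figures are assumptions for the sake of the example, not settled estimates). Suppose Earth-originating civilization could eventually support on the order of $N = 10^{16}$ lives. On a total utilitarian view, an intervention that reduces net existential risk by just one millionth, $\delta = 10^{-6}$, has an expected value of roughly

$$\Delta EV = \delta \times N = 10^{-6} \times 10^{16} = 10^{10}\ \text{lives}$$

– ten billion lives in expectation, which is why even tiny reductions in existential risk can dominate other priorities on such views.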

Here we consider the reasons to focus on existential risk related to machine intelligence.

Say a machine intelligence is created with a theory of value outside of, contradictory to, or simply different enough from one that values human existence, or the existence of valuable life in the universe. Also imagine that this machine intelligence could act on its values in an exacting manner – it might cause humanity to become extinct on purpose, or as a side effect of implementing its values.

The paper ‘Existential Risk Prevention as Global Priority‘ by Nick Bostrom clarifies the concept of existential risk further:

Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.
Source: http://www.existential-risk.org

Interview with Nick Bostrom on Machine Intelligence and XRisk

I had the pleasure of doing an interview with Oxford philosopher Nick Bostrom on XRisk:

Transcription of interview:

In the recent couple of years we’ve been focusing quite heavily on machine intelligence, partly because it seems to raise some significant existential risks down the road, and partly because relatively little attention has been given to this risk. So when we are prioritizing what we want to spend our time researching, one variable that we take into account is: how important is this topic that we could research? But another is: how many other people are already studying it? Because the more people who are already studying it, the smaller the difference that having a few extra minds focusing on that topic will make.
So, say the topic of peace and war – how you can try to avoid international conflict – is a very important topic; many existential risks would be reduced if there were more global cooperation.
However, it is also hard to see how a very small group of people could make a substantial difference to today’s risk of arms races and wars. There are big interests involved, and so many people already working on disarmament and peace and/or military strength, that it’s an area where it would be great to make a change – but hard for a small number of people to do so. Contrast this with something like the risk from machine intelligence and the risk of superintelligence.
Only a relatively small number of people have been thinking about this, and there might be some low-hanging fruit there – some insights that might make a big difference. So that’s one of the criteria.
Now, we are also looking at other existential risks, and at things other than existential risk – we try to get a better understanding of what humanity’s situation in the world is. So we have been thinking some about the Fermi paradox, for example, and about methodological tools that you need, like observation selection theory: how you can reason about these things. And to some extent also the more near-term impacts of technology, and of course the opportunities involved in all of this. It is always worth reminding oneself that although enormous technological powers will pose great new dangers, including existential risks, they also of course make it possible to achieve an enormous amount of good.
So one should bear this in mind – the opportunities as well that are unleashed with technological advance.

About Professor Nick Bostrom

Director & James Martin Research Fellow

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and a forthcoming book on Superintelligence. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

He is best known for his work in five areas: (i) the concept of existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) transhumanism, including related issues in bioethics and on consequences of future technologies; and (v) foundations and practical implications of consequentialism. He is currently working on a book on the possibility of an intelligence explosion and on the existential risks and strategic issues related to the prospect of machine superintelligence.

In 2009, he was awarded the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed in the FP 100 Global Thinkers list – Foreign Policy Magazine’s list of the world’s top 100 minds. His writings have been translated into more than 21 languages, and there have been some 80 translations or reprints of his works. He has done more than 500 interviews for TV, film, radio, and print media, and he has addressed academic and popular audiences around the world.

CV: http://www.nickbostrom.com/cv.pdf

Personal Web: http://www.nickbostrom.com

FHI Bio: https://www.fhi.ox.ac.uk/about/the-team/

Also consider joining the Facebook Group on Existential Risk: https://www.facebook.com/groups/ExistentialRisk

Conference: Thinking Machines in the Physical World

“Thinking Machines in the Physical World” invites cross-disciplinary conversations about the opportunities and threats presented by advances in cognitive computing:
  – What concrete, real-world possibilities does intelligence-focused technology open up?
  – What potential effects will “smart computers” exert on labor and jobs around the globe?
  – What are the broader social implications of these changes?

When: Wednesday, July 13, 2016 8:30 AM until Friday ~6pm (then dinner)
Where: Melbourne University Law School Building, Level 10, 185 Pelham Street, Carlton

Keynotes (see details here):

Prof Brian Anderson – Distinguished Professor at ANU College of Engineering and Computer Science.

Dr James Hughes – Executive Director of the Institute for Ethics and Emerging Technologies.

Prof M. Vidyasagar – Cecil & Ida Green Chair in Systems Biology Science

Prof Judy Wajcman – Anthony Giddens Professor of Sociology, London School of Economics

Dr Juerg von Kaenel – IBM Research, Cognitive Computing (IBM Watson)

Register here | Main website | Program

Professor Graeme Clark AC, Laureate Professor Emeritus, says: “It gives me great pleasure to have the opportunity to welcome your interest in the work of Norbert Wiener and invite you to Melbourne to participate in this important conference.”

Official Website: http://21stcenturywiener.org/
Video: https://www.youtube.com/watch?v=etBMY6Orj50
Meetup: http://www.meetup.com/Science-Technology-and-the-Future/events/228816058/
Google+: https://plus.google.com/events/chcmpbupi30ffps4kf94gtn2rpc
Facebook Event: https://www.facebook.com/events/625367860953411/

Peter Singer & David Pearce on Utilitarianism, Bliss & Suffering

Moral philosophers Peter Singer & David Pearce discuss some of the long-term issues with various forms of utilitarianism, the future of predation, and utilitronium shockwaves.

Topics Covered

Peter Singer

– long term impacts of various forms of utilitarianism
– Consciousness
– Artificial Intelligence
– Reducing suffering in the long run and in the short term
– Practical ethics
– Pre-implantation genetic screening to reduce disease and low mood
– Lives today are worth the same as lives in the future – though uncertainty must be brought to bear in deciding how one weighs up the importance of life
– The Hedonistic Imperative and how people react to it
– Correlation of high hedonic set points with productivity
– Existential risks and global catastrophic risks
– Closing factory farms

David Pearce

– Veganism and reducetarianism
– Red meat vs white meat – many more chickens than cattle are killed per ton of meat
– Valence research
– Should one eliminate suffering? And should we eliminate emotions of happiness?
– How can we answer the question of how far suffering is present in different life forms (like insects)?

“Talk of moral progress can make one sound naive. But even the darkest cynic should salute the extraordinary work of Peter Singer to promote the interests of all sentient beings.” – David Pearce

Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_cente…
– Science, Technology & the Future website: http://scifuture.org

Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I had a discussion with John Danaher on the Hedonistic Imperative & Superintelligence. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their own futures, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford Encyclopedia of Philosophy on Grounds for Moral Status).

As time goes on, the notion of strong artificial intelligence leading to superintelligence (which may herald something like an intelligence explosion) and ideas like the Hedonistic Imperative become less sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints brings me a paradoxical feeling of triumph and disempowerment.

John’s concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act. There is a danger that the outcome of HI or an intelligence explosion may be sentient life that is made very happy forever, but unable to make choices – a future focused entirely on bliss, ignoring other aspects of what makes for a valuable or worthwhile existence.

So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.

If the argument for moral agency is completely toppled by the argument against free will, then I can see why there would be no reason for it – and bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.

Also, the idea that moral agency and novelty should be ranked as auxiliary to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that they (agency/novelty) could easily be swapped out for most non-optimal moral agents in the quest for less suffering and more bliss.
The idea that, upon evaluating grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, when the time comes, should we be committed to resigning all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?

There probably should be some more discussion of the trade-offs between moral agency, peak experience and novelty.

Discussion in this video starts at 24:02.

Below is the whole interview with John Danaher:

The long-term future of AI (and what we can do about it) : Daniel Dewey at TEDxVienna

This has been one of my favourite short talks on AI impacts – simple, clear, and straight to the point. Recommended as an introduction to the ideas referred to in the title.

I couldn’t find the audio of this talk at TED – it has been added to archive.org:


Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Daniel worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute.

http://www.tedxvienna.at/


Brian Greene on Artificial Intelligence, the Importance of Fundamental Physics, Alien Life, and the Possible Future of Our Civilization

March 14th was Albert Einstein’s birthday, and also Pi Day, so it was a fitting day to be interviewing well-known theoretical physicist and string theorist Brian Greene – the author of a number of books including The Elegant Universe, Icarus at the Edge of Time, The Fabric of the Cosmos, and The Hidden Reality!
Many thanks to Suzi and Desh at THINKINC for helping organize this interview & for bringing Brian Greene to Australia for a number of shows (March 16 in Perth, March 18 in Sydney and March 19 in Melbourne) – check out www.thinkinc.org.au for more info!

Audio recording of the interview:

About the Interview with Brian Greene

Brian Greene discusses the implications of artificial intelligence and the news of DeepMind’s AI (AlphaGo) beating a world champion at the board game Go. He then discusses string theory, the territory of opinion on grand unifying theories of physics, the importance of supporting fundamental science, the possibility of alien life, the possible future of our space-faring civilization and, of course, gravitational waves!

In answer to the question on the importance of supporting fundamental research in science, Brian Greene said:

I tell them to wake up! Wake up and recognize that fundamental science has radically changed the way they live their lives today. If any of these individuals have a cell phone, or a personal computer, or perhaps they themselves or loved ones have been saved by an MRI machine… I mean, any of these devices rely on integrated circuits, which themselves rely on quantum physics – so IF those folks who were in charge in the 1920s had said, ‘hey, you guys working on quantum physics, that doesn’t seem to be relevant to anything in the world around us, so we’re going to cut your funding’ – well, those people would have short-circuited one of the greatest revolutions that our species has gone through – the information age, the technological age – so the bottom line is we need to support fundamental research, because we know historically that when you gain a deep understanding of how things work, we can often leverage that to then manipulate the world around us in spectacular ways! And that needs to be where our fundamental focus remains – in science!


Brian Randolph Greene is an American theoretical physicist and string theorist. He has been a professor at Columbia University since 1996 and chairman of the World Science Festival since co-founding it in 2008. Greene has worked on mirror symmetry, relating two different Calabi–Yau manifolds (concretely, relating the conifold to one of its orbifolds). He also described the flop transition, a mild form of topology change, showing that topology in string theory can change at the conifold point.

Greene has become known to a wider audience through his books for the general public, The Elegant Universe, Icarus at the Edge of Time, The Fabric of the Cosmos, The Hidden Reality, and related PBS television specials. He also appeared on The Big Bang Theory episode “The Herb Garden Germination“, as well as the films Frequency and The Last Mimzy. He is currently a member of the Board of Sponsors of the Bulletin of the Atomic Scientists.


Many thanks for listening!
Support me via Patreon
Please Subscribe to the YouTube Channel
Science, Technology & the Future on the web


Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.


Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M. Hutter, “Can Intelligence Explode?”, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pp. 143–166.
http://www.hutter1.net/publ/singularity.pdf

Also see:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/
http://2012.singularitysummit.com.au/2012/08/panel-intelligence-substrates-computation-and-the-future/
http://2012.singularitysummit.com.au/2012/01/marcus-hutter-to-speak-at-the-singularity-summit-au-2012/
http://2012.singularitysummit.com.au/agenda

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.
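
For the mathematically inclined, this can be written down explicitly. Below is a sketch of the AIXI action-selection rule as it appears in Hutter’s publications (my transcription – notation: $U$ is a universal Turing machine, $q$ ranges over candidate environment programs of length $\ell(q)$, the $a_i$ are actions, the $o_i r_i$ are observations and rewards, and $m$ is the horizon):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Environment programs consistent with the interaction history are weighted by the Solomonoff-style prior $2^{-\ell(q)}$, so shorter (simpler) explanations of the world count exponentially more, and the agent picks the action that maximizes expected total reward under this mixture over all computable environments.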


Professor Marcus Hutter

Research interests:

Artificial intelligence, Bayesian statistics, theoretical computer science, machine learning, sequential decision theory, universal forecasting, algorithmic information theory, adaptive control, MDL, image processing, particle physics, philosophy of science.

Bio:

Marcus Hutter is Professor in the Research School of Computer Science (RSCS) at the Australian National University in Canberra, Australia. He received his PhD and BSc in physics from the LMU in Munich and a Habilitation, MSc, and BSc in informatics from the TU Munich. Since 2000, his research at IDSIA and now ANU has centered around the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in 100+ publications and several awards. His book “Universal Artificial Intelligence” (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (50,000€ H-prize).

Metamorphogenesis – How a Planet can produce Minds, Mathematics and Music – Aaron Sloman

The universe is made up of matter, energy and information, interacting with each other and producing new kinds of matter, energy, information and interaction.
How? How did all this come out of a cloud of dust?
In order to find explanations we first need much better descriptions of what needs to be explained.

By Aaron Sloman
Abstract – and more info – Held at Winter Intelligence Oxford – Organized by the Future of Humanity Institute

Aaron Sloman

This is a multi-disciplinary project attempting to describe and explain the variety of biological information-processing mechanisms involved in the production of new biological information-processing mechanisms, on many time scales, from the earliest days of the planet – when there was no life, only physical and chemical structures, volcanic eruptions, asteroid impacts, solar and stellar radiation, and many other physical/chemical processes (or perhaps starting even earlier, when there was only a dust cloud in this part of the solar system?) – to the present.

Evolution can be thought of as a (blind) Theorem Prover (or theorem discoverer).
– Proving (discovering) theorems about what is possible (possible types of information, possible types of information-processing, possible uses of information-processing)
– Proving (discovering) many theorems in parallel (including especially theorems about new types of information and new useful types of information-processing)
– Sharing partial results among proofs of different things (Very different biological phenomena may share origins, mechanisms, information, …)
– Combining separately derived old theorems in constructions of new proofs (One way of thinking about symbiogenesis.)
– Delegating some theorem-discovery to neonates and toddlers (epigenesis/ontogenesis). (Including individuals too under-developed to know what they are discovering.)
– Delegating some theorem-discovery to social/cultural developments. (Including memes and other discoveries shared unwittingly within and between communities.)
– Using older products to speed up discovery of new ones (Using old and new kinds of architectures, sensori-motor morphologies, types of information, types of processing mechanism, types of control & decision making, types of testing.)

The “proofs” of discovered possibilities are implicit in evolutionary and/or developmental trajectories.

They demonstrate the possibility of:
– development of new forms of development,
– evolution of new types of evolution,
– learning of new ways to learn,
– evolution of new types of learning (including mathematical learning: working things out without requiring empirical evidence),
– evolution of new forms of development of new forms of learning (why can’t a toddler learn quantum mechanics?),
– new forms of learning supporting new forms of evolution,
– new forms of development supporting new forms of evolution (e.g. postponing sexual maturity until mate-selection, mating and nurturing can be influenced by much learning),
– … and ways in which social and cultural evolution add to the mix.

These processes produce new forms of representation, new ontologies and information contents, new information-processing mechanisms, new sensory-motor morphologies, new forms of control, new forms of social interaction, new forms of creativity, … and more. Some may even accelerate evolution.

A draft growing list of transitions in types of biological information-processing.

An attempt to identify a major type of mathematical reasoning with precursors in perception and reasoning about affordances, not yet replicated in AI systems.

Even in microbes I suspect there’s much still to be learnt about the varying challenges and opportunities faced by microbes at various stages in their evolution, including new challenges produced by environmental changes and new opportunities (e.g. for control) produced by previous evolved features and competences — and the mechanisms that evolved in response to those challenges and opportunities.

Example: which organisms were first able to learn about an enduring spatial configuration of resources, obstacles and dangers, only a tiny fragment of which can be sensed at any one time?
What changes occurred to meet that need?

Use of “external memories” (e.g. stigmergy)
Use of “internal memories” (various kinds of “cognitive maps”)

More examples to be collected here.

7th Annual Conference of the Australasian Bayesian Network Modelling Society (ABNMS2015)

November 23 – 24, 2015: Pre-Conference Workshop
November 25 – 26, 2015: Conference

[Official Website Here]

Location: Monash University, Caulfield, Melbourne (Australia)
Promo vid | Contact: abnms2015@abnms.org

Keynote Speakers: The conference organisers are pleased to announce that Dr Bruce Marcot of the US Forest Service, Dan Ababei from Lighttwist Software (Netherlands), and Assoc Prof Jonathan Keith from Monash University will deliver the keynote addresses.

You will be able to register for the tutorials and the conference separately or together.

Bayesian Intelligence blog post about the conference

– Dr. Kevin B. Korb is a Director and co-founder of Bayesian Intelligence, and a Reader at Monash University. He specializes in the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method, and informal logic. Email: kevin.korb (at) bayesian-intelligence.com

– Prof. Ann E. Nicholson is a Director and co-founder of Bayesian Intelligence and a professor at Monash University who specializes in Bayesian network modelling. She is an expert in dynamic Bayesian networks (BNs), planning under uncertainty, user modelling, Bayesian inference methods and knowledge engineering BNs. Email: ann (dot) nicholson (at) bayesian-intelligence (dot) com
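
For readers new to Bayesian networks, here is a minimal toy sketch of what a BN encodes and how inference over one works – a hypothetical rain/sprinkler example of my own, not taken from the conference materials:

```python
from itertools import product

# Structure: Rain -> GrassWet <- Sprinkler (all variables Boolean).
# The probabilities are illustrative assumptions only.
P_rain = {True: 0.2, False: 0.8}        # P(Rain)
P_sprinkler = {True: 0.1, False: 0.9}   # P(Sprinkler)
P_wet = {                               # P(GrassWet=True | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.00,
}

def joint(rain, sprinkler, wet):
    """Joint probability, factorised along the network structure."""
    p_w = P_wet[(sprinkler, rain)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# Query P(Rain=True | GrassWet=True) by summing out Sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(Rain=True | GrassWet=True) = {num / den:.3f}")  # ~0.695
```

Real BN software scales this same idea – a joint distribution factorised over a graph, plus inference by summing out unobserved variables – to far larger networks with much more efficient algorithms.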

Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
– Science, Technology & the Future website: http://scifuture.org

Vernor Vinge on the Technological Singularity

What is the Singularity? Vernor Vinge speaks about technological change, offloading cognition from minds into the environment, and the potential of Strong Artificial Intelligence.

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” – Vernor Vinge, “The Coming Technological Singularity”, 1993

Vernor Vinge coined and popularised the term “Technological Singularity” in his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.

Image courtesy of the Imaginary Foundation

Vinge published his first short story, “Bookworm, Run!”, in the March 1966 issue of Analog Science Fiction, then edited by John W. Campbell. The story explores the theme of artificially augmented intelligence by connecting the brain directly to computerised data sources. He became a moderately prolific contributor to SF magazines in the 1960s and early 1970s. In 1969, he expanded two related stories, (“The Barbarian Princess”, Analog, 1966 and “Grimm’s Story”, Orbit 4, 1968) into his first novel, Grimm’s World. His second novel, The Witling, was published in 1975.

Vinge came to prominence in 1981 with his novella True Names, perhaps the first story to present a fully fleshed-out concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others.


Vernor Vinge

Image courtesy of the Long Now Foundation