
Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as a subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical.
– Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economic growth and social change). (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind, such as humanity being succeeded by posthuman or artificial intelligences, a punctuated equilibrium transition, or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness cause increasing payoffs, but also increasing instability. Eventually this produces a crisis, beyond which point the dynamics must be different. (Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.) A toy comparison of these growth shapes is sketched below.
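
To make the differences between these model families more concrete, here is a minimal numerical sketch – an illustrative toy added for this post, not code from Sandberg’s paper, with arbitrary parameters. It integrates three toy growth laws: plain exponential growth (roughly model A), logistic growth with an inflexion point (model H), and hyperbolic growth driven by increasing returns, which diverges in finite time (the shape behind models C and I).

```python
# Toy comparison of the growth shapes behind different singularity models.
# Illustrative only: equations and parameters are arbitrary assumptions,
# not taken from Sandberg's paper.

def integrate(rate_fn, x0=1.0, dt=0.001, t_max=12.0, cap=1e9):
    """Euler-integrate dx/dt = rate_fn(x); stop early if x exceeds cap."""
    t, x = 0.0, x0
    while t < t_max:
        x += rate_fn(x) * dt
        t += dt
        if x > cap:
            return t, x  # effectively diverged: a finite-time "singularity"
    return t, x

r, K = 1.0, 100.0
growth_laws = [
    ("exponential (A)",  lambda x: r * x),                # fast, but never infinite
    ("logistic (H)",     lambda x: r * x * (1 - x / K)),  # inflexion point, then saturation
    ("hyperbolic (C/I)", lambda x: r * x ** 2),           # blows up near t = 1/(r*x0)
]
for name, law in growth_laws:
    t_end, x_end = integrate(law)
    print(f"{name:16s} reached x = {x_end:10.3g} at t = {t_end:5.2f}")
```

Only the hyperbolic case produces a genuine mathematical singularity; the exponential case merely grows quickly, and the logistic case eventually decelerates – which is why the different model families lead to such different expectations about the future.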


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Heavy-Tailed Distributions: What Lurks Beyond Our Intuitions?

Understanding heavy-tailed distributions is important for assessing likelihoods and impact scales when thinking about possible disasters – especially relevant to xRisk and Global Catastrophic Risk analysis. How likely is civilization to be devastated by a large-scale disaster, or even to go extinct?

In this video, Anders Sandberg discusses (with the aid of a whiteboard) how heavy-tailed distributions account for more than our intuitions tell us.

Considering large-scale disasters may be far more important than we intuit.

Transcript of dialog

So typically when people talk about probability they think about a nice probability distribution like the bell curve, or the Gaussian curve. This means that it’s most likely that you get something close to zero, and then less and less likely that you get very positive or very negative things – and this is a rather nice-looking curve.

However, many things in the world turn out to have much nastier probability distributions. A lot of disasters, for example, have a power law distribution. So if this is the size of a disaster and this is the probability, they fall off like this. This doesn’t look very dangerous at first. Most disasters are fairly small – there’s a high probability of something close to zero and a low probability of something large. But it turns out that the probability of getting a really large one can become quite big.

So suppose this one has alpha equal to 1 – that means the chance of getting a disaster of size 10 is proportional to 1 in 10; a disaster 10 times as large has just a tenth of that probability; and one 10 times larger again has a tenth of that.

That means we have quite a lot of probability of getting very, very large disasters – so in the Gaussian case getting something that is very far out here is exceedingly unlikely, but in the case of power laws you can actually expect to see some very, very large outbreaks.
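
As a rough numerical illustration of this point (added here, not part of the talk): for a power law with alpha equal to 1 the tail probability only shrinks by a factor of ten for every tenfold increase in size, while for a normal distribution it collapses far faster. The sketch below compares an assumed Pareto distribution with tail exponent alpha = 1 against a standard normal.

```python
# Tail probabilities P(X > x): power law (Pareto, alpha = 1) vs. standard normal.
# The specific distributions and numbers are assumptions for illustration only.
from math import erfc, sqrt

def pareto_tail(x, alpha=1.0, x_min=1.0):
    """Pareto survival function: P(X > x) = (x_min / x) ** alpha for x >= x_min."""
    return (x_min / x) ** alpha if x > x_min else 1.0

def normal_tail(x):
    """Standard normal survival function: P(X > x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * erfc(x / sqrt(2.0))

for x in (2, 10, 100, 1000):
    print(f"x = {x:5d}:  power law P(X>x) = {pareto_tail(x):.0e},  "
          f"normal P(X>x) = {normal_tail(x):.0e}")
```

At x = 10 the power-law tail still holds a 10% chance, while the normal tail is already below 1e-23; by x = 100 the normal probability is indistinguishable from zero – which is why intuitions trained on bell curves badly underestimate extreme events.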

So if you think about the times at which various disasters happen – they happen irregularly, and occasionally one is through the roof, and then another one. And you can’t of course tell when they will happen – that’s random. And you can’t really tell how big they are going to be, except that they’re going to be distributed in this way.

The real problem is that when something is bigger than any threshold you imagine... well, it’s not just going to be a little bit taller, it’s going to be a whole lot taller.

So if we’re going to see a war, for example, as large as or even larger than the Second World War, we shouldn’t expect it to kill just a million people more. We could expect it to kill tens of millions, or most likely hundreds of millions, or even billions of people more – which is a rather scary prospect.

So the problem here is that disasters seem to have these heavy tails. A ‘heavy tail’, in probability slang, means that the probability mass over here – the chance that something very large happens – falls off very slowly. And this is of course a big problem, because we tend to think in terms of normal distributions.

Normal distributions are nice. We say they’re normal because a lot of the things in our everyday life are distributed like this. The height of people, for example – very rarely do we meet somebody who’s a kilometre tall. However, when we meet people and think about how much they’re making, or how much money they have – well, Bill Gates. He is far, far richer than just ten times you and me; he’s far out here.

So when we get to the land where we have these fat, heavy tails, both the richest (if we are talking about rich people) and the dangers (if we are talking about disasters) tend to be much bigger than we can normally think about.

Adam: Hmm yes definitely un-intuitive.

Anders: Mmm, and the problem is of course that our intuitions are all shaped by what’s going on here in the normal realm. We have this experience of what has happened so far in our lives, and once we venture out here and talk about very big events our intuitions suddenly become very bad. We make mistakes. We don’t really understand the consequences, cognitive biases take over, and this can of course completely mess up our planning.

So we invest far too little in handling the really big disasters and we’re far too uninterested in going for the big wins in technology and science.

We should pay more attention to probability theory (especially heavy-tailed distributions) in order to discover and avoid disasters that lurk beyond our intuitions.
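
To see how stark the height-versus-wealth contrast Anders describes really is, here is a quick simulation (an illustration added for this post with made-up parameters, not something from the video): it draws ‘heights’ from a normal distribution and ‘wealth’ from a heavy-tailed Pareto distribution, then compares the largest sample in each case to the typical one.

```python
# Largest-vs-typical sample: normal "heights" vs. heavy-tailed "wealth".
# All parameters below are made up for illustration.
import random

random.seed(0)
N = 1_000_000

# Heights: normal, mean 170 cm, standard deviation 10 cm.
heights = [random.gauss(170.0, 10.0) for _ in range(N)]

# Wealth: Pareto with tail exponent alpha = 1.16 (a rough "80/20"-style figure),
# scaled so the minimum is $10,000.
alpha, w_min = 1.16, 10_000.0
wealth = [w_min * random.paretovariate(alpha) for _ in range(N)]

def summarize(name, samples):
    samples = sorted(samples)
    median, largest = samples[len(samples) // 2], samples[-1]
    print(f"{name}: median = {median:,.0f}, largest = {largest:,.0f}, "
          f"ratio = {largest / median:,.1f}x")

summarize("height (cm)", heights)  # the tallest is only ~30% above the median
summarize("wealth ($)", wealth)    # the richest is orders of magnitude above the median
```

Nothing hinges on the exact numbers; the point is the shape: in the normal case the extreme is barely above the typical value, while in the heavy-tailed case it dwarfs it.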


Also see –
– Anders Sandberg: The Survival Curve of Our Species: Handling Global Catastrophic and Existential Risks

Anders Sandberg on Wikipedia: https://en.wikipedia.org/wiki/Anders_Sandberg


Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org