Posts

Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (such as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf
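As a rough illustration of the ‘small increasing returns’ point (a minimal sketch in generic notation, not one of the paper’s specific models): if the growth rate of some quantity x – technology, or copyable mental capital – scales even slightly superlinearly with x, the solution no longer merely grows exponentially but diverges in finite time:

    \frac{dx}{dt} = a\,x^{1+\epsilon}
    \quad\Longrightarrow\quad
    x(t) = \frac{x_0}{\left(1 - \epsilon\,a\,x_0^{\epsilon}\,t\right)^{1/\epsilon}},
    \qquad
    t^{*} = \frac{1}{\epsilon\,a\,x_0^{\epsilon}}

With ε = 0 the equation gives ordinary exponential growth, x(t) = x_0 e^{at}; for any ε > 0, however small, x(t) blows up as t approaches the finite time t*. That is the sense in which even small increasing returns can generically produce radical growth.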

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical. – Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economic growth and social change). (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop (see the toy sketch after this list). (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind such as humanity being succeeded by posthuman or artificial intelligences,
a punctuated equilibrium transition or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness cause increasing payoffs but also increase instability. Eventually this produces a crisis, beyond which point the dynamics must be different.
(Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration (see the sketch after this list). (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
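Two of these models lend themselves to quick, hedged illustration. First, the intelligence explosion (C): below is a toy simulation, in my own notation rather than anything from the paper, in which a system’s capacity to improve itself scales with its current capability. With a linear feedback term the result is ordinary compounding growth; make the feedback even mildly superlinear and growth becomes explosive.

    # Toy sketch of "an optimisation power applied to itself" (model C).
    # Illustrative assumptions only: the variable names, the feedback law and
    # the exponent are mine, not a model taken from Sandberg's paper.

    def self_improvement(capability=1.0, feedback=0.1, exponent=1.0, steps=25):
        """One round of self-improvement per step.

        exponent == 1.0 -> compounding (exponential) growth;
        exponent > 1.0  -> each gain buys disproportionately bigger gains.
        """
        trajectory = [capability]
        for _ in range(steps):
            capability += feedback * capability ** exponent
            trajectory.append(capability)
        return trajectory

    if __name__ == "__main__":
        steady = self_improvement(exponent=1.0)      # exponential growth
        explosive = self_improvement(exponent=1.5)   # superlinear feedback
        for step in range(0, 26, 5):
            print(f"step {step:2d}: "
                  f"linear feedback {steady[step]:8.1f}   "
                  f"superlinear feedback {explosive[step]:10.1f}")

After 25 steps the linear-feedback run has grown roughly tenfold, while the superlinear run is already in the thousands and still accelerating – a crude picture of why a ‘strong feedback loop’ is qualitatively different from mere exponential progress.

Second, the inflexion-point model (H). In the standard logistic form (a common textbook parameterisation, not necessarily Modis’s exact one), growth accelerates up to the inflexion point and decelerates afterwards:

    \frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right),
    \qquad
    x(t) = \frac{K}{1 + \left(\tfrac{K}{x_0} - 1\right)e^{-rt}}

Here r is the growth rate and K the carrying capacity; the inflexion point sits at x = K/2, where the growth rate dx/dt peaks at rK/4. On this reading the ‘singularity’ is simply the moment of fastest change, not a runaway.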


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Suffering, and Progress in Ethics – Peter Singer

Suffering is generally bad – Peter Singer (who is a hedonistic utilitarian) and most Effective Altruists would agree with this. Though in addressing the need for suffering today, Peter acknowledges that, as we are presently constituted, suffering is useful as a warning sign (e.g. against further injury). But what about the future?
What if we could eliminate suffering?
Perhaps in the future we will have advanced technological interventions to warn us of danger that will be functionally similar to suffering, but without the nasty raw feels.
Peter Singer, like David Pearce, suggests that if we could eliminate the suffering of non-human animals capable of suffering – perhaps in some way that is difficult to imagine now – this would be a good thing.

Video Interview:

I would see no reason to regret the absence of suffering. – Peter Singer
Peter sees no reason to lament the disappearance of suffering, though perhaps people may say it would be useful for understanding the literature of the past. Perhaps there are some indirect uses for suffering – but on balance Peter thinks that the elimination of suffering would be an amazingly good thing to do.

Singer thinks it is interesting to speculate about what might be possible for the future of human beings, if we do survive over the longer term. To what extent are we going to be able to enhance ourselves? In particular, to what extent are we going to become more ethical human beings – which brings up the question of ‘Moral Enhancement’.

Have we made progress in ethics? Peter argues in his book ‘The Expanding Circle’ that our species has expanded the circle of its ethical concern – an idea Steven Pinker more recently took up in ‘The Better Angels of Our Nature’. This expansion has happened over the millennia: beyond the initial tribal group, then to the national level, beyond ethnic groups to all human beings, and now we are starting to extend moral concern to non-human sentient beings as well.

Steven Pinker thinks that increases in our ethical consideration are bound up with increases in our intelligence (as documented by James Flynn – the Flynn Effect – though this research is controversial: the gains could reflect actual increases in intelligence or just a greater ability to do abstract reasoning) and with increases in our ability to reason abstractly.

As mentioned earlier, there are other ways in which we may increase our ability and tendency to be more moral (see Moral Enhancement), and in the future we may discover genes that influence us to think more about others and to dwell less on negative emotions like anger or rage. It is hard to say whether people will use these kinds of moral enhancers voluntarily, or whether we will need state policies to encourage people to use moral enhancers in order to produce better communities – and there are a lot of concerns here that people may legitimately have about how the moral enhancement project takes place. Peter sees this as a fascinating prospect and thinks it would be great to be around to see how things develop over the next couple of centuries.

Note Steven Pinker said of Peter’s book:

Singer’s theory of the expanding circle remains an enormously insightful concept, which reconciles the existence of human nature with political and moral progress. It was also way ahead of its time. . . . It’s wonderful to see this insightful book made available to a new generation of readers and scholars. – Steven Pinker

The Expanding Circle

Abstract: What is ethics? Where do moral standards come from? Are they based on emotions, reason, or some innate sense of right and wrong? For many scientists, the key lies entirely in biology – especially in Darwinian theories of evolution and self-preservation. But if evolution is a struggle for survival, why are we still capable of altruism?

In his classic study The Expanding Circle, Peter Singer argues that altruism began as a genetically based drive to protect one’s kin and community members but has developed into a consciously chosen ethic with an expanding circle of moral concern. Drawing on philosophy and evolutionary psychology, he demonstrates that human ethics cannot be explained by biology alone. Rather, it is our capacity for reasoning that makes moral progress possible. In a new afterword, Singer takes stock of his argument in light of recent research on the evolution of morality.

References:
The Expanding Circle book page at Princeton University: http://press.princeton.edu/titles/9434.html

The Flynn Effect: http://en.wikipedia.org/wiki/Flynn_effect

Peter Singer – Ethics, Evolution & Moral Progress – https://www.youtube.com/watch?v=91UQAptxDn8

For more on Moral Enhancement see Julian Savulescu’s and others writings on the subject.

Subscribe to this Channel: http://youtube.com/subscription_center?add_user=TheRationalFuture

Science, Technology & the Future: http://scifuture.org

A Non-trivial Pursuit of Happiness – Paradise Engineering with David Pearce :-)

It is non-trivial for a couple of reasons:
a) the pursuit of this vision of happiness is not trivial – it is likely to be a very challenging endeavor (though totally worthy of the effort)
b) the aim is to achieve non-trivial modes of happiness – kinds of information-sensitive gradients of bliss (as opposed to being stuck in a narrow local maximum of ecstatic stupor)

Imagine the best experience possible – and then imagine that it was lower than tomorrow’s hedonic floor.

It may be that our descendants will have the chance to re-engineer themselves to be able to experience well-being far beyond what we can experience and imagine today.

Full blown paradise engineering is likely not something that people alive now should expect, though if we are ethically serious, we should be investigating ways to redesign our default mode of being to flourish in states of bliss.

Transcript

Think of the most wonderful experience of your life – now imagine if life could be as good as that, or rather imagine if life could be better than that all the time. Just imagine if your best experience ever were lower than tomorrow’s hedonic floor. Other things being equal, wouldn’t it be better if we lived in paradise?
Now, for much of history this kind of talk would simply be dismissed as utopian dreaming; manipulating the environment in innumerable different ways has been tried, and to be honest we’re not significantly happier now than our ancestors on the African savanna – certainly not if suicide, depression and marital-breakup statistics et cetera are taken seriously.
However, thanks to biotechnology it will now be possible to re-engineer ourselves; to edit our own source code; to enjoy life animated by gradients of bliss – other things being equal, doesn’t it make sense to make that our default option?

What could go wrong? Well, lots of things could go wrong – but that’s true of any experiment, and that’s what having kids involves today. When two people decide to bring children into the world, chances are they are going to be bringing an awful lot of suffering into the world too.
Whereas in future, when one creates new life, one will potentially be creating gradients of lifelong well-being. And if we’re ethically serious, that’s the approach I think we ought to be taking.

A lot of people will probably think: “Well, that’s all well and good; maybe our children, grandchildren or great-grandchildren will enjoy this kind of fabulous life.”
“What about me now?” – because we’re human, one can listen to these wonderful tales some futurists relate of how good life could be in future – a future of super-intelligence, super-longevity and super-happiness, all these wonderful things – but what about now? One still has bills to pay, taxes, relationship problems, just the messy nitty-gritty reality of life. Unfortunately I don’t have a panacea now – or rather, the kinds of interventions one can suggest (good diet, exercise, sleep discipline…) are not as exciting as the tantalizing prospect that our children and grandchildren will enjoy.

But after that somber note, perhaps it’s worth suggesting that with designer drugs and with future somatic gene therapy it will be possible for adults my age and older to enjoy the best time of their lives too – perhaps not full-blown paradise engineering, the richness that our descendants may enjoy – but there is no reason to be skeptical that the later years of our lives can be incomparably richer than anything that has gone before.

The Hedonistic Imperative

The Hedonistic Imperative outlines how genetic engineering and nanotechnology will abolish suffering in all sentient life.

The abolitionist project is hugely ambitious but technically feasible. It is also instrumentally rational and morally urgent. The metabolic pathways of pain and malaise evolved because they served the fitness of our genes in the ancestral environment. They will be replaced by a different sort of neural architecture – a motivational system based on heritable gradients of bliss. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. It is predicted that the world’s last unpleasant experience will be a precisely dateable event.

Two hundred years ago, powerful synthetic pain-killers and surgical anesthetics were unknown. The notion that physical pain could be banished from most people’s lives would have seemed absurd. Today most of us in the technically advanced nations take its routine absence for granted. The prospect that what we describe as psychological pain, too, could ever be banished is equally counter-intuitive. The feasibility of its abolition turns its deliberate retention into an issue of social policy and ethical choice.

Subscribe to our YouTube Channel | Science, Technology & the Future

Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I had a discussion with John Danaher on the Hedonistic Imperative & Superintelligence. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their futures, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford writing on Grounds for Moral Status).

As time goes on, the notion of strong artificial intelligence leading to Superintelligence (which may herald something like an Intelligence Explosion) and ideas like the Hedonistic Imperative become less like sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints gives me a paradoxical feeling of triumph and disempowerment.

John’s concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act – and that there is a danger that the outcome of HI or an Intelligence Explosion may result in sentient life being made very happy forever, but unable to make choices: a future focused entirely on bliss whilst ignoring other aspects of what makes for a valuable or worthwhile existence.

So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.

If the argument for moral agency is completely toppled by the argument against free will then I can see why there would be no reason for it – and that bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.

Also, the idea that moral agency and novelty should be ranked as auxiliary aspects to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that, for most non-optimal moral agents, they (agency/novelty) could easily be swapped out in the quest for less suffering and more bliss, which is troublesome.
The idea that, upon evaluating grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, when the time comes, should we be committed to resigning all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have a meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?

There probably should be some more discussion of the trade-offs between moral agency, peak experience and novelty.

Discussion in this video here starts at 24:02

Below is the whole interview with John Danaher:

Wireheading with David Pearce

Is the Hedonistic Imperative equivalent to wire-heading?
People are often concerned about the future being a cyberpunk dystopia where people are hard-wired into pleasure centers, smacked out like lotus-eating milksops devoid of meaningful existence. Does David Pearce’s Hedonistic Imperative entail a future where we are all in thrall to permanent experiential orgasms – intravenously hotwired into our pleasure centers via some kind of soma-like drug, turning us into blissful idiots?

Adam Ford: I think some people often conflate or distill the Hedonistic Imperative to mean ‘wireheading’ – what do you (think)?

David Pearce: Yes, I mean, clearly if one does argue that we’re going to phase out the biology of suffering and live out lives of perpetual bliss then it’s very natural to assimilate this to something like ‘wireheading’ – but for all sorts of reasons I don’t think wireheading (i.e. intracranial self-stimulation of the reward centers and its pharmacological equivalent) is a plausible scenario for our future. Not least, there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children.
I think a much more credible scenario is the idea that we’re going to re-calibrate the hedonic treadmill and allow ourselves and our future children to enjoy lives based on gradients of intelligent bliss. And one of the advantages of re-calibration rather than straightforward hedonic maximization is that by urging recalibration one isn’t telling people they ought to be giving up their existing preferences or values: if your hedonic set-point (i.e. your average state of wellbeing) is much higher than it is now, your quality of life will really be much higher – but it doesn’t involve any sacrifice of the values you hold most dear.
To put it rather simplistically – clearly where one lies on the hedonic axis will impose serious cognitive biases (someone who is, let’s say, depressive or prone to low mood will have a very different set of biases from someone who is naturally cheerful). But nonetheless, so long as we aim for a motivational architecture of gradients of bliss, it doesn’t entail giving up anything you want to hold onto. I think that’s really important, because a lot of people will be worried that if, yes, we do enter into some kind of secular paradise, it will involve giving up their normal relationships, their ordinary values and what they hold most dear. Re-calibration, unlike wireheading, does not entail this.

Adam Ford: That’s interesting – people think that, you know, as soon as you turn on the Hedonistic Imperative you are destined for a very narrow set of values – that it could be just one peak experience being replayed over and over again, in some narrow local maximum.

David Pearce: Yes – I suppose one thinks of (kind of) crazed wirehead rats – in fairness, if one does imagine orgasmic bliss, most people don’t complain that their orgasms are too long (and I’m not convinced that there is something desperately wrong with orgasmic bliss that lasts weeks, months, years or even centuries) – but one needs to examine the wider sociological picture and ask ‘is it really sustainable for us to become blissed out as distinct from blissful?’

Adam Ford: Right – and by blissed out you mean something like the lotus-eaters found in the Odyssey?

David Pearce: Yes, I mean clearly it is one version of paradise and bliss – they call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. It seems that, crudely speaking, motivation (which is mediated by the mesolimbic dopamine system) and raw bliss (which is associated with mu-opioid activation of our twin hedonic hotspots) lie on orthogonal axes. Now they’re very closely interrelated (thanks to natural selection) – but in principle we can amplify one or damp down the other. Empirically, at any rate, it seems to be the case today that the happiest people are also the most motivated – they have the greatest desires – I mean, this runs counter to the old Buddhist notion that desire is suffering – but if you actually look at people who are depressive or chronically depressed, quite frequently they have an absence of desire or motivation. But the point is we should be free to choose – yes, it is potentially hugely liberating, this control over our reward architecture, our pleasure circuitry, that biotechnology offers – but let’s get things right. We don’t want to mess things up and produce the equivalent of large numbers of people on heroin – and this is why I so strenuously urge the case for re-calibration – in the long run genetically, in the short run by various non-recreational drugs.

Clearly it is one version of paradise and bliss – they call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. – David Pearce

Adam Ford: Ok… People may be worried that re-calibrating someone is akin to disrupting the continuum of self (or this enduring metaphysical ego) – so that the person at the other end wouldn’t really be a continuation of the person at the beginning. What do you think? How would you respond to that sort of criticism?

David Pearce: It depends how strict one’s conception of personal identity is. Now, would you be worried to learn tomorrow that you had won the national lottery (for example)? It would transform your lifestyle, your circle of friends – would this trigger the anxiety that the person who was living the existence of a multi-millionaire wasn’t really you? Well, perhaps you should be worried about this – but on the whole most people would be relatively relaxed at the prospect. I would see this more as akin to a small child growing up – yes, in one sense as one becomes a mature adult one has killed the toddler, or lost the essence of what it was to be a toddler – but only in a very benign sense. And by aiming for re-calibration and hedonic enrichment rather than maximization, there is much less of a risk of losing anything that you think is really valuable or important.

Adam Ford: Okay – well that’s interesting – we’ll talk about value. In order not to lose forms of value – even if you don’t use them (the values) much – you might have some values that you leave up in the attic to gather dust, like toys that you don’t play with anymore but might want to pick up once in a thousand years or whatnot. How do you then preserve complexity of value while also achieving high hedonic states – do you think they can go hand in hand? Or do you think preserving complexity of value reduces the likelihood that you will be able to achieve optimal hedonic states?

David Pearce: As an empirical matter – and I stress empirical here – it seems to be the case that the happiest are responsive to the broadest possible range of rewarding stimuli – it tends to be depressives who get stuck in a rut. So other things being equal – by re-calibrating ourselves, becoming happy and then superhappy – we can potentially at any rate, yes, enrich the complexity of our lives with a range of rewarding stimuli – it makes getting stuck in a rut less likely both for the individual and for civilization as a whole.
I think one of the reasons we are afraid of some kind of loss of complexity is that the idea of heaven – including the traditional Christian heaven – can sound a bit monotonous, and for happy people at least, one of the experiences they find most unpleasant is boredom. But essentially it should be a matter of choice – yes, someone who is very happy to, let’s say, listen to a piece of music or contemplate art should be free to do so, and not forced into leading a very complex or complicated life – but equally, folk who want to do a diverse range of things – well, that’s feasible too.

For all sorts of reasons I don’t think wireheading… is a plausible scenario for our future. Not least there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children. – David Pearce

– video/audio interview continues on past 10:00