The Ghost in the Quantum Turing Machine – Scott Aaronson

Interview on whether machines can be conscious with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.

Check out interview segment “The Winding Road to Quantum Supremacy” with Scott Aaronson – covering progress in quantum computation, whether there are things that quantum computers could do that classical computers can’t, etc.

Transcript

Adam Ford: In ‘Could a Quantum Computer have Subjective Experience?‘ you speculate that for a process to be conscious it may have to fully participate in the arrow of time, and that this points to decoherence. If pressed, how might you try to formalize this?

Scott Aaronson: So yeah, I did write this kind of crazy essay five or six years ago called “The Ghost in the Quantum Turing Machine“, where I tried to explore a position that seemed to me to be mysteriously under-explored in all of the debates about ‘could a machine be conscious?’. We want to be thoroughgoing materialists, right? There’s no magical ghost that defies the laws of physics; brains are physical systems that obey the laws of physics just like any others.
But there is at least one very interesting difference between a brain and any digital computer that’s ever been built – and that is that the state of a brain is not obviously copyable; that is, it’s not obviously knowable to an outside person well enough to predict what the person will do in the future, without scanning the person’s brain so invasively that you would kill them. And so there is a sort of privacy, or opacity if you like, to a brain that there is not to a piece of code running on a digital computer.
And so there are all sorts of classic philosophical conundrums that play on that difference. For example, suppose that human-level AI does eventually become possible and we have simulated people running inside of our computers – if I were to murder such a person, in the sense of deleting their file, is that okay as long as I kept a backup somewhere? As long as I can just restore them from backup? Or what if I’m running two exact copies of the program on two computers next to each other – is that instantiating two consciousnesses, or is it really just one consciousness, because there’s nothing to distinguish the one from the other?
So could I blackmail an AI into doing what I wanted by saying: even if I don’t have access to you as an AI, since I have your code, if you don’t give me a million dollars then I’m going to create a million copies of your code and torture them? And – if you think about it – you are almost certain to be one of those copies, because there are far more of them than there are of you, and they’re all identical!
So there are all these puzzles that philosophers have wondered about for generations – the nature of identity, how identity persists across time, whether it can be duplicated across space – and somehow, in a world with copyable AIs, they would all become much more real!
And so one point of view that you could take is this: what if I can predict exactly what someone is going to do? I don’t mean just saying, as a philosophical matter, that I could predict your actions if I were a Laplace demon and knew the complete state of the universe – because I don’t, in fact, know the complete state of the universe – but imagine that I could do it as an actual practical matter: I could build an actual machine that would perfectly predict, down to the last detail, everything you would do before you had done it.
Well, then, in what sense do I still have to respect your personhood? I could just say I have unmasked you as a machine; my simulation has every bit as much right to personhood as you do at this point – or maybe they’re just two different instantiations of the same thing.
So another possibility, you could say, is that maybe what we like to think of as consciousness only resides in those physical systems that, for whatever reason, are uncopyable – such that if you tried to make a perfect copy, you would ultimately run into what we call the no-cloning theorem in quantum mechanics, which says that you cannot copy the exact physical state of an unknown system, for quantum mechanical reasons. And this would suggest a view where personal identity is very much bound up with the flow of time; with things that happen that are evanescent; that can never happen again exactly the same way, because the world will never reach exactly the same configuration.
A related puzzle concerns: what if I took your consciousness, or took an AI, and ran it on a reversible computer? Some people believe that any appropriate simulation brings about consciousness – which is a position you can take. But what if I ran the simulation backwards, as I can always do on a reversible computer? What if I ran the simulation, computed it, and then uncomputed it? Have I now caused nothing to have happened? Or did I cause one forward consciousness and then one backward consciousness – whatever that means? And did the backward one have a different character from the forward consciousness?
But we know a whole class of phenomena that in practice can only ever happen in one direction in time – and these are thermodynamic phenomena: phenomena that create waste heat, that create entropy, that take small microscopic unknowable degrees of freedom and amplify them to macroscopic scale. In principle those macroscopic records could become microscopic again. If I make a measurement of a quantum state, then at least according to, let’s say, many-worlds quantum mechanics, in principle that measurement could always be undone. And yet in practice we never see those things happen – for basically the same reasons why we never see an egg spontaneously unscramble itself, or a shattered glass leap up to the table and reassemble itself: namely, these would represent vastly improbable decreases of entropy. And so the speculation was that maybe this sort of irreversibility – this increase of entropy that we see in all ordinary physical processes, and in particular in our own brains – is important to consciousness.
Or important to what we like to think of as free will – we certainly don’t have an example to say that it isn’t. But the truth of the matter is, I don’t know. I set out all the thoughts that I had about it in this essay five years ago, and then, having written it, I decided that I’d had enough of metaphysics – it made my head hurt too much – and I was going to go back to the better-defined questions in math and science.

Adam Ford: In ‘Is Information Physical?’ you note that if a system crosses a Schwarzschild bound it collapses into a black hole – do you think this could be used to put an upper bound on the amount of consciousness in any given physical system?

Scott Aaronson: Well, let me decompose your question a little bit. What quantum gravity considerations are believed to let you do, today, is put a universal bound on how much computation can be going on in a physical system of a given size, and also on how many bits can be stored there. And the bounds are precise enough that I can just tell you what they are. It appears that a physical system – let’s say one surrounded by a sphere of a given surface area – can store at most about 10^69 bits, or rather 10^69 qubits, per square meter of surface area of the enclosing boundary. And there is a similar limit on how many computational steps it can do over its whole history.
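As a quick sanity check on that 10^69 figure: if we take the limit Scott is quoting to be the holographic entropy bound (at most A/4 nats of entropy for an enclosing surface of area A, in Planck units – my assumption about which bound is meant), a few lines of Python reproduce the order of magnitude:

```python
import math

# Order-of-magnitude check, assuming the quoted limit is the
# holographic bound: at most A / (4 * l_p^2) nats of entropy inside
# an enclosing surface of area A, where l_p is the Planck length.
l_planck = 1.616e-35                # Planck length in meters
planck_area = l_planck ** 2         # ~2.6e-70 m^2

bits_per_m2 = 1.0 / (4 * planck_area * math.log(2))  # nats -> bits
print(f"{bits_per_m2:.1e} bits per square meter")    # ~1.4e+69
```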
So now I think your question reduces to the question: can we upper-bound how much consciousness there is in a physical system – whatever that means – in terms of how much computation is going on in it, or in terms of how many bits are there? And that’s a little hard for me to think about, because I don’t know what we mean by ‘amount of consciousness’. Am I ten times more conscious than a frog? Am I a hundred times more conscious? I don’t know – some of the time I feel less conscious than a frog.
But I am sympathetic to the idea that there is some minimum of computational interestingness in any system that we would like to talk about as being conscious. There is this ancient speculation of panpsychism, which would say that every electron, every atom is conscious – and to me that’s fine; you can speculate that if you want. We know nothing to rule it out; there are no physical laws attached to consciousness that would tell us it’s impossible. The question is just: what does it buy you to suppose that? What does it explain? And in the case of the electron, I’m not sure that it explains anything!
Now you could ask: does it even explain anything to suppose that we’re conscious? Maybe not – at least not for anyone beyond ourselves. There’s this ancient conundrum that we each know we’re conscious, presumably, by our own subjective experience, while as far as we know everyone else might be an automaton – and if you really think about that consistently, it could lead you to become a solipsist. Alan Turing, in his famous 1950 paper that proposed the Turing test, had a wonderful remark about this – something like: ‘A’ is liable to think that ‘A’ thinks while ‘B’ does not, while ‘B’ is liable to think ‘B’ thinks but ‘A’ does not. But in practice it is customary to adopt the polite convention that everyone thinks. It was a very British way of putting it, right? We adopt the polite convention that solipsism is false; that people – or any entities, let’s say – that can exhibit complex, goal-directed, intelligent behaviors like ours are probably conscious like we are. That’s a criterion that would apply to other people; it would not apply to electrons (I don’t think); and it’s plausible that there is some bare minimum of computation in any entity to which that criterion would apply.

Adam Ford: Sabine Hossenfelder – I forgot her name for a moment there – {Sabine Hossenfelder, yes} – she had a scathing review of panpsychism recently; did you read that?

Scott Aaronson: If it was very recent then I probably didn’t read it – I did read an excerpt where she seemed to be saying that panpsychism is experimentally ruled out. If she was saying that, I don’t agree; I don’t even see how you would experimentally rule out such a thing. You’re free to postulate as much consciousness as you want on the head of a pin – I would just say that if it doesn’t have an empirical consequence, if it’s not affecting the world, if it’s not affecting the behavior of that head of a pin in a way that you can detect, then Occam’s razor just itches to slice it out from our description of the world. That’s the way I would put it personally.
So I posted a detailed critique of integrated information theory (IIT), which is Giulio Tononi’s proposed theory of consciousness, on my blog. My critique was basically this: Tononi comes up with a specific numerical measure that he calls ‘Phi’, and he claims that a system should be regarded as conscious if and only if its Phi is large. Now, the actual definition of Phi has changed over time – it’s changed from one paper to another, it’s not always clear how to apply it, and there are many technical objections that could be raised against this criterion. But what I respect about IIT is that at least it sticks its neck out. It proposes a very clear criterion – much clearer than competing accounts do – to tell you which physical systems you should regard as conscious and which not.
Now the danger of sticking your neck out is that it can get cut off – and indeed I think that IIT is not only falsifiable but falsified, because as soon as this criterion was written down, it was easy to construct physical systems that have enormous values of Phi – much, much larger than a human has – that I don’t think anyone would really want to regard as intelligent, let alone conscious, or even very interesting.
My examples show that, basically, Phi is large if and only if your system has a lot of interconnection – if it’s very hard to decompose into two components that interact with each other only weakly – so that you have a high degree of information integration. The point of my counterexamples was to say: well, this cannot possibly be the sole relevant criterion, because a standard error-correcting code, as is used for example on every compact disc, also has an enormous amount of information integration – but should we therefore say that every error-correcting code implemented in some piece of electronics is conscious? Even more than that, a giant grid of logic gates just sitting there doing nothing would have a very large value of Phi – and we can multiply examples like that.
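To make the point about error-correcting codes concrete, here is a toy sketch – my own illustration, not the specific construction from Scott’s blog post – showing how even the tiny Hamming(7,4) code densely couples its bits:

```python
# Toy model: Hamming(7,4) encodes 4 data bits into a 7-bit codeword
# [p1 p2 d1 p3 d2 d3 d4], each parity bit covering three data bits.
def hamming74_encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

base = hamming74_encode(0, 0, 0, 0)
for i in range(4):
    bits = [0, 0, 0, 0]
    bits[i] = 1                              # flip a single data bit
    codeword = hamming74_encode(*bits)
    changed = sum(b != c for b, c in zip(base, codeword))
    print(f"data bit {i + 1} touches {changed} codeword bits")
# Every data bit touches 3 or 4 codeword bits: the parity checks
# overlap, so no bipartition of the bits interacts only weakly. That
# dense coupling is what drives up "integration" in such systems.
```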
Tononi then posted a big response to my critique, and his response was basically: you’re just relying on intuition; you’re just saying these systems are not conscious because my intuition says they aren’t – but that’s parochial; why should you expect a theory of consciousness to accord with your intuition? And he then just went ahead and said: yes, the error-correcting code is conscious; yes, the giant grid of XOR gates is conscious – and if they have a thousand times larger value of Phi than a brain, then they are a thousand times more conscious than a human is. The way I described it was: he didn’t just bite the bullet, he devoured a bullet sandwich with mustard. Which was not what I was expecting. But the critique – that I’m supposedly saying ‘any scientific theory has to accord with intuition’ – I think is completely mistaken; I think that’s really a mischaracterization of my view.
I’ll be the very first to tell you that science has overturned common-sense intuition over and over and over. For example, temperature feels like an intrinsic quality of a material; it doesn’t feel like it has anything to do with motion, with atoms jiggling around at a certain speed – but we now know that it does. But when scientists first arrived at that modern conception of temperature in the eighteen hundreds, what was essential was that the new criterion agreed with the old criterion that fire is hotter than ice – so at least in the cases where we knew what we meant by hot or cold, the new definition agreed with the old definition. And then the new definition went further, to tell us many counterintuitive things that we didn’t know before – but at least it reproduced the way in which we were using words previously.
Even when Copernicus and Galileo discovered that the earth is orbiting the Sun, the new theory was able to account for our observation that we are not flying off the earth – it said that’s exactly what you would expect to happen, because of these new principles of inertia and so on.
But if a theory of consciousness says that this giant blank wall, or this grid, is highly conscious just sitting there doing nothing – whereas even a simulated person, or an AI that passes the Turing test, would not be conscious if it happens to be organized in such a way that it has a low value of Phi – then I say: okay, the burden is on you to prove to me that this Phi notion you have defined has anything whatsoever to do with what I was calling consciousness. You haven’t even shown me cases where they agree with each other, from which I should extrapolate to the hard cases – the ones where I lack an intuition, like: at what point is an embryo conscious? or when is an AI conscious? The theory seems to have gotten wrong the only things it could possibly have gotten right, and at that point I think there is nothing to compel a skeptic to believe that this particular quantity Phi has anything to do with consciousness.

The Winding Road to Quantum Supremacy – Scott Aaronson

Interview on quantum computation with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.

Check out interview segment “The Ghost in the Quantum Turing Machine” – covering whether a machine can be conscious, whether information is physical and integrated information theory.

Transcript

Scott Aaronson: Okay so – Hi, I’m Scott Aaronson. I’m a computer science professor at the University of Texas at Austin, and my main interest is the capabilities and limits of quantum computers, and more broadly what computer science and physics have to tell each other. And I got interested in it, I guess, because it was hard not to be – as a teenager it just seemed clear to me that the universe is a giant video game that obeys certain rules, and so if I really wanted to understand the universe, maybe I could ignore the details of physics and just think about computation.
But then with the birth of quantum computing and the dramatic discoveries of the mid-1990s (like Shor’s algorithm for factoring huge numbers), it became clear that physics actually changes the basic rules of computation – so that was something I felt I had to understand. Twenty years later we’re still trying to understand it, and along the way we may also be able to build some devices that outperform classical computers – namely quantum computers – and use them to do some interesting things.
But to me that’s really just icing on the cake; really I just want to understand how things fit together. To tell you the truth, when I first heard about quantum computing (I think from reading some popular article in the mid-90s about Shor’s algorithm, which had only recently been discovered), my first reaction was: this sounds like obvious hogwash; this sounds like some physicists who just do not understand the first thing about computation, inventing a proposal that just tries every possible solution in parallel. None of these things scale, and in computer science there had been decades of experience with that – of people saying: why don’t you build a computer using a bunch of mirrors? or using soap bubbles? or using folding proteins?
There are all kinds of ideas that on paper look like they could evaluate an exponential number of solutions in only a linear amount of time, but they’re always idealizing something. When you examine them carefully enough, you find that the amount of energy blows up on you exponentially, or the precision with which you would need to measure becomes exponentially fine, or something else becomes totally unrealistic – and I thought the same must be true of quantum computing. But in order to be sure, I had to read something about it.
So while I was working over a summer at Bell Labs, doing work that had nothing to do with quantum computing, my boss was nice enough to let me spend some time reading up on the basics of quantum computing – and that was really a revelation for me, because I came to accept that quantum mechanics is the real thing. It is a thing of comparable enormity to the basic principles of computation – you could say the principles of Turing – and it is exactly the kind of thing that could modify some of those principles. But the biggest surprise of all, I think, was that despite not being a physicist – not having any skill with partial differential equations or the other tools of the physicist – I could actually understand something about quantum mechanics.
And ultimately, to learn the basic rules of how a quantum computer would work, and to start thinking about what it would be good for – quantum algorithms and things like that – it’s enough to be conversant with vectors and matrices. So you need to know a little bit of math, but not that much: you need to know linear algebra, and that’s about it.
And I feel like this is a kind of secret that gets buried in almost all the popular articles; they make it sound like quantum mechanics is just this endless profusion of counterintuitive things: particles can be in two places at once; a cat can be both dead and alive until you look at it (and why is that not just a fancy way of saying the cat is either alive or dead and you don’t know which until you look? – they never quite explain that part); particles can have spooky action at a distance and affect each other instantaneously; particles can tunnel through walls! It all sounds hopelessly obscure, as if there’s no hope for anyone who’s not a PhD in physics to understand any of it.
But the truth of the matter is that there’s this one counterintuitive hump you have to get over, which is a certain change to – a generalization of – the rules of probability. Once you’ve gotten that, all the other things are just different ways of talking about, or different manifestations of, that one change. And a quantum computer in particular is just a computer that tries to take advantage of this one change to the rules of probability that the physicists discovered in the 1920s was needed to account for our world. And so that was really a revelation for me – that even computer scientists, math people, people who are not physicists, can actually learn this and start contributing to it!
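Here is a minimal sketch of that one change, using nothing beyond the vectors and matrices mentioned above (an illustrative example, not anything from the interview itself): amplitudes are complex numbers that can cancel, which ordinary probabilities never do.

```python
import numpy as np

# A qubit is a unit vector of complex amplitudes; |0> = (1, 0).
ket0 = np.array([1, 0], dtype=complex)

# Gates are unitary matrices. The Hadamard gate:
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

plus = H @ ket0                 # equal superposition of |0> and |1>
print(np.abs(plus) ** 2)        # Born rule probabilities: [0.5 0.5]

# Apply H again: the amplitudes interfere and we return to |0> with
# certainty -- no classical mixing of probabilities can do this.
print(np.abs(H @ plus) ** 2)    # [1. 0.]
```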

Adam Ford: So it’s interesting that often when you try to pursue an idea, the practical gets in the way – we try to get to the ideal without actually considering the practical – and they feel like enemies. Should we be letting the ideal be the enemy of the practical?

Scott Aaronson: Well I think that from the very beginning it was clear that there is a theoretical branch of quantum computing which is where you just assume you have as many of these quantum bits (qubits) as you could possibly need, and they’re perfect; they stay perfectly isolated from their environment, and you can do whatever local operations on them you might like, and then you just study how many operations would you need to factor a number, or solve some other problem of practical importance. And the theoretical branch is really the branch where I started out in this field and where I’ve mostly been ever since.
And then there’s the practical branch, which asks: what will it take to actually build a device that instantiates this theory – where the qubits are actually the energy levels of an electron, or the spin states of an atomic nucleus, or are otherwise somehow instantiated in the physical world? They will be noisy, they will be interacting with their environment, and we will have to take heroic efforts to keep them sufficiently isolated from their environments – which is needed in order to maintain their superposition states. How do we do that?
Well, we’re going to need some kind of fancy error-correcting codes to do that, and there are theoretical questions there as well: how do you design those error-correcting codes?
But there are also practical questions: how do you engineer a system where the error rates are low enough that these codes can even be used at all – so that applying them doesn’t simply create more error than it fixes? What should be the physical basis for qubits? Should it be superconducting coils? Ions trapped in a magnetic field? Photons? Some new topological state of matter? Actually all four of those proposals, and many others, are being pursued now!
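For a feel for how such codes work, here is a toy simulation of the classic 3-qubit bit-flip code – my own minimal sketch, not any of the hardware proposals above – in which two parity measurements locate an error without disturbing the encoded amplitudes:

```python
import numpy as np

# The 3-qubit bit-flip code: a|0> + b|1> is encoded as a|000> + b|111>,
# a state vector in an 8-dimensional space.
a, b = 0.6, 0.8
encoded = np.zeros(8)
encoded[0b000], encoded[0b111] = a, b

def flip(state, q):
    """Apply a bit-flip (Pauli X) to physical qubit q."""
    out = np.empty_like(state)
    for i, amp in enumerate(state):
        out[i ^ (1 << q)] = amp
    return out

corrupted = flip(encoded, 1)     # noise flips the middle qubit

# Syndrome: the parities of qubit pairs (0,1) and (1,2). Every basis
# state in the superposition gives the same parities, so measuring
# them locates the error without revealing (or disturbing) a and b.
i = int(np.flatnonzero(corrupted)[0])
s01 = ((i >> 0) ^ (i >> 1)) & 1
s12 = ((i >> 1) ^ (i >> 2)) & 1
bad_qubit = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]

recovered = flip(corrupted, bad_qubit) if bad_qubit is not None else corrupted
print(np.allclose(recovered, encoded))   # True
```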
So I would say that until fairly recently – like five years ago or so – the theoretical and practical branches of the field were pretty disjoint from each other; they were never enemies, so to speak. We might poke fun at each other sometimes, but we were never enemies. The field always rose or fell as a whole, and we all knew that. But we just didn’t have a whole lot to say to each other scientifically, because the experimentalists were just trying to get one or two qubits to work well – and they couldn’t even do that much – while we theorists were thinking: well, suppose you’ve got a billion qubits, or some arbitrary number – what could you do, and what would still be hard to do even then?
A lot of my work has actually been about the limitations of quantum computers – or, as I like to say, the study of what you can’t do even with computers that you don’t have. And only recently have the experimentalists finally gotten qubits to work pretty well in isolation, so that it now makes sense to start scaling things up – not yet to a million qubits, but maybe to 50 qubits, maybe to 60, maybe to a hundred. This, as it happens, is what Google and IBM and Intel and a bunch of startup companies are trying to do right now. Some of them are hoping to have devices within the next year or two that might or might not do anything useful, but that, if all goes well, we hope will at least be able to do something interesting – in the sense of something that would be challenging for a classical computer to simulate, and that at least proves the point that we can do something this way that is beyond what classical computers can do.
And so, as a result, the most nitty-gritty experimentalists are now actually talking to us theorists, because now they need to know – not just as a matter of intellectual curiosity, but as a fairly pressing practical matter – once we get 50 or 100 qubits working, what do we do with them? What do we do with them, first of all, that is hard to simulate classically? How sure are you that there’s no fast classical method to do the same thing? How do we verify that we’ve really done it? And is it useful for anything?
And ideally they would like us to come up with proposals that actually fit the constraints of the hardware they’re building. Eventually none of this should matter – eventually a quantum programmer should be able to pay as little attention to the hardware as a classical programmer today has to pay to the details of the transistors.
But in the near future, when we only have 50 or 100 qubits, you’re going to have to make the maximum use of each and every qubit you’ve got; the actual details of the hardware are going to matter; and the result is that even we theorists have had to learn about these details in a way that we didn’t before.
There’s been a coming together of the theoretical and practical branches of the field just in the last few years that to me has been pretty exciting.

Adam Ford: So you think we will have something equivalent to functional programming for quantum computing in the near future?

Scott Aaronson: Well, there actually has been a fair amount of work on the design of quantum programming languages. There are a bunch of them out there now that you can download and try out if you’d like: there’s one called Quipper, there’s Q# from Microsoft, and there are several others. Of course we don’t yet have very good hardware to run the programs on; mostly you can just run them in classical simulation, which naturally only works well for up to about 30 or 40 qubits, and then it becomes too slow. But if you would like to get some experience with quantum programming, you can try these things out today, and many of them do try to provide higher-level functionalities, so that you’re not just doing the quantum analog of assembly-language programming, but can think in higher-level modules, or program functionally. I would say that in quantum algorithms we’ve mostly just been doing theory rather than implementing anything, but we have had to learn to think that way: if we had to think in terms of each individual qubit, each individual operation on one or two qubits, we would never get very far, right? So we have to think in higher-level terms – there are certain modules that we know can be done. One of them is called the Quantum Fourier Transform, and it’s actually the heart of Shor’s famous algorithm for factoring numbers (it has other applications as well). Another is called amplitude amplification; that’s the heart of Grover’s famous algorithm for searching a long list of items in about the square root of the number of steps that you would need classically, and it’s also a quantum-algorithm design primitive that we can plug in as a black box, with many applications.
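That square-root speedup is easy to see in a small classical simulation of Grover’s algorithm (an illustrative sketch with arbitrary parameters, not a quantum program):

```python
import numpy as np

n = 10                       # qubits
N = 2 ** n                   # size of the search space
marked = 583                 # index of the item we're looking for

# Start in the uniform superposition over all N basis states.
state = np.full(N, 1 / np.sqrt(N))

# One Grover iteration = oracle (flip the sign of the marked
# amplitude) + diffusion (reflect every amplitude about the mean) --
# the amplitude-amplification primitive.
steps = int(np.pi / 4 * np.sqrt(N))
for _ in range(steps):
    state[marked] *= -1
    state = 2 * state.mean() - state

print(steps)                 # 25 steps, vs ~N/2 = 512 classically
print(state[marked] ** 2)    # probability of finding the item ~ 0.999
```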
So we do think in these higher-level terms, but the set of higher-level abstractions is different from what it would be for classical computing – and so you have to learn those. But the basic idea of decomposing a complicated problem by breaking it down into subcomponents – that’s exactly the same in quantum computing as it is in classical computing.

Adam Ford: Are you optimistic with regards to quantum computing in the short to medium term?

Scott Aaronson: You’re asking what I’m optimistic about – so, I feel like the field has made amazing progress, both on the theory side and on the experimental side. We’re not there yet, but we know a lot more than we did a decade ago. Some of what were my favorite open problems as a theorist a decade ago have now been resolved – some of them within the last year. And the hardware – the qubits – is not yet good enough to build a scalable quantum computer; in that sense the skeptics can legitimately say we’re not there yet – well, no duh, we’re not. But if you look at the coherence times of the qubits, at what you can do with them, and compare that to where things were 10 or 20 years ago, there’s been orders-of-magnitude progress. So the analogy that I like to make: Charles Babbage laid down the basic principles of classical computing in the 1820s – not with as much mathematical rigor as Turing would achieve later, but the basic ideas were there. He had what today we would call a design for a universal computer.
So now imagine someone then asking: ‘So when is this Analytical Engine going to get built? Will it be in the 1830s, or will it take all the way until the 1840s?’ In fact it took more than a hundred years for a technology to be invented – namely the transistor – that fully realized Babbage’s vision. The vacuum tube came along earlier, and you could say it partially realized that vision, but it was just not reliable enough to be scalable in the way the transistor was. Optimistically, we are now in the very, very early vacuum-tube era of quantum computing. We don’t yet have the quantum-computing analog of the transistor; people don’t even agree about which technology is the right one to scale up. Is it superconducting qubits? Trapped ions? Photonics? Topological matter? All these different approaches are being pursued in parallel, and the partisans of each have what sound like compelling arguments for why none of the other approaches could possibly scale. I hope they’re not all correct. People have only just recently gotten to the stage where one or two qubits work well in isolation, and where it makes sense to try to scale up to 50 or 100 of them and see whether they work well together at that kind of scale.
And so I think the big thing to watch for in the next five to ten years is what’s been saddled with the somewhat unfortunate name of ‘quantum supremacy’ (a term coined before Trump, I hasten to say). This is just a term for doing something with a quantum computer that is not necessarily useful, but that at least is classically hard – something that, as I was saying earlier, proves the point that you can do things that would take a lot longer to simulate with a classical computer. This is what Google and some others are going to take their best shot at within the next couple of years. What puts it in the realm of possibility is that a mere 50 or 100 qubits, if they work well enough, should already be enough to get us this – and in principle you may be able to do it without needing error correction. Once you need error correction, the resources required multiply enormously: even the simplest of what’s called ‘fault-tolerant computing’ might take many thousands of physical qubits. And everyone agrees that if you ultimately want to scale up and realize the true promise of quantum computing – or, let’s say, to threaten our existing methods of cryptography – then you’re going to need this fault tolerance. But that, I expect, we’re not going to see in the next five to ten years.
If we do see it, that will be a huge shock – as big a shock as it would have been to tell someone in 1939 that there would be a nuclear weapon in six years. In that case there was a world war that accelerated the timeline from what it would otherwise have been; in this case I hope there won’t be a world war that accelerates this timeline. But my guess would be that if all goes well, quantum supremacy might be achievable within the next decade, and I hope that after that we could start to see some initial applications of quantum computing – probably some very, very specialized ones; things that we can already get with a hundred or so non-error-corrected qubits. By necessity these are going to be very special things: they might mostly be physics simulations, or simulations of some simple chemistry problems.
I actually have a proposed application for near-term quantum computers, which is to generate cryptographically secure random numbers – random numbers that you could prove to a skeptic really were generated randomly. It turns out that even a 50- or 60-qubit quantum computer should already be enough to give us that. But truly scalable quantum computing – the kind that could threaten cryptography, and that could also speed up optimization problems and things like that – will probably require error correction. I could be pleasantly surprised, but I’m not optimistic about that part becoming real in the next five to ten years. Since everyone likes an optimist, though, I’ll try to be optimistic that we will take big steps in that direction, and maybe even get there within my lifetime.

Also see this and this – parts of an interview with Mike Johnson conducted by Andrés Gómez Emilson and me. This interview with Christof Koch will also likely be of interest.

Uncovering the Mysteries of Affective Neuroscience – the Importance of Valence Research with Mike Johnson

Valence in overview

Adam: What is emotional valence (as opposed to valence in chemistry)?

Mike: Put simply, emotional valence is how pleasant or unpleasant something is. A somewhat weird fact about our universe is that some conscious experiences do seem to feel better than others.

 

Adam: What makes things feel the way they do? What makes some things feel better than others?

Mike: This sounds like it should be a simple question, but neuroscience just doesn’t know. It knows a lot of random facts about what kinds of experiences, and what kinds of brain activation patterns, feel good and which feel bad, but it doesn’t have anything close to a general theory here.


And the way affective neuroscience talks about this puzzle sometimes sort of covers this mystery up, without solving it. For instance, we know that certain regions of the brain, like the nucleus accumbens and ventral pallidum, seem to be important for pleasure, so we call them “pleasure centers”. But we don’t know what makes something a pleasure center. We don’t even know how common painkillers like acetaminophen (paracetamol) work! Which is kind of surprising.

In contrast, the hypothesis about valence I put forth in Principia Qualia would explain pleasure centers and acetaminophen and many other things in a unified, simple way.

 

Adam: How does the hypothesis about valence work?

Mike: My core hypothesis is that symmetry in the mathematical representation of an experience corresponds to how pleasant or unpleasant that experience is. I see this as an identity relationship which is ‘True with a capital T’, not merely a correlation.  (Credit also goes to Andres Gomez Emilsson & Randal Koene for helping explore this idea.)

What makes this hypothesis interesting is that
(1) On a theoretical level, it could unify all existing valence research, from Berridge’s work on hedonic hotspots, to Friston & Seth’s work on predictive coding, to Schmidhuber’s idea of a compression drive;

(2) It could finally explain how the brain’s so-called “pleasure centers” work – they function to tune the brain toward more symmetrical states!

(3) It implies lots and lots of weird, bold, *testable* hypotheses. For instance, we know that painkillers like acetaminophen, and anti-depressants like SSRIs, actually blunt both negative *and* positive affect, but we’ve never figured out how. Perhaps they do so by introducing a certain type of stochastic noise into acute & long-term activity patterns, respectively, which disrupts both symmetry (pleasure) and anti-symmetry (pain).

 

Adam: What kinds of tests would validate or dis-confirm your hypothesis? How could it be falsified and/or justified by weight of induction?

Mike: So this depends on the details of how activity in the brain generates the mind. But I offer some falsifiable predictions in PQ (Principia Qualia):

  • If we control for degree of consciousness, more pleasant brain states should be more compressible;
  • Direct, low-power stimulation (TMS) in harmonious patterns (e.g. 2hz+4hz+6hz+8hz…160hz) should feel remarkably more pleasant than stimulation with similar-yet-dissonant patterns (2.01hz+3.99hz+6.15hz…).
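As a crude toy operationalization of the first prediction (my sketch, using zlib as a stand-in for a principled compressibility measure – not a protocol from Principia Qualia), one could compare how well a “harmonious” signal compresses versus a dissonant one:

```python
import zlib
import numpy as np

t = np.linspace(0, 1, 10_000, endpoint=False)   # 1 second of "signal"

def compression_ratio(freqs):
    """Compressed/raw size of a quantized sum-of-sines, via zlib."""
    signal = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    raw = np.round(127 * signal / len(freqs)).astype(np.int8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

harmonious = compression_ratio([2, 4, 6, 8])        # integer-related
dissonant = compression_ratio([2.01, 3.99, 6.15])   # near-miss pattern
print(harmonious, dissonant)  # the harmonious signal should compress better
```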

Those are some ‘obvious’ ways to test this. But my hypothesis also implies odd things, such as that chronic tinnitus (ringing in the ears) should produce affective blunting (a lessened ability to feel strong valence).

Note: see https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/ and http://opentheory.net/2018/08/a-future-for-neuroscience/ for a more up-to-date take on this.

 

Adam: Why is valence research important?

Mike Johnson: Put simply, valence research is important because valence is important. David Chalmers famously coined “The Hard Problem of Consciousness”, or why we’re conscious at all, and “The Easy Problem of Consciousness”, or how the brain processes information. I think valence research should be called “The Important Problem of Consciousness”. When you’re in a conscious moment, the most important thing to you is how pleasant or unpleasant it feels.

That’s the philosophical angle. We can also take the moral perspective, and add up all the human and non-human animal suffering in the world. If we knew what suffering was, we could presumably use this knowledge to more effectively reduce it and make the world a kinder place.

We can also take the economic perspective, and add up all the person-years, capacity to contribute, and quality of life lost to Depression and chronic pain. A good theory of valence should allow us to create much better treatments for these things. And probably make some money while doing it.

Finally, a question I’ve been wondering for a while now is whether having a good theory of qualia could help with AI safety and existential risk. I think it probably can, by helping us see and avoid certain failure-modes.

 

Adam: How can understanding valence help make future AIs safer? (E.g., in terms of defining how an AI should approach making us happy, or in terms of reinforcement mechanisms for AI?)

Mike: Last year, I noted a few ways a better understanding of valence could help make future AIs safer on my blog. I’d point out a few notions in particular though:

  • If we understand how to measure valence, we could use this as part of a “sanity check” for AI behavior. If some proposed action would cause lots of suffering, maybe the AI shouldn’t do it.
  • Understanding consciousness & valence seems important for treating an AI humanely. We don’t want to inadvertently torture AIs – but how would we know?
  • Understanding consciousness & valence seems critically important for “raising the sanity waterline” on metaphysics. Right now, you can ask 10 AGI researchers what consciousness is, or what has consciousness, or at what level of abstraction to define value, and you’ll get at least 10 different answers. This is absolutely a recipe for trouble. But I think this is an avoidable mess if we get serious about understanding this stuff.

 

Adam: Why the information theoretical approach?

Mike: The way I would put it, there are two kinds of knowledge about valence: (1) how pain & pleasure work in the human brain, and (2) universal principles which apply to all conscious systems, whether they’re humans, dogs, dinosaurs, aliens, or conscious AIs.

It’s counter-intuitive, but I think these more general principles might be a lot easier to figure out than the human-specific stuff. Brains are complicated, but it could be that the laws of the universe, or regularities, which govern consciousness are pretty simple. That’s certainly been the case when we look at physics. For instance, my iPhone’s processor is super-complicated, but it runs on electricity, which itself actually obeys very simple & elegant laws.

Elsewhere I’ve argued that:

>Anything piped through the complexity of the brain will look complex, regardless of how simple or complex it starts out as. Similarly, anything will look irreducibly complex if we’re looking at it from the wrong level of abstraction.

 

Adam: What do you think of Thomas A. Bass’s view of information theory? He thinks that (at least in many cases) it has not been easy to turn data into knowledge; that there is a pathological attraction to information which is making us ‘sick’ – he calls it Information Pathology. If his view offers any useful insights to you concerning avoiding ‘Information Pathology’, what would they be?

Mike: Right, I would agree with Bass that we’re swimming in neuroscience data, but it’s not magically turning into knowledge. There was a recent paper called “Could a neuroscientist understand a microprocessor?” which asked whether the standard suite of neuroscience methods could successfully reverse-engineer the 6502 microprocessor used in the Atari 2600 and NES. This should be easier than reverse-engineering a brain, since it’s a lot smaller and simpler, and since they were analyzing it in software they had all the data they could ever ask for – but it turned out that the methods they were using couldn’t cut it. Which really raises the question of whether these methods can make progress on reverse-engineering actual brains. As the paper puts it, neuroscience thinks it’s data-limited, but it’s actually theory-limited.

The first takeaway from this is that even in the age of “big data” we still need theories, not just data. We still need people trying to guess Nature’s structure and figuring out what data to even gather. Relatedly, I would say that in our age of “Big Science” relatively few people are willing or able to be sufficiently bold to tackle these big questions. Academic promotions & grants don’t particularly reward risk-taking.

 

Adam: Information-theoretic frameworks – what is your “Eight Problems” framework, and how does it contrast with Giulio Tononi’s Integrated Information Theory (IIT)? How might IIT help address valence in a principled manner? What is lacking in IIT, and how does your ‘Eight Problems’ framework address this?

Mike: IIT is great, but it’s incomplete. I think of it as *half* a theory of consciousness. My “Eight Problems for a new science of consciousness” framework describes what a “full stack” approach would look like, what IIT will have to do in order to become a full theory.

The two biggest problems IIT faces are that (1) it’s not compatible with physics, so we can’t actually apply it to any real physical systems, and (2) it says almost nothing about what its output means. Both of these are big problems! But IIT is also the best and only game in town in terms of quantitative theories of consciousness.

Principia Qualia aims to help fix IIT, and also to build a bridge between IIT and valence research. If IIT is right, and we can quantify conscious experiences, then how pleasant or unpleasant this experience is should be encoded into its corresponding mathematical object.

 

Adam: What are the three principles for a mathematical derivation of valence?

Mike: First, a few words about the larger context. Probably the most important question in consciousness research is whether consciousness is real, like an electromagnetic field is real, or an inherently complex, irreducible linguistic artifact, like “justice” or “life”. If consciousness is real, then there’s interesting stuff to discover about it, like there was interesting stuff to discover about quantum mechanics and gravity. But if consciousness isn’t real, then any attempt to ‘discover’ knowledge about it will fail, just like attempts to draw a crisp definition for ‘life’ (elan vital) failed.

If consciousness is real, then there’s a hidden cache of predictive knowledge waiting to be discovered. If consciousness isn’t real, then the harder we try to find patterns, the more elusive they’ll be – basically, we’ll just be talking in circles. David Chalmers refers to a similar distinction with his “Type-A vs Type-B Materialism”.

I’m a strong believer in consciousness realism, as are my research collaborators. The cool thing here is, if we assume that consciousness is real, a lot of things follow from this – like my “Eight Problems” framework. Throw in a couple more fairly modest assumptions, and we can start building a real science of qualia.

Anyway, the formal principles are the following:

  1. Consciousness can be quantified. (More formally, that for any conscious experience, there exists a mathematical object isomorphic to it.)
  2. There is some order, some rhyme & reason & elegance, to consciousness. (More formally, the state space of consciousness has a rich set of mathematical structures.)
  3. Valence is real. (More formally, valence is an ordered property of conscious systems.)

 

Basically, they combine to say: this thing we call ‘valence’ could have a relatively simple mathematical representation. Figuring out valence might not take an AGI several million years. Instead, it could be almost embarrassingly easy.
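One possible way to render the three principles formally (my gloss on the wording above, not notation taken from Principia Qualia), writing E for the set of conscious experiences and M for a space of mathematical objects:

```latex
\begin{align*}
&\text{(1) Quantifiability:} && \forall e \in E \ \exists\, m(e) \in M
    \ \text{with}\ e \cong m(e)\\
&\text{(2) Structure:} && M \ \text{carries rich structure, e.g. a metric}\ d\\
&\text{(3) Valence realism:} && \exists\, v : M \to \mathbb{R}
    \ \text{monotone in ``more pleasant than''}
\end{align*}
```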

 

Adam: Do Qualia Structuralism, Valence Structuralism and Valence Realism relate to the philosophy-of-physics principles of realism and structuralism? If so, is there an equivalent ontic Qualia Structuralism and Valence Structuralism?

Mike: “Structuralism” is many things to many contexts. I use it in a specifically mathematical way, to denote that the state space of qualia quite likely embodies many mathematical structures, or properties (such as being a metric space).

Re: your question about ontics, I tend to take the empirical route and evaluate claims based on their predictions whenever possible. I don’t think predictions change if we assume realism vs structuralism in physics, so maybe it doesn’t matter. But I can get back to you on this. 🙂

 

Adam: What about the Qualia Research Institute, which I’ve also recently heard about? :D It seems both you (Mike) and Andrés Gómez Emilson are doing some interesting work there.

Mike: We know very little about consciousness. This is a problem, for various and increasing reasons – it’s upstream of a lot of futurist-related topics.

But nobody seems to know quite where to start unraveling this mystery. The way we talk about consciousness is stuck in “alchemy mode” – we catch glimpses of interesting patterns, but it’s unclear how to systematize this into a unified framework. How to turn ‘consciousness alchemy’ into ‘consciousness chemistry’, so to speak.

Qualia Research Institute is a research collective which is working on building a new “science of qualia”. Basically, we think our “full-stack” approach cuts through all the confusion around this topic and can generate hypotheses which are novel, falsifiable, and useful.

Right now, we’re small (myself, Andres, and a few others behind the scenes) but I’m proud of what we’ve accomplished so far, and we’ve got more exciting things in the pipeline. 🙂

Also see the 2nd part, and the 3rd part of this interview series. Also this interview with Christof Koch will likely be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Antispecism & Compassionate Stewardship – David Pearce

I think our first ethical priority is to stop doing harm, and right now in our factory farms billions of non-human animals are being treated in ways that, if our victims were human, would get the perpetrators locked up for life. And the sentience (and, for what it’s worth, the sapience) of a pig compares with that of a pre-linguistic toddler. A chicken is perhaps no more intellectually advanced or sentient than a human infant. But before considering the suffering of free-living animals, we need to consider, I think, the suffering we’re causing our fellow creatures.

Essentially it’s a lifestyle choice – do we want to continue to exploit and abuse other sentient beings because we like the taste of their flesh, or do we want to embrace a cruelty-free vegan lifestyle? Some people would focus on treating other sentient beings less inhumanely. I’d say that we really need an ethical revolution in which our focus is: how can we help other sentient beings, rather than harm them?

It’s very straightforward indeed to be a vegetarian. Vegetarians statistically tend to live longer, they record higher IQ scores, they tend to be slimmer – it’s very easy to be a vegetarian. A strict vegan lifestyle requires considerably more effort. But over the medium to long run I think our focus should be on going vegan.

In the short run I think we should be closing factory farms and slaughterhouses. And given that factory farming and slaughterhouses are the greatest source of severe, chronic, readily avoidable suffering in the world today, until they are gone, any talk of compassionate stewardship of the rest of the living world is fanciful.

Will ethical argument alone persuade us to stop exploiting and killing other non-human beings because we like the taste of their flesh? Possibly not. Realistically, I think one wants a twin-track strategy that combines animal advocacy with the development of in-vitro meat. But I would strenuously urge anyone watching this program to consider giving up meat and animal products if you are ethically serious.

The final strand of the Abolitionist Project on earth, however, is free-living animals in nature. And it might seem ecologically illiterate to argue that it is going to be feasible to take care of elephants, zebras, and other free-living animals. After all, say there is starvation in winter: if you start feeding a lot of starving herbivores, all this does is lead, the next spring, to a population explosion, followed by ecological collapse and more suffering than before.

However, what is potentially feasible, if we’re ethically serious, is to micromanage the entire living world. Now, this sounds extremely far-fetched and utopian, but I’ll sketch how it is feasible. Later this century and beyond, every cubic meter of the planet is going to be computationally accessible to surveillance, micro-management and control. And if we want to, we can use fertility regulation and immunocontraception to regulate population numbers – cross-species fertility control – starting off, presumably, with higher vertebrates, elephants for instance. Already now, in the Kruger National Park for example, in preference to the cruel practice of culling, population numbers are controlled by immunocontraception.

So, starting off with higher vertebrates, initially in our wildlife parks and then across the phylogenetic tree, it will be possible to micromanage the living world.

And just as right now, if you were to stumble across a small child drowning in a pond, you would be guilty of complicity in that child’s drowning if you didn’t pull the child out, exactly the same intimacy with the rest of the living world is going to be feasible later this century and beyond.

Now, what about obligate carnivores – predators? Surely it’s inevitable that they’re going to continue to prey on herbivores, so one might intuitively suppose that the abolitionist project could never be completed. But even there, if we’re ethically serious, there are workarounds – in-vitro meat, for instance. Big cats, if they are offered in-vitro meat – catnip-flavored in-vitro meat – are not going to be tempted to chase after herbivores.

Alternatively, a little bit of genetic tweaking, and you no longer have an obligate carnivore.

I’m supposing here that we do want to preserve recognizable approximations of today’s so-called charismatic megafauna – many people are extremely unhappy at the idea that lions or tigers or snakes or crocodiles should go extinct. I’m not personally persuaded that the world would be a worse place without crocodiles or snakes, but if we do want to preserve them, it’s possible to genetically tweak them, or to provide in-vitro meat, so that they don’t actually do any harm to sentient beings.

Some species essentialists would respond that a lion that is no longer chasing, asphyxiating and disemboweling zebras is no longer truly a lion. But one might make the same argument that a Homo sapiens who is no longer beating his rivals over the head, or waging war, or practicing infanticide, slavery and all the other ghastly practices of our evolutionary past – or, for that matter, one who wears clothes – that someone who adopts a more civilized lifestyle is no longer truly human. To which I can only say: good.

And likewise, if there is a living world in which lions are pacifistic – if the lion, so to speak, is lying down with the lamb – I would say that is much more civilized.

Compassionate Biology

See this excerpt from The Antispeciesist Revolution:
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on YouTube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover, even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants, for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming” carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler – and may well be as sentient, if not as sapient, as adult humans. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions.

http://www.hedweb.com/transhumanism/antispeciesist.html


One Big Misconception About Consciousness – Christof Koch

Christof Koch (Allen Institute for Brain Science) discusses Shannon information and its theoretical limitations in explaining consciousness.

“Information Theory misses a critical aspect of consciousness” – Christof Koch

Christof argues that we don’t need observers (other people, God, etc.) to have conscious experiences. Traditional information theory assumes Shannon information, and a big misconception about the structure of consciousness stems from this idea – from assuming that Shannon information is enough to explain consciousness. Shannon information is about “sending information from a channel to a receiver – consciousness isn’t about sending anything to anybody.” So what other kind of information is there?

The ‘information’ in Integrated Information Theory (IIT) does not refer to Shannon information. Etymologically, the word ‘information’ derives from the Latin ‘informare’ – “it refers to information in the original sense of the word ‘informare’ – to give form to” – that is, to give form to a high-dimensional structure.


It’s worth noting that many disagree with Integrated Information Theory – including Scott Aaronson – see here, here and here.


See interview below:

“It’s a theory that proceeds from phenomenology to, as it were, mechanisms in physics.”

IIT is also described in Christof Koch’s ‘Consciousness: Confessions of a Romantic Reductionist’.

Axioms and postulates of integrated information theory

The 5 axioms / essential properties of conscious experience that are foundational to IIT – the intent is to capture the essential aspects of all conscious experience. Each axiom should apply to every possible experience.

  • Intrinsic existence: Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual).
  • Composition: Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
  • Information: Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.
  • Integration: Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book.
  • Exclusion: Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure. Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.

So, does IIT solve what David Chalmers calls the “Hard Problem of consciousness”?

Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

This interview is a short section of a larger interview which will be released at a later date.

Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf
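
The abstract’s claim that even small increasing returns generically produce radical growth has a simple mathematical core. As a hedged illustration (my own sketch, not an equation taken from the paper): if capacity x feeds back into its own growth with an exponent even slightly above one, the solution diverges in finite time rather than growing merely exponentially.

```latex
% Illustrative only: growth with slightly increasing returns (\epsilon > 0)
\frac{dx}{dt} = k\,x^{1+\epsilon}
% Separating variables and integrating from x(0) = x_0 gives
x(t) = \frac{x_0}{\bigl(1 - \epsilon\,k\,x_0^{\epsilon}\,t\bigr)^{1/\epsilon}}
% which diverges in finite time as t \to t^{*} = 1/(\epsilon\,k\,x_0^{\epsilon});
% setting \epsilon = 0 recovers ordinary exponential growth x(t) = x_0 e^{kt}.
```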

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as a subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical. – Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economic growth and social change) (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind, such as humanity being succeeded by posthuman or artificial intelligences, a punctuated equilibrium transition, or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different. (Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
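
To make the differences between these model families concrete, here is a minimal Python sketch (my own illustration with arbitrary constants, not code from the paper) contrasting exponential growth (model A), logistic growth with an inflexion point (model H), and hyperbolic growth that diverges in finite time (models C/I):

```python
import numpy as np

# Toy growth curves, arbitrary constants, for illustration only.

def exponential(t, x0=1.0, r=1.0):
    """Model A: constant relative growth rate."""
    return x0 * np.exp(r * t)

def logistic(t, x0=1.0, r=1.0, K=1000.0):
    """Model H: acceleration up to an inflexion point, then saturation at K."""
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

def hyperbolic(t, x0=1.0, t_star=10.0):
    """Models C/I: increasing returns; diverges as t approaches t_star."""
    return x0 * t_star / (t_star - t)

for t in [0.0, 5.0, 9.0, 9.9]:
    print(f"t={t:4.1f}  exponential={exponential(t):12.1f}  "
          f"logistic={logistic(t):7.1f}  hyperbolic={hyperbolic(t):7.1f}")
```

The qualitative point survives any choice of constants: the exponential curve never looks special locally, the logistic curve has a single inflexion point after which growth decelerates, and the hyperbolic curve exceeds any finite bound before t_star.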


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Juergen Schmidhuber on DeepMind, AlphaGo & Progress in AI

I asked AI researcher Juergen Schmidhuber about his thoughts on progress at DeepMind and about the AlphaGo vs Lee Sedol Go tournament – he provided some initial comments. I will be updating this post with further interview material.

Juergen Schmidhuber: First of all, I am happy about DeepMind’s success, also because the company is heavily influenced by my former students: 2 of DeepMind’s first 4 members and their first PhDs in AI came from my lab – one of them a co-founder, the other the first employee. (Other ex-PhD students of mine joined DeepMind later, including a co-author of our first paper on Atari-Go in 2010.)

Go is a board game where the Markov assumption holds: in principle, the current input (the board state) conveys all the information needed to determine an optimal next move (no need to consider the history of previous states). That is, the game can be tackled by traditional reinforcement learning (RL), a bit like 2 decades ago, when Tesauro used RL to learn from scratch a backgammon player on the level of the human world champion (1994). Today, however, we are greatly profiting from the fact that computers are at least 10,000 times faster per dollar.
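
As a minimal sketch of what traditional RL under the Markov assumption looks like in code (my own toy example, not DeepMind’s or Tesauro’s implementation), a one-step tabular Q-learning update conditions only on the current state and never on the history of play:

```python
import random
from collections import defaultdict

# Toy tabular Q-learning; illustrative only.
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration
Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def choose_action(state, actions):
    """Epsilon-greedy: mostly exploit the current estimate, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def td_update(state, action, reward, next_state, actions):
    """One-step temporal-difference backup. This is valid precisely because,
    under the Markov assumption, next_state summarises everything relevant
    about the past."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

For Go the state space is of course far too large for a lookup table, which is why AlphaGo replaces the table with deep networks, but the underlying Markov logic is the same.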

In the last few years, automatic Go players have greatly improved. To learn a good Go player, DeepMind’s system combines several traditional methods such as supervised learning (from human experts) and RL based on Monte Carlo Tree Search. It will be very interesting to see the system play against the best human Go player, Lee Sedol, in the near future.

Unfortunately, however, the Markov condition does not hold in realistic real world scenarios. That’s why games such as football are much harder for machines than Go, and why Artificial General Intelligence (AGI) for RL robots living in partially observable environments will need more sophisticated learning algorithms, e.g., RL for recurrent neural networks.
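
A hedged sketch of the contrast (illustrative NumPy with untrained random weights, not Schmidhuber’s architecture): in a partially observable environment the agent needs a recurrent hidden state that accumulates information the current observation alone does not carry.

```python
import numpy as np

# Recurrent policy skeleton for a partially observable environment.
# The hidden state h is a learned memory of the observation history;
# weights here are random placeholders that a real agent would train with RL.
rng = np.random.default_rng(0)
OBS_DIM, HIDDEN_DIM, N_ACTIONS = 8, 16, 4
W_in  = 0.1 * rng.normal(size=(HIDDEN_DIM, OBS_DIM))
W_rec = 0.1 * rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM))
W_out = 0.1 * rng.normal(size=(N_ACTIONS, HIDDEN_DIM))

def act(h, obs):
    """Fold the new observation into memory, then act on the memory,
    not on the raw observation alone."""
    h = np.tanh(W_in @ obs + W_rec @ h)
    return h, int(np.argmax(W_out @ h))

h = np.zeros(HIDDEN_DIM)                  # empty memory at episode start
for t in range(5):
    obs = rng.normal(size=OBS_DIM)        # stand-in for a partial observation
    h, action = act(h, obs)
    print(f"step {t}: action {action}")
```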

For a comprehensive history of deep RL, see Section 6 of my survey with 888 references:
http://people.idsia.ch/~juergen/deep-learning-overview.html

Also worth seeing: Juergen’s AMA here.

Juergen Schmidhuber’s website.

The Simpsons and Their Mathematical Secrets with Simon Singh

You may have watched hundreds of episodes of The Simpsons (and its sister show Futurama) without ever realizing that cleverly embedded in many plots are subtle references to mathematics, ranging from well-known equations to cutting-edge theorems and conjectures. That they exist, Simon Singh reveals, underscores the brilliance of the shows’ writers, many of whom have advanced degrees in mathematics in addition to their unparalleled sense of humor.

“A mathematician is a machine for turning coffee into theorems.” – Simon Singh, The Simpsons and Their Mathematical Secrets

While recounting memorable episodes such as “Bart the Genius” and “Homer³,” Singh weaves in mathematical stories that explore everything from π to Mersenne primes, Euler’s equation to the unsolved riddle of P v. NP; from perfect numbers to narcissistic numbers, infinity to even bigger infinities, and much more. Along the way, Singh meets members of The Simpsons’ brilliant writing team – among them David X. Cohen, Al Jean, Jeff Westbrook, and Mike Reiss – whose love of arcane mathematics becomes clear as they reveal the stories behind the episodes.
With wit and clarity, displaying a true fan’s zeal, and replete with images from the shows, photographs of the writers, and diagrams and proofs, The Simpsons and Their Mathematical Secrets offers an entirely new insight into the most successful show in television history.

Buy the book on Amazon.

An astronomer, a physicist, and a mathematician (it is said) were holidaying in Scotland. Glancing from a train window, they observed a black sheep in the middle of a field. “How interesting,” observed the astronomer, “all Scottish sheep are black!” To which the physicist responded, “No, no! Some Scottish sheep are black!” The mathematician gazed heavenward in supplication, and then intoned, “In Scotland there exists at least one field, containing at least one sheep, at least one side of which is black.” – Simon Singh, The Simpsons and Their Mathematical Secrets


Simon Singh is a British author who has specialised in writing about mathematical and scientific topics in an accessible manner. His written works include Fermat’s Last Theorem (in the United States titled Fermat’s Enigma: The Epic Quest to Solve the World’s Greatest Mathematical Problem), The Code Book (about cryptography and its history), Big Bang (about the Big Bang theory and the origins of the universe), Trick or Treatment? Alternative Medicine on Trial (about complementary and alternative medicine) and The Simpsons and Their Mathematical Secrets (about mathematical ideas and theorems hidden in episodes of The Simpsons and Futurama).

Singh has also produced documentaries and works for television to accompany his books. He is a trustee of NESTA and of the National Museum of Science and Industry, and co-founded the Undergraduate Ambassadors Scheme.


As a society, we rightly adore our great musicians and novelists, yet we seldom hear any mention of the humble mathematician. It is clear that mathematics is not considered part of our culture. Instead, mathematics is generally feared and mathematicians are often mocked. – Simon Singh, The Simpsons and Their Mathematical Secrets


Julian Savulescu – Government & Surveillance

If you increase the altruistic motivation of people, you decrease the risk that they will negligently fail to consider the possible harmful effects of their behaviour on their fellow-beings. Being concerned about avoiding such risks is part of what having altruistic concern for these beings consists in. Moreover, the advance of technology will in all probability bring along more effective mechanisms of surveillance, and it is easier for these to pick up people who are negligent than evil-doers who are intent on beating them.

“The nutshell: Human societies have grown larger, more diverse, and more technologically complex, and as a result, our moral compasses are no longer up to the task of guiding us, argue Oxford University’s Persson (a philosopher) and Savulescu (an ethicist)—and we’re in danger of destroying ourselves. The severity of the problem demands an equally severe solution: biomedical moral enhancement and increased government surveillance of citizens.” – Slate

Julian Savulescu (born December 22, 1963) is an Australian philosopher and bioethicist. He is Uehiro Professor of Practical Ethics at the University of Oxford, Fellow of St Cross College, Oxford, Director of the Oxford Uehiro Centre for Practical Ethics, Sir Louis Matheson Distinguished Visiting Professor at Monash University, and Head of the Melbourne–Oxford Stem Cell Collaboration, which is devoted to examining the ethical implications of cloning and embryonic stem cell research. He is the editor of the Journal of Medical Ethics, which is ranked as the #1 journal in bioethics worldwide by Google Scholar Metrics as of 2013. In addition to his background in applied ethics and philosophy, he also has a background in medicine and completed his MBBS (Hons) at Monash University. He completed his PhD at Monash University, under the supervision of renowned bioethicist Peter Singer. Published Jan 30, 2014.


Metamorphogenesis – How a Planet can produce Minds, Mathematics and Music – Aaron Sloman

The universe is made up of matter, energy and information, interacting with each other and producing new kinds of matter, energy, information and interaction.
How? How did all this come out of a cloud of dust?
In order to find explanations we first need much better descriptions of what needs to be explained.

By Aaron Sloman
Abstract – and more info – Held at Winter Intelligence Oxford – Organized by the Future of Humanity Institute

Aaron Sloman

This is a multi-disciplinary project attempting to describe and explain the variety of biological information-processing mechanisms involved in the production of new biological information-processing mechanisms, on many time scales, starting from the earliest days of the planet, when there was no life, only physical and chemical structures, including volcanic eruptions, asteroid impacts, solar and stellar radiation, and many other physical/chemical processes (or perhaps starting even earlier, when there was only a dust cloud in this part of the solar system?).

Evolution can be thought of as a (blind) Theorem Prover (or theorem discoverer).
– Proving (discovering) theorems about what is possible (possible types of information, possible types of information-processing, possible uses of information-processing)
– Proving (discovering) many theorems in parallel (including especially theorems about new types of information and new useful types of information-processing)
– Sharing partial results among proofs of different things (Very different biological phenomena may share origins, mechanisms, information, …)
– Combining separately derived old theorems in constructions of new proofs (One way of thinking about symbiogenesis.)
– Delegating some theorem-discovery to neonates and toddlers (epigenesis/ontogenesis). (Including individuals too under-developed to know what they are discovering.)
– Delegating some theorem-discovery to social/cultural developments. (Including memes and other discoveries shared unwittingly within and between communities.)
– Using older products to speed up discovery of new ones (Using old and new kinds of architectures, sensori-motor morphologies, types of information, types of processing mechanism, types of control & decision making, types of testing.)

The “proofs” of discovered possibilities are implicit in evolutionary and/or developmental trajectories.

They demonstrate the possibility of:
– development of new forms of development,
– evolution of new types of evolution,
– learning of new ways to learn,
– evolution of new types of learning (including mathematical learning: working things out without requiring empirical evidence),
– evolution of new forms of development of new forms of learning (why can’t a toddler learn quantum mechanics?),
– new forms of learning supporting new forms of evolution,
– new forms of development supporting new forms of evolution (e.g. postponing sexual maturity until mate-selection, mating and nurturing can be influenced by much learning),
– and ways in which social and cultural evolution add to the mix.

These processes produce new forms of representation, new ontologies and information contents, new information-processing mechanisms, new sensory-motor morphologies, new forms of control, new forms of social interaction, new forms of creativity, … and more. Some may even accelerate evolution.

A draft growing list of transitions in types of biological information-processing.

An attempt to identify a major type of mathematical reasoning with precursors in perception and reasoning about affordances, not yet replicated in AI systems.

Even in microbes I suspect there’s much still to be learnt about the varying challenges and opportunities faced by microbes at various stages in their evolution, including new challenges produced by environmental changes and new opportunities (e.g. for control) produced by previous evolved features and competences — and the mechanisms that evolved in response to those challenges and opportunities.

Example: which organisms were first able to learn about an enduring spatial configuration of resources, obstacles and dangers, only a tiny fragment of which can be sensed at any one time?
What changes occurred to meet that need?

Use of “external memories” (e.g. stigmergy)
Use of “internal memories” (various kinds of “cognitive maps”)

More examples to be collected here.