Interview on quantum computation with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.
Scott Aaronson: Okay so – Hi, I’m Scott Aaronson. I’m a computer science professor at the University of Texas at Austin and my main interest is the capabilities and limits of quantum computers, and more broadly what computer science and physics have to tell each other. And I got interested in it I guess because it was hard not to be – because as a teenager it just seemed clear to me that the universe is a giant video game and it just obeys certain rules, and so if I really wanted to understand the universe maybe I could ignore the details of physics and just think about computation.
But then, with the birth of quantum computing and the dramatic discoveries in the mid-1990s (like Shor’s algorithm for factoring huge numbers), it became clear that physics actually changes the basic rules of computation – so that was something that I felt like I had to understand. And 20 years later we’re still trying to understand it, and we may also be able to build some devices that can outperform classical computers – namely quantum computers – and use them to do some interesting things.
But to me that’s really just icing on the cake; really I just want to understand how things fit together. Well, to tell you the truth, when I first heard about quantum computing (I think from reading some popular article in the mid-90s about Shor’s algorithm, which had only recently been discovered), my first reaction was: this sounds like obvious hogwash; this sounds like some physicists who just do not understand the first thing about computation, and they’re just inventing some physics proposal that sounds like it just tries every possible solution in parallel. But none of these things are going to scale, and in computer science there have been decades of experience with that: of people saying, well, why don’t you build a computer using a bunch of mirrors? Or using soap bubbles? Or using folding proteins?
And there are all kinds of ideas that on paper look like they could evaluate an exponential number of solutions in only a linear amount of time, but they’re always idealizing something. When you examine them carefully enough, you find that the amount of energy or other resources blows up on you exponentially, or the precision with which you would need to measure becomes exponentially fine – something becomes totally unrealistic – and I thought the same must be true of quantum computing. But in order to be sure I had to read something about it.
So while I was working over a summer at Bell Labs, doing work that had nothing to do with quantum computing, my boss was nice enough to let me spend some time learning about and reading up on the basics of quantum computing – and that was really a revelation for me, because I came to accept that quantum mechanics is the real thing. It is a thing of comparable enormity to the basic principles of computation – you could say the principles of Turing – and it is exactly the kind of thing that could modify some of those principles. But the biggest surprise of all, I think, was that despite not being a physicist – not having any skill with partial differential equations or the other tools of the physicist – I could actually understand something about quantum mechanics.
And ultimately, to learn the basic rules of how a quantum computer would work and start thinking about what they would be good for – quantum algorithms and things like that – it’s enough to be conversant with vectors and matrices. So you need to know a little bit of math, but not that much. You need to know linear algebra, and that’s about it.
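To illustrate that point – that the basic formalism really is just vectors and matrices – here is a minimal sketch in plain Python (the variable names and helper function are mine, not anything standard): a qubit’s state is a unit vector of two amplitudes, a gate is a 2×2 unitary matrix, and measurement probabilities come from squaring the amplitudes’ magnitudes.

```python
import math

# A qubit's state is a unit vector of two complex "amplitudes".
ket0 = [1, 0]          # the state |0>
ket1 = [0, 1]          # the state |1>

# A quantum gate is a unitary matrix; here, the Hadamard gate.
s = 1 / math.sqrt(2)
H = [[s,  s],
     [s, -s]]

def apply(gate, state):
    """Matrix-vector multiplication: how every gate acts on a state."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

plus = apply(H, ket0)                 # equal superposition of |0> and |1>
probs = [abs(a) ** 2 for a in plus]   # Born rule: probability = |amplitude|^2
print(probs)                          # ~[0.5, 0.5]
```

That really is the whole toolkit: states are vectors, gates are matrices, and probabilities are squared magnitudes.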
And I feel like this is a kind of secret that gets buried in almost all the popular articles; they make it sound like quantum mechanics is just this endless profusion of counterintuitive things. That it’s: particles can be in two places at once, and a cat can be both dead and alive until you look at it – and then why is that not just a fancy way of saying, well, either the cat’s alive or dead and you just don’t know which one until you look? They never quite explain that part. And particles can have spooky action at a distance and affect each other instantaneously, and particles can tunnel through walls! It all sounds hopelessly obscure, and there’s no hope for anyone who’s not a PhD in physics to understand any of it.
But the truth of the matter is that there’s this one counterintuitive hump that you have to get over, which is a certain change to – or generalization of – the rules of probability. And once you’ve gotten that, then all the other things are just different ways of talking about, or different manifestations of, that one change. And a quantum computer in particular is just a computer that tries to take advantage of this one change to the rules of probability that the physicists discovered in the 1920s was needed to account for our world. And so that was really a revelation for me – that even computer scientists and math people, people who are not physicists, can actually learn this and start contributing to it – yeah!
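That one change to the rules of probability can be seen in a few lines of plain Python (a sketch of mine, not anything from the interview): probabilities are replaced by amplitudes, which can be negative and therefore cancel. Applying the Hadamard “coin flip” once randomizes a bit; applying it twice, instead of staying random, interferes back to certainty:

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]  # Hadamard gate: note the negative amplitude

def apply(gate, state):
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

# Classically, "randomize twice" would stay random. With amplitudes,
# the two paths to |1> carry opposite signs and cancel (interference):
state = apply(H, apply(H, [1, 0]))
print([round(abs(a) ** 2, 10) for a in state])  # [1.0, 0.0] -- back to |0> with certainty
```

That sign cancellation, impossible with ordinary probabilities, is the one counterintuitive hump; everything else follows from it.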
Adam Ford: So it’s interesting that often when you try to pursue an idea, the practical gets in the way – we try to get to the ideal without actually considering the practical – and they feel like enemies. Should we be letting the ideal be the enemy of the practical?
Scott Aaronson: Well I think that from the very beginning it was clear that there is a theoretical branch of quantum computing which is where you just assume you have as many of these quantum bits (qubits) as you could possibly need, and they’re perfect; they stay perfectly isolated from their environment, and you can do whatever local operations on them you might like, and then you just study how many operations would you need to factor a number, or solve some other problem of practical importance. And the theoretical branch is really the branch where I started out in this field and where I’ve mostly been ever since.
And then there’s the practical branch, which asks: well, what will it take to actually build a device that instantiates this theory – where we have qubits that are actually the energy levels of an electron, or the spin states of an atomic nucleus, or are otherwise somehow instantiated in the physical world. And they will be noisy, they will be interacting with their environment – we will have to make heroic efforts to keep them sufficiently isolated from their environments – which is needed in order to maintain their superposition state. How do we do that?
Well, we’re gonna need some kind of fancy error-correcting codes to do that, and there are theoretical questions there as well: how do you design those error-correcting codes?
But there are also practical questions: how do you engineer a system where the error rates are low enough that these codes can even be used at all – so that if you try to apply them, you won’t simply be creating even more errors than you’re fixing? What should be the physical basis for qubits? Should it be superconducting coils? Should it be ions trapped in a magnetic field? Should it be photons? Should it be some new topological state of matter? Actually all four of those proposals, and many others, are being pursued now!
So I would say that until fairly recently in the field – like five years ago or so – the theoretical and the practical branches were pretty disjoint from each other; they were never enemies, so to speak. I mean we might poke fun at each other sometimes, but we were never enemies. The field always sort of rose or fell as a whole and we all knew that. But we just didn’t have a whole lot to scientifically say to each other, because the experimentalists were just trying to get one or two qubits to work well – and they couldn’t even do that much – and we theorists were thinking about: well, suppose you’ve got a billion qubits, or some arbitrary number, what could you do? And what would still be hard to do even then?
A lot of my work has actually been about the limitations of quantum computers – or, as I like to say, the study of what you can’t do even with computers that you don’t have. And only recently have the experimentalists finally gotten the qubits to work pretty well in isolation, so that now it finally makes sense to start to scale things up – not yet to a million qubits, but maybe to 50 qubits, maybe to 60, maybe to a hundred. This, as it happens, is what Google and IBM and Intel and a bunch of startup companies are trying to do right now. And some of them are hoping to have devices within the next year or two that might or might not do anything useful, but if all goes well we hope will at least be able to do something interesting – in the sense of something that would be challenging for a classical computer to simulate, and that at least proves the point that we can do something this way that is beyond what classical computers can do.
And so as a result, the most nitty-gritty experimentalists are now actually talking to us theorists, because now they need to know – not just as a matter of intellectual curiosity, but as a fairly pressing practical matter – once we get 50 or 100 qubits working, what do we do with them? What do we do with them, first of all, that is hard to simulate classically? How sure are you that there’s no fast classical method to do the same thing? How do we verify that we’ve really done it? And is it useful?
And ideally they would like us to come up with proposals that actually fit the constraints of the hardware that they’re building. You could say that eventually none of this should matter – eventually a quantum programmer should be able to pay as little attention to the hardware as a classical programmer today pays to the details of the transistors.
But in the near future, when we only have 50 or 100 qubits, you’re gonna have to make the maximum use of each and every qubit that you’ve got, and the actual details of the hardware are going to matter, and the result is that even we theorists have had to learn about these details in a way that we didn’t before.
There’s been a sort of coming together of the theory and practical branches of the field just in the last few years that to me has been pretty exciting.
Adam Ford: So you think we will have something equivalent to functional programming for quantum computing in the near future?
Scott Aaronson: Well, there actually has been a fair amount of work on the design of quantum programming languages. There’s actually a bunch of them out there now that you can download and try out if you’d like. There’s one called Quipper, there’s another one called Q#, from Microsoft, and there are several others. Of course we don’t yet have very good hardware to run the programs on; mostly you can just run them in classical simulation, which naturally only works well for up to about 30 or 40 qubits, and then it becomes too slow. But if you would like to get some experience with quantum programming, you can try these things out today, and many of them do try to provide higher-level functionalities, so that you’re not just doing the quantum analog of assembly language programming, but you can think in higher-level modules, or you can program functionally. I would say that in quantum algorithms we’ve mostly just been doing theory and we haven’t been implementing anything, but we have had to learn to think that way. If we had to think in terms of each individual qubit, each individual operation on one or two
qubits, well, we would never get very far, right? And so we have to think in higher-level terms, like certain modules that we know can be done – one of them is called the Quantum Fourier Transform, and that’s actually the heart of Shor’s famous algorithm for factoring numbers (it has other applications as well). Another one is called Amplitude Amplification; that’s the heart of Grover’s famous algorithm for searching long lists of numbers
in about the square root of the number of steps that you would need classically, and that’s also like a quantum algorithm design primitive that we can just kind of plug in as a black box and it has many applications.
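To make the square-root speedup concrete, here is a minimal sketch in plain Python of Grover’s algorithm on a toy search space of 8 items (the index `marked` is an arbitrary choice of mine). Each iteration is exactly the amplitude-amplification primitive just described: flip the sign of the marked item’s amplitude, then reflect all amplitudes about their mean.

```python
import math

N = 8               # search space of 8 items (3 qubits' worth)
marked = 5          # hypothetical index we are searching for

# Start in the uniform superposition: amplitude 1/sqrt(N) on every item.
amps = [1 / math.sqrt(N)] * N

# Grover iteration = oracle (sign flip on the marked item)
#                  + "inversion about the mean" (amplitude amplification).
iterations = round((math.pi / 4) * math.sqrt(N))   # ~sqrt(N) steps, vs ~N classically
for _ in range(iterations):
    amps[marked] *= -1                    # oracle marks the solution with a phase flip
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]   # reflect every amplitude about the mean

probs = [a * a for a in amps]
print(iterations, round(probs[marked], 4))   # 2 iterations; marked item found with ~94.5% probability
```

After just 2 iterations (versus an average of 4 classical queries for 8 items), measuring would return the marked item with probability about 0.945; the gap widens as the square root for larger N.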
So we do think in these higher level terms, but there’s a different set of higher level abstractions than there would be for classical computing – and so you have to learn those. But the basic idea of decomposing a complicated
problem by breaking it down into its subcomponents – that’s exactly the same in quantum computing as it is in classical computing.
Adam Ford: Are you optimistic with regards to quantum computing in the short to medium term?
Scott Aaronson: You’re asking what I’m optimistic about – so, I mean, I feel like the field has made amazing progress, both on the theory side and on the experimental side. We’re not there yet, but we know a lot more than we did a decade ago. Some of what were my favorite open problems as a theorist a decade ago have now been resolved – some of them within the last year, actually. And on the hardware side, the qubits are not yet good enough to build a scalable quantum computer – in that sense the skeptics can clearly legitimately say we’re not there yet – well, no duh, we’re not. Okay, but: if you look at the coherence times of the qubits, you look at what you can do with them, and you compare that to where they were 10 years ago or 20 years ago, there’s been orders-of-magnitude progress. So the analogy that I like to make: Charles Babbage laid down the basic principles of classical computing in the 1820s, right? I mean, not with as much mathematical rigor as Turing would supply later, but the basic ideas were there. He had what today we would call a design for a universal computer.
So now imagine someone then saying, ‘Well, so when is this Analytical Engine gonna get built? Will it be in the 1830s, or will it take all the way until the 1840s?’ Well, in this case it took more than a hundred years for a technology to be invented – namely the transistor – that really fully realized Babbage’s vision. I mean, the vacuum tube came along earlier, and you could say it partially realized that vision, but it was just not reliable enough to really be scalable in the way that the transistor was. And optimistically, now we’re in the very, very early vacuum-tube era of quantum computing. We don’t yet have the quantum computing analog of the transistor, and people don’t even agree about which technology is the right one to scale up yet. Is it superconducting? Is it trapped ions? Is it photonics? Is it topological matter? So they’re pursuing all these different approaches in parallel. The partisans of each approach have what sound like compelling arguments as to why none of the other approaches could possibly scale. I hope that they’re not all correct. People have only just recently gotten to the stage where one or two qubits work well in isolation, and where it makes sense to try to scale up to 50 or 100 of them and see if you can get them working well together at that kind of scale.
And so I think the big thing to watch for in the next five to ten years is what’s been saddled with the somewhat unfortunate name of ‘quantum supremacy’ (and this term was coined before Trump, I hasten to say). This is just a term for doing something with a quantum computer that’s not necessarily useful, but that at least is classically hard – something that, as I was saying earlier, proves the point that you can do something that would take a lot longer to simulate with a classical computer. And this is the thing that Google and some others are going to take their best shot at within the next couple of years. What puts it in the realm of possibility is that a mere 50 or 100 qubits, if they work well enough, should already be enough to get us this. In principle you may be able to do this without needing error correction. Once you need error correction, that enormously multiplies the resources you need – even the simplest demonstration of what’s called ‘fault-tolerant computing’ might take many thousands of physical qubits – even though everyone agrees that ultimately, if you want to scale up to realize the true promise of quantum computing, or let’s say to threaten our existing methods of cryptography, then you’re going to need this fault tolerance. But that I expect we’re not gonna see in the next five to ten years.
If we do see it, that will be a huge shock – as big a shock as it would have been if you had told someone in 1939 that there was going to be a nuclear weapon in six years. In that case there was a world war that, you could say, accelerated the timeline from what it would otherwise have been. In this case I hope there won’t be a world war that accelerates this timeline. But my guess would be that if all goes well, then quantum supremacy might be achievable within the next decade, and I hope that after that we could start to see some initial applications of quantum computing, which will probably be some very, very specialized ones – some things that we can already get with a hundred or so non-error-corrected qubits. And by necessity these are going to be very special things – they might mostly be physics simulations, or simulations of some simple chemistry problems.
I actually have a proposed application for near-term quantum computers, which is to generate cryptographically secure random numbers – that is, random numbers that you could prove to a skeptic really were generated randomly. It turns out that even a 50- or 60-qubit quantum computer should already be enough to give us that. But true scalable quantum computing – the kind that could threaten cryptography, and that could also speed up optimization problems and things like that – will probably require error correction. I could be pleasantly surprised, but I’m not optimistic about that part becoming real in the next five to ten years. Still, since everyone likes an optimist, I guess I’ll try to be optimistic that we will take big steps in that direction and maybe even get there within my lifetime.