The following is a rough transcript which has not been revised by The Jim Rutt Show or by Seth Lloyd. Please check with us before using any quotations from this transcript. Thank you.

**Jim**: Today’s guest is Seth Lloyd, professor of mechanical engineering at MIT, and author of the book Programming the Universe. Hi Seth, great to have you on.

**Seth**: Hi, Jim. It’s great to be here.

**Jim**: Yeah. This is going to be an interesting conversation. Talk about going from the micro sphere to the macro sphere, and everything in between. I read your book, Programming the Universe, when it first came out quite a while ago, and I reread it in preparation for this episode. And there’s an awful lot in that book. We’re not going to cover everything from that book, but we’re going to cover some of it. And we’re going to talk a fair amount about what is quantum mechanics, and what is quantum computing. What’s the state of the art, et cetera. So let’s get started. Let’s start with some basics. How is the quantum world different from the classical world?

**Seth**: It’s smaller, and it’s weirder. So quantum mechanics describes how things behave at their most fundamental level, the level of atoms, molecules, elementary particles, and the main thing to remember about it is that things behave differently, in a weird, fancy, strange way. So I sometimes say quantum mechanics is kind of the James Brown of sciences. Whatever’s there, it’s got to be funky.

**Jim**: Interesting. Yeah. One of my long-term hobbies, actually, is reading in the field known as quantum interpretation. I fell into this weird habit by reading a book called Quantum Reality, by a guy named Nick Herbert.

**Seth**: Uh-huh (affirmative). I’m familiar with that book. Yeah, absolutely.

**Jim**: Yeah, and ever since, I’ve kept up with the evolution of the field, and it is interesting. Here’s something so fundamental in our sciences, and yet there’s some significant number, like five or 10, of different interpretations of what it all means. And none of them are inconsistent with the experimental results yet, right? Including one of my favorites: in the usual shorthand of quantum mechanics, there’s inherent randomness in the positions, the spins, the various things that we measure. And yet there’s still a school, and it seems like maybe it’s even gaining strength at the moment, that says quantum phenomena may not actually be random. It’s just that we don’t understand the mechanisms well enough. Maybe talk a little bit about quantum interpretations, and your views on what’s going on there.

**Seth**: Sure, sure. Jim, that’s a subject dear to my heart because, one, the way I got into working on ideas of information and quantum mechanics came from doing a master’s in philosophy of science at Cambridge University, studying foundations of quantum mechanics and quantum interpretations. So I share your interest in it. And you’re right, there are at least 10 different interpretations out there. And the interesting thing about them is they’re all consistent with experiment, as you say, and each one, in its own way, is deeply unsatisfactory. If any one of them were completely satisfactory, then there’d be only one interpretation, but none of them are satisfactory. And I interpret this to mean that, really, it’s hard to come up with a good interpretation of quantum mechanics. In fact, my belief is that we will never have an interpretation of quantum mechanics that satisfies ordinary human beings, or smart human beings, or philosophers of science, simply because quantum mechanics is so strange and counterintuitive that there’s no way to interpret that strangeness and counterintuitiveness away.

**Jim**: That’s certainly one of the meta-theories, right? Just shut up and calculate, basically. We’ll never understand it.

**Seth**: No, no, I’m not. I’m not advocating that. I think we should think about it, but we should be aware that we’re not likely to get a good solution. I mean, the basic problem is that, as you mentioned, there’s a tension between determinism and indeterminism, or chancy, probabilistic behavior. The Schrödinger equation, and the equations of motion that govern quantum mechanics, are themselves deterministic. So you set them running. You say, “Here’s the situation now. What’s going to happen in the future?” And then the equations just trundle on ahead and they say, “Here’s the state of the system in the future, completely determined by the state in the past.” But when you actually look at the predictions for the results of experiments, and you look at what you observe, then you find that those predictions are probabilistic, and not determined. So how do you reconcile those two features? The fact that the underlying theory is, in some sense, deterministic, and yet it makes probabilistic predictions for what’s going to happen. That’s pretty tough.

**Jim**: Interesting. Now, do you have a view? Do you have an intuition, shall we say, on this randomness versus determinism question? Or is it just so damn hazy, it’s not worth speculating, in your view?

**Seth**: Oh no, I do. I have a view which I find reasonably satisfying. As I say, no views are satisfying, or perfectly satisfying. But as you know well, from reading about this, the original interpretation that people came up with, which came largely from Niels Bohr, is called the Copenhagen interpretation. And Bohr said, “Okay, the underlying laws are deterministic, but these just describe microscopic phenomena. When you get to macroscopic things like measurement devices that are making measurements, then these are categorically different from the microscopic things, so their behaviors can be different.” And that’s pretty annoying because, well, our measurement apparatus is made up of atoms and molecules, things behaving according to the laws of quantum mechanics. Why shouldn’t it also obey the laws of quantum mechanics? So that’s one interpretation.

**Seth**: And then another very commonly used interpretation, which is pretty wacky, but is adopted by many people, is called the many-worlds interpretation, which says, “Yeah, the underlying laws are deterministic. And so you set this up, and the universe evolves in a deterministic fashion.” But every time you make a measurement, or look at something happening that appears to be probabilistic, the universe splits into two pieces, and you only get one of them. And which one you get is determined by the laws of chance. So it’s probabilistic. And that’s okay, that explains what’s going on. As you say, it matches what we see experimentally, but then you have this [inaudible 00:06:21], every time you make a measurement, this whole other piece of the universe just splits off, and you’re never going to see it again. And it constitutes another world different from our world. That’s also not very satisfactory.

**Jim**: Yep. It’s crazy. In fact, I had a very intense conversation with Murray Gell-Mann one day, when I was out at the Santa Fe Institute. It started at lunch, and then we ended up walking down to his office, and talking for about two hours. And we went all around these various theories. And he finally used some words which I found that he probably hasn’t used with anybody else. At least, I haven’t found anybody else who says they ever heard these words from him. He said, “Jim, here’s one way to think about it: take the many-worlds interpretation, do the math as if many worlds, but by some mechanism we don’t understand, only one of those worlds is actually realized. The rest just don’t happen.” He said that’s kind of a layman’s way to describe his own views on quantum interpretation. What do you think of that?

**Seth**: Yeah. No, I agree with that. Well, I did my first postdoc with Murray, and we spent a lot of time working on what’s called the decoherent histories formulation of quantum mechanics, which can be thought of as a version of many worlds. But instead of just looking at the world now, to say we split up into these different worlds, we look at the whole history of the universe, past and future. And so then we have one past history, but our history can branch off into many other histories in the future. Murray and I were pretty much on the same page about this. So I actually agree with that. And I have a modest and mild proposal about interpretation here. I think that a lot of the problem that people, particularly physicists, have with many worlds is that it’s very hard to believe in the existence of these other worlds, in which everything is radically different from the way it is in our world.

**Seth**: I mean, are they real, or are they not? We have our world about which we have empirical evidence. That’s the case: I had a delicious bowl of oatmeal with homemade rhubarb compote this morning, and in another world, I had a delicious fried egg. But I’m in this world, right? So I know that I have information about this world, that things are the case. So what distinguishes our world from these other worlds is we have evidence about what is the case in our world. Well, I like to think of this in the following way. And physicists don’t like this in general, but philosophers are okay with it. So physicists want to believe there’s only one thing that’s real. And if it’s there in the wave function of the universe, by God, it’s real. The problem with that, for many worlds, is then you’ve got all these other worlds that are, by definition, real, because you just declared them to be so.

**Seth**: So my simple solution to this, which of course… many people won’t like it, and that’s okay, is that really, we should distinguish between empirical reality, things that we know to be the case because we have evidence that they’re the case, and then this kind of wave-function reality, being real in the wave function of the universe. Because the wave function of the universe contains all these other worlds, where things are, by definition, empirically different from the way they are in our world. And so these are different things. We have our world that we see, that we know to be the case. And that’s one piece of the whole wave function of the universe. And then the wave function contains all of these other worlds, which are real in a wave-function kind of way, but not in an empirical fashion. And that, for me, is good enough.

**Jim**: Interesting. Interesting fudge. I’ll have to try to get my head around that later.

**Seth**: It’s not a fudge. I’m simply saying that physicists want to say there’s only one kind of reality.

**Jim**: Right.

**Seth**: But I’m saying, “Okay, there’s reality as it exists in the wave function, and then there’s empirical reality, which refers to one part of the wave function.” I’m not fudging, I’m just being precise.

**Jim**: Okay. I’ll retract fudge, and say, “That’s yet another interpretation. I’m going to try and get my head around it later.” Let’s talk about another phenomenon, which will turn out to be extremely important as we dive into quantum computing, which is entanglement. Nonlocality. Could you talk about that a little bit?

**Seth**: Absolutely. Yeah. Entanglement is one of those James Brown features. I was at a James Brown concert long ago, and James Brown finished a song, and then somebody said, “James, what are you going to play next?” And he said, “I don’t know, but whatever it is, it’s got to be funky.” And so, as I said, quantum mechanics is the James Brown of sciences. It’s very funky. And entanglement is tied to the central funkitude of quantum mechanics. So entanglement is a very funny feature. Well, funny, I don’t know about in the ha-ha sense, but a strange and counterintuitive feature of quantum mechanics, where things can be correlated with each other, or have information about each other, and they have strictly more information than you’re allowed to have classically. And it shows up like this: say you have an entangled pair of particles, let’s say two electrons, and they have spin, and you can say one is spinning clockwise, the other is spinning counterclockwise.

**Seth**: They can be in a situation where, if you look at each electron on its own, its spin, clockwise or counterclockwise, is completely not determined. So if you measure it, and you say, “Hey, are you spinning clockwise or counterclockwise?”, by chance it will say, “Oh, I’m spinning clockwise.” Then if you measure the other electron, you’ll find that the spin is always anticorrelated. So it’s spinning the opposite way. If the first electron is spinning clockwise, the second one is spinning counterclockwise. Or if you measure the first one and find it spinning counterclockwise, you’ll find the second one is spinning clockwise. Now what’s weird about this is that, until you actually make the measurement, the electron is spinning neither clockwise nor counterclockwise. It’s spinning both clockwise and counterclockwise at the same time, in some weird and funky quantum mechanical fashion that nobody has a good intuition for.

**Seth**: And so when you measure the first electron, it has to, as it were, declare itself to say, “Oh, I’m clockwise.” But as soon as it declares itself to be clockwise, you know that if you were to go and measure the other electron, you’d find it’s counterclockwise. So there’s some weird sense in this entanglement, as it’s called, that these two electrons are in an indeterminate, but anticorrelated state. And somehow, if you make a measurement on one, thereby making that electron declare itself to be in a determined state, let’s say clockwise, then at the same instant, you know the other electron has declared itself to be counterclockwise. And this is weird and strange. Einstein called this spooky action at a distance. Actually, he called it in German, [foreign language 00:13:18], which sounds more impressive. And for many years, people thought, “Oh my goodness, this is terrible. Would it allow instantaneous communication from one electron to another?” Though, as it turns out, it doesn’t allow communication. It’s merely some funny way in which the two electrons are more intimate with each other.
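The anticorrelated pair Seth describes can be sketched in a few lines. This is an illustrative toy, not anything from the episode: it samples measurement outcomes of the entangled two-electron state (|01⟩ + |10⟩)/√2, writing 0 for clockwise and 1 for counterclockwise.

```python
import random

# Joint state (|01> + |10>)/sqrt(2): neither electron alone has a
# definite spin, yet the two measured outcomes are always opposite.
amp = 2 ** -0.5
state = {"01": amp, "10": amp}    # amplitudes of the joint state

def measure(state):
    """Sample one joint outcome with probability |amplitude|**2."""
    outcomes = list(state)
    weights = [abs(a) ** 2 for a in state.values()]
    return random.choices(outcomes, weights)[0]

for _ in range(5):
    first, second = measure(state)
    print(first, second)          # always "0 1" or "1 0", never equal
```

Each run yields one of the two anticorrelated outcomes at random, which is all an observer ever sees; the indeterminacy lives in the amplitudes, not the records.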

**Jim**: Interesting. Yeah. And in fact, for people who are interested in this, your book actually does the best job I’ve ever seen of explaining why you can’t cheat and use entanglement to send information over a distance. But it’s an easy fallacy to say, “It looks like faster-than-light communication. We have two electrons that were separated. One went to Alpha Centauri, one stayed on Earth. You did a measurement on Alpha Centauri, and we could tell some information here on Earth.” Your book does a great job explaining why that can’t happen. Mother Nature just does not seem to want us to hack the speed of light.

**Seth**: I believe this to be a good thing.

**Jim**: Not just a good thing. It’s the law, right?

**Seth**: If we talk later on about time travel, I can qualify that statement.

**Jim**: Oh dear. We may not get there. We’ll see. Now, closely related to entanglement is the concept of coherence. Both are probably rooted in the deeper idea of superposition. Talk a little bit about coherence, and how it’s similar to, and different from, entanglement. Again, that’s going to be important as we talk about quantum computing.

**Seth**: Yeah. So indeed, coherence is… Well, coherence, when referring to human beings, means that they’re making a coherent case for what they’re doing. And I’m not going to claim that I’m going to be coherent talking about coherence, but I’ll give it a shot. So in quantum mechanics, the notion of coherence comes from this feature: when quantum mechanics was first formalized by Schrödinger and Heisenberg in the 1920s, it was about waves. So Schrödinger came up with this wave equation, which said, “Oh, look. Each situation, each state, for instance, an electron spinning clockwise or counterclockwise, corresponds to a wave.” And if you think of waves of water, or waves of sound, waves of water are the surface of the water wiggling up and down in a periodic and regular fashion. And so waves have a notion of coherence in the sense that, if you think of a water wave that’s undulating up and down, it’s got a regular way of undulating up and down. And when you have two waves that collide with each other, they can interfere. The parts where the waves are both high become extra high. The parts where the waves are both low become extra low. And the parts where one wave is high and the other wave is low cancel each other out.

**Seth**: So the waves interfere with each other in a way that’s coherent. That’s how it’s described. And all it really means is that the amplitudes of the waves, how high or low they are, just add up together in a regular and coherent fashion. So coherence in quantum mechanics refers to this wave-like nature of quantum mechanics. And the notion of superposition that you mentioned before simply means that when you take two waves, corresponding to two possible states, you just add them together, and the resulting wave is the sum of the two waves. So mathematically, and actually, even intuitively, visualizing it, that’s what’s going on. So you have states, or waves; coherence means that the waves can add up in a systematic fashion; and superposition means that, if you have a state made of two waves, you just add them together. So that actually doesn’t sound so bad.

**Seth**: The problem is when you try to relate it to the behavior of things like electrons, and then square that with your macroscopic intuition. So for example, in quantum mechanics, the amplitude of the wave is related to where you’re likely to find a particle. So if I have the wave corresponding to the electron, and the wave is really big over here, then that means if I make a measurement to say, “Where is the electron?”, then it’s going to be likely to be over here. Or if the wave is really big over there, and I measure to see where the electron is, then I’m likely to find the electron over there. Well, that doesn’t sound so bad. It’s okay, that’s something to do with these probabilities of measurement.

**Seth**: But the problem, because of this principle of superposition in quantum mechanics, is that if you have two waves, not only can you add them up, but the sum of these two waves is a perfectly reasonable state for the electron. So the wave with the electron over here, okay? Wave with the electron over there. That’s fine. But when you add them up, you have a wave where there’s a big amplitude for the electron being over here, and a big amplitude for the electron being over there. And what does that mean? Well, it means that in some funky quantum mechanical sense, the electron is here and there at the same time. And that’s pretty tough for our feeble macroscopic minds to grasp.

**Jim**: Indeed. And now let’s make the turn towards quantum computing, because we’ve set the foundations: this funky concept that the electron, or the photon, or whatever it is, can be many things at once. How do we use that to produce quantum computing? And how is it fundamentally different from classical computing? Two big questions, so feel free to take a while to answer those two.

**Seth**: Oh, well, Jim, since you set it up pretty well, I think we are in good shape to look at that, because really, the basic difference between quantum computing and classical computing is what we were just talking about. That is, quantum computers are just ordinary computers. You’ve got bits of information, you encode information in bits, you flip the bits in a systematic fashion, and then you perform a computation. You process the information. That’s what classical digital computers do. That’s what quantum computers do. But because quantum computers have access to this quantum funkiness, what they have is something more than what classical computers have. In particular, let’s think of this electron. Actually, in a classical computer, the way you store a bit of information is you’ve got a whole bunch of electrons. You put the electrons onto a capacitor, you put them over here, and you call that zero.

**Seth**: You take them, put them over there, you put them on another capacitor, and you call that one. Okay? And when you flip the bit, you say, “Oh, let’s move the electrons. Let’s open the switch, and let the electrons flow from this capacitor to this other capacitor.” The capacitor is just like a bucket for electrons. You can toss electrons in the bucket, and then flipping a bit in a computer is like, “Oh, let’s take the electrons from this bucket, and we’ll empty it into that other bucket.” So we take them from the zero bucket, we pop them in the one bucket. Okay, fine. And then, the way that classical computers work is simply doing this in a systematic fashion: “Let’s empty this bucket if this other bucket is full, and let’s fill up this bucket if that other bucket is empty, and let’s do this many, many, many, many, many, many times.” And that’s what digital computers are doing.

**Seth**: Okay. What about a quantum computer? Well, a quantum computer is like a classical computer, but let’s say we’re storing a bit of information on a single electron. So an electron over here, with its wave concentrated over here, okay, we’ll call that zero. An electron over there, with its wave concentrated over there, we’ll call that one. But then an electron with its wave over here and over there, that’s an electron that’s here and there at the same time. That’s a bit, a quantum bit, or a qubit, which is both zero and one at the same time. Again, in some funky, counterintuitive quantum sense, which nobody really understands. And that’s, at bottom, the only difference between quantum computers and classical computers: quantum bits can be zero and one at the same time by taking advantage of this quantum funkiness.
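As a rough sketch, a qubit can be modeled as a pair of amplitudes, one for zero and one for one. The Hadamard gate used below is a standard one-qubit operation, not something mentioned in the conversation; it turns a definite zero into the zero-and-one-at-once state Seth describes.

```python
# A classical bit is one of two values; a qubit is two amplitudes.
s = 2 ** -0.5   # 1/sqrt(2)

def hadamard(q):
    """Standard one-qubit gate that creates an equal superposition."""
    a0, a1 = q
    return (s * (a0 + a1), s * (a0 - a1))

qubit = (1.0, 0.0)        # definitely zero, like a classical bit
qubit = hadamard(qubit)   # now amplitude s for zero AND s for one

# Born rule: each outcome now appears with probability 1/2
print([round(abs(a) ** 2, 10) for a in qubit])   # [0.5, 0.5]
```

The pair of numbers is the whole story: a classical bit stores one of two values, while the qubit carries weight on both at once.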

**Jim**: And it turns out to make a huge difference in the capabilities of computation, right? I mean, the fact that we can, in some sense, explore all possible solutions more or less simultaneously, at least for some algorithms, gives an unbelievable boost in their capability. For instance, the first time I ever heard of quantum computing was Shor’s algorithm. And believe it or not, there was probably 15 minutes where I actually understood Shor’s algorithm, and worked through it enough to really understand how it worked. And I went, “Holy shit.” Could you maybe run through a little bit how a relatively small quantum computer could do what seems impossible, which is to factor huge numbers? How does entanglement allow that to happen?

**Seth**: Yeah, absolutely. We were just talking about how, in a quantum computer, a quantum bit can be zero and one at the same time. And then two quantum bits don’t have to be either zero one, or one zero. They can be zero one and one zero at the same time. They can be in a quantum superposition of the two states, zero one and one zero. And when they’re in this state, they’re in an entangled state, because if I measure the first one and I find the answer is zero, then I know the second one is in the state one. If I find the first one in the state one, I know the second one is in the state zero. So this principle of superposition, that a quantum bit, or a qubit, can be zero and one at the same time, when you apply it to two or more qubits, gives you entangled states. And this is key for using quantum computers to solve hard problems.

**Seth**: A good way to think of this is the following. Remember that the state of a quantum system corresponds to a wave, a solution to the Schrödinger equation. In a classical computation, all the bits at any given time have a well-defined value. Each one is either zero or one, but not zero and one at the same time. And so, all the bits in the classical computer, at one point, might be, say, zero, one, one, one, zero, zero, one, one, zero for a small classical computer. But in the quantum computer, the bits can be zero, one, one, one, zero, zero, one, one, zero, and one, zero, one, zero, one, zero, one, zero, one at the same time. Again, it’s hard to understand how that works in our intuitions, but it’s just a superposition of two waves.
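The two bit-strings Seth reads off can be held in one quantum state, which assigns an amplitude to every possible string of that length. A sparse sketch (the particular strings are illustrative):

```python
# A register of n classical bits holds exactly one of its 2**n possible
# strings. A quantum state assigns an amplitude to each string, and
# here two strings carry amplitude at the same time.
s = 2 ** -0.5
state = {"011100110": s, "101010101": s}   # both strings at once

n = 9
print(2 ** n)   # 512 possible 9-bit strings; all but two have amplitude 0

# Born rule: each of the two strings is seen half the time
for string, a in state.items():
    print(string, round(abs(a) ** 2, 10))
```

A dense representation would store all 512 amplitudes; the sparse dictionary makes plain that the superposition here involves just two of them.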

**Seth**: So a nice way of thinking about this, and this is a metaphor, but it’s a very precise metaphor, because it actually has to do with waves, is that a classical computation, which is a sequence of states of bits, is something like Gregorian plainchant (singing), where there’s only one sound. There’s no harmony. It’s just a melody, one sound at each point. Whereas a quantum computation is the superposition of many sounds simultaneously. It’s a symphony. Everybody’s playing at once, and if they play at once in a harmonious fashion, you can make a sound that you could never obtain, or even know what it sounded like, if you just had a single chant, with a single note. So as I say, this is a metaphor, but it’s a good metaphor to keep in mind. So what happens for solving hard problems like Shor’s algorithm? In 1994, my colleague Peter Shor showed that if you had a quantum computer, you could use this symphonic nature of quantum computation to solve a problem of considerable practical interest, which is code breaking.

**Seth**: And he showed you could use quantum computers to break the commonly used public-key cryptosystems that we use whenever we buy something over the internet. When we buy something over the internet, I say to Amazon, “I’d like to buy these teabags, please.” And they say, “Okay, send us your credit card number.” And I say, “Well, I’m not going to send my credit card number openly over the internet.” They say, “That’s fine. We’re going to send you a public key, and you’re going to use that to send us your credit card number. Once you’ve combined your credit card number with the public key, nobody will know what your credit card number is, but we’ll be able to decode it using our private key when you send it to us.” And I say, “Okay, that’s fine. I really want my tea, and this sounds good.”

**Seth**: So you do that. And this is very useful, as you can imagine. This is what happens with all of these transactions over the internet: whenever you want to send private information, the retailer sends you a public key, which you use to encrypt your information in a way that nobody else can discover it. So Shor showed you could break these systems by being able to do something called factoring: given a number which is the product of two large numbers, find the two large numbers which, when multiplied together, make the first number. So you’re given R, which is the product of P and Q, and given R, you want to find P and Q. That’s the factoring problem. It’s like 15. Given 15, it’s like, “Oh, 15 is three times five.” But when the numbers get very, very big, this is a very hard problem. And Shor showed how to do this.
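The factoring problem can be stated in a few lines of code. Trial division, sketched below, finds P and Q for a small R like 15, but its running time explodes for the hundreds-of-digits numbers used in real public-key systems, which is exactly why an efficient quantum method matters.

```python
# Given R = P * Q, recover P and Q by trial division. Fine for R = 15,
# hopeless for the huge R used in real cryptography: the search grows
# exponentially in the number of digits of R.

def factor(R):
    for p in range(2, int(R ** 0.5) + 1):
        if R % p == 0:
            return p, R // p
    return None   # no factor found: R is prime

print(factor(15))   # (3, 5)
```

Multiplying P and Q is easy; undoing the multiplication classically, at scale, is not, and that asymmetry is what RSA-style cryptography rests on.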

**Seth**: The way that he showed how to do this is related intimately to the wave-like nature of how information is represented in quantum mechanics. So if you have a wave, like a wave in the ocean, the wave can have different wavelengths and different periods. When the waves are small, the wavelength is short, and the waves wiggle up and down very rapidly. When the waves are big, the wavelength is longer, and the waves wiggle up and down more slowly. So Shor showed how to encode the answer to this hard problem of factoring and code breaking into a huge, long wave. If you were to actually stretch it out in space, it would be larger than the length of the universe. But instead of stretching it out, you can curl it up into the quantum bits of a quantum computer.

**Seth**: And you want to find the period of this wave. You want to find how fast it’s wiggling up and down. Classically, this is very hard to do, but quantum mechanically, because quantum mechanics is all about waves, it turns out to be quite possible in a straightforward fashion. If you’re given a huge, long wave and you want to find out how fast it’s wiggling up and down, you can do that very efficiently and straightforwardly on a quantum computer, because quantum mechanics is about waves to begin with. So Shor took this wave-like nature of quantum mechanics, showed how you could encode the solution to a hard problem in the wavelength of a long periodic wave, and then showed how you could use a quantum computer to find that wavelength, and thereby factor the numbers and break the code. It’s a very beautiful piece of work, and it shows intimately how quantum mechanics actually works, because it takes this very important feature of quantum mechanics, this wave feature that ordinary bits don’t have, but quantum bits, qubits, do have, and then shows how to elegantly solve a hard problem.
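The reduction behind this can be illustrated classically. In Shor’s algorithm, the "wave" whose period is sought is the repeating sequence a^x mod N; the quantum computer finds that period efficiently, and the period then yields the factors. Below the period is found by brute force, which only works for a toy number like N = 15 (the choice a = 2 is my example, not the episode’s):

```python
from math import gcd

# Factoring N comes down to finding the period r of f(x) = a**x % N:
# the smallest r > 0 with a**r % N == 1. A quantum computer finds r
# efficiently; brute force, as here, only works for toy numbers.

def period(a, N):
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 2                     # a must share no factor with N
r = period(a, N)                 # the sequence 2, 4, 8, 1 repeats: r = 4
p = gcd(a ** (r // 2) - 1, N)    # gcd(3, 15) = 3
q = gcd(a ** (r // 2) + 1, N)    # gcd(5, 15) = 5
print(r, p, q)                   # 4 3 5
```

Once the period r is in hand, two greatest-common-divisor computations, which are classically cheap, deliver the factors; all the quantum magic is concentrated in the period-finding step.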

**Jim**: Yeah, that’s interesting. As I said, I understood it for about 15 minutes, and here’s an interesting personal story. Shor’s algorithm was actually central to a key decision I made in my life. How about that? Who would have thought it, right? In 2000, I had an opportunity to sell my Internet company to Verisign, the company that ran the digital certificate business, which was based on the RSA crypto algorithm that you just described. At that time, essentially all the digital certificates that did what you described, which was secure the transmission over the web of your credit card to Amazon, were sold by Verisign, since they had a quasi-monopoly. There was one small competitor, but they eventually bought them out. Anyway, they wanted to buy my company. So one of the things I had to do as part of the new [inaudible 00:29:16], since we were publicly traded companies, we had to be quite public about all our due diligence, was estimate: what are the chances, in some reasonable period of time, that Shor’s algorithm will get implemented in a quantum computer and break RSA, right? We literally had to do that.

**Jim**: And, again, this was 2000, so nobody had any fucking clue really about how fast quantum computing would progress, but we talked to some people, and, I will say, in classic American short-term business thinking, we concluded that there was essentially no chance it would happen in the next five years, so we went ahead and did the deal. And then, interestingly, after we did the deal, I ended up running the digital certificate business. How about that? So I literally had to worry about it a little bit [inaudible 00:30:03] myself. I didn’t have to worry about it anytime soon, which is a good pivot, actually. Shor’s algorithm and many other algorithms are, in theory, exceedingly powerful. However, to get that power, there has to be a certain size of entanglement, or at least an effective size of entanglement, to solve a problem of size X.

**Jim**: At the time, I think our keys were 512 bits long, and quantum computers were at like three qubits or something. Okay. That’s a long damn way from being able to break a 512-bit key. I just looked it up this morning. The minimum specified key length for digital certificates now is 2048 bits, and so essentially, this RSA encryption algorithm is in a race, adding more bits to stay ahead of the increasing power of quantum computing. So talk a little bit about how the power of quantum computing is related to the number of qubits that are simultaneously entangled. We’ll start with that, and then we’ll go on to the other pieces that delineate the effective power of quantum computing.

**Seth**: Absolutely true. Yeah, so there’s a nice way of thinking about this. I mean, this is the power of information in general. The amazing power of information is that a small number of bits can have a very large number of possible states. So if I have one bit, it can be either zero or one. You don’t have to call it zero and one; it could be yes or no, or true or false, or heads or tails. It’s just a question of having two states. And then two bits: well, there’s zero, zero; zero, one; one, zero; one, one, so that’s four states. And three bits you’ve got zero, zero, zero; zero, zero, one; zero, one, zero… oh, I’m getting tired of it. They’ve got eight states, right? Four bits have 16 states, five bits 32 states, 10 bits 1,024 states, 20 bits more than a million states, 30 bits more than a billion states, 40 bits more than a trillion states.

**Seth**: So the number of states grows very rapidly with the number of bits. It grows exponentially: if I have n bits, the number of states goes as two to the n, and that number gets very big, very fast. So for instance, 300 bits… well, two to the 300 is around 10 to the 90, and 10 to the 90 is the number of elementary particles in the universe. So with 300 bits, you have enough bits to label every elementary particle in the universe, if you could figure out where to put the label. So this is the power of classical information, which is tremendously powerful, because it shows you have the potential to explore a huge information space, even with a relatively small number of bits. And the power of quantum computation means, “Oh, if I have quantum bits, and they can be in superposition and entangled with each other, then I can explore that same huge space simultaneously.”
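
That counting is easy to check in a few lines of Python (a classical illustration, not quantum code):

```python
import math

# The number of distinct states of n bits is 2 to the n.
for n, states in [(1, 2), (2, 4), (3, 8), (4, 16), (5, 32), (10, 1024)]:
    assert 2**n == states

assert 2**20 > 10**6   # 20 bits: more than a million states
assert 2**30 > 10**9   # 30 bits: more than a billion
assert 2**40 > 10**12  # 40 bits: more than a trillion

# 300 bits: two to the 300 is around 10 to the 90, roughly the
# number of elementary particles in the universe.
print(math.log10(2**300))  # ≈ 90.3
```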

**Seth**: So with 300 qubits, I can explore a space where, if I were going to do this classically, my classical computer would have to be the size of the whole universe. So the size grows very rapidly. Of course, being able to control those bits and build a quantum computer, that’s harder. So back in 1993, before Shor’s algorithm, I made the first technologically feasible design for a quantum computer. Prior to that, the idea of quantum computers was first proposed by Paul Benioff around 1980, picked up by the famous Nobel laureate Richard Feynman in the early eighties, and then David Deutsch codified the theory of quantum computation in the mid eighties. But all that time, nobody had any idea how you could build a quantum computer. There’s a quote from Feynman at a conference where he says, “I’m going to talk to you about this notion of quantum computation. Now, you may think I’m crazy, and nobody has any idea how to build these things, but I’m going to talk about it anyway.”

**Seth**: So back in the late 1980s, early 1990s, I said, “I wonder how you might build one of these things,” and I started working on the physics of building quantum computers. In the early 1990s, I learned about what’s called electromagnetic resonance, which is what happens when you zap atoms with lasers or zap spins with microwaves. And I realized that if you zap atoms with lasers in the right way, you could build a quantum computer. So I published a paper in Science called “A Potentially Realizable Quantum Computer.” That was 1993. And in 1994, I started working with folks to make the first quantum logic gates and the first quantum computers, and it worked very well, except the quantum computers only had a couple of quantum bits.

**Seth**: We started with a two-qubit quantum computer. It turns out you can actually do a proof-of-principle demonstration of quantum computation with only two qubits, so we did the first algorithms with those. Then we went to three qubits. Now you have eight states. That’s great. Then we went to four qubits and five qubits. Eventually, by the early 2000s, we were up to a dozen or two qubits. That shows you the progress is slow, because you’re trying to convince atoms, molecules, superconducting systems, to process information at the most fundamental possible level. It’s simply hard to do, and so the progress is slow. But then about five or six years ago, Hartmut Neven at Google convinced Sergey Brin: “Hey, if we were to invest a hundred million dollars, or a couple of hundred million dollars, in actually building a quantum computer, we could do it. The technology is there.”

**Seth**: And they hired John Martinis from the University of California, Santa Barbara, and they went wild, and they immediately started building quantum computers that had 20 qubits, 30 qubits, 40 qubits. And then IBM and Microsoft, who had been working on quantum computing for decades, at a smaller level, said, “Hey, Google is eating our lunch. We’ve got to do this too.” So they invested a hundred million dollars in it, and they started building much larger quantum computers, and then places like Intel and Huawei said, “We don’t even know what a quantum computer is, but we’d better invest in this too.” Anyway, so for the last five years, things have been going crazy in the field, but the net result is that now people have quantum computers on the order of 50 to a hundred quantum bits. They’re still not big enough to factor large numbers and break public key crypto systems, thereby instilling fear in the heart of agencies with three letters in their name like the NFL, not that such agencies have hearts. So they still are not there in terms of breaking the public key crypto systems, but it’s starting to look like it’s within shouting distance of doing so.

**Jim**: Why has it been hard? Why do you have to spend hundreds of millions of dollars to get a hundred bits that talk to each other? Jesus, that doesn’t sound too hard, but obviously it is. So what makes that a hard problem?

**Seth**: It’s useful to remember what happened when people first started building conventional electronic digital computers. They first started with vacuum tubes, and then with transistors. To put a computer together, you had to wire together a huge number of vacuum tubes or a huge number of transistors, and each vacuum tube or each transistor corresponded to a single switch, or a bit. And the vacuum tubes would break and blow out, and then some bit in the computer has blown out. Where is it? How do we find it? So to build a conventional, classical computer with a few hundred or a few thousand bits also took an apparatus the size of a room, a huge, expensive effort on the part of the government and [inaudible 00:37:59] bill. So it’s hard because computation is hard: you’re trying to build a system with many different parts that functions reliably, where you can control each of the parts. That just isn’t easy.

**Seth**: It was already hard to do back in the 1950s and early sixties with vacuum tubes and transistors, and now we’re trying to do this at the level of individual atoms. The change in conventional classical computing came with the invention of integrated circuits in the late sixties, early seventies, where it became clear, “Oh, look, we can actually have a systematic way of putting lots and lots of transistors on a single semiconductor chip, and then we can figure out ways to scale it up and make more and more while maintaining things in a reliable, functioning state.” With quantum computers, the place we’re at right now is more like where classical computers were back in the vacuum tube era of the 1950s and early 1960s, where we’re just trying to figure out the basics: how do we put together the different qubits, how do we make them talk with each other accurately, and how do we come up with schemes for scaling up?

**Seth**: And so it’s just simply hard to do. It was hard to do with classical computers as well, and it’s a lot harder when the things you’re putting together are individual atoms. You have to address each atom individually. You have to make them talk with each other, pair by pair. You have to zap them with lasers to tickle them and convince them to process information the right way. It’s not an easy thing to do, and it’s taken a long time. Moreover, even though huge progress has been made in the last five years, it’s not even clear we’re going to get there, because it’s just like when we had vacuum tube computers: “Ah, this is never going to work. Once we have a computer with 10,000 or 100,000 vacuum tubes, they’re always going to be blowing out. We can never make this happen.” So we’re still at the stage where we’re aspiring to make them bigger.

**Seth**: If we can make them bigger, that’d be great. I mean, not just for… It’s fine to factor large numbers, break public crypto systems, and thereby, screw up the whole business of the world, which is to buy extra crap on the Internet. How disruptive would that be? But there’s lots of other great applications for quantum computers as well, and those would be wonderful if we could achieve them, and it looks right now that they might actually be within our grasp.

**Jim**: Interesting. In addition to the number of qubits, which is what we usually hear about in the popular press, I have a concept, quantum computational density, which I don’t know if that’s a term you all actually use, but I use it myself. It includes the number of qubits, plus how long you can keep the entangled qubits from decohering, plus the error rates, and the three together provide a combination that talks about how much actual [inaudible 00:40:58] can get done. How do those other two parts, the decoherence times and the error rates, factor into the equation of how useful quantum computation can actually be?

**Seth**: You’re right on target there, Jim. Those are very important. You can’t really think about how to build a quantum computer without looking at this three-dimensional space rather than a one-dimensional space. So: how many qubits do you have? How long can you keep them coherent? In the quantum mechanical case, that means: how long can you keep the coherence of the different waves making up the different states of the qubits? Because a qubit can be zero and one at the same time, in this funky quantum mechanical way, only while it’s coherent. When it loses coherence, it just lapses into something much more classical, which is being either zero or one, which we can get just classically. So there’s a coherence time, which is how long you can maintain this superposed wave nature of entangled qubits.

**Seth**: And then finally, there’s the error rate, which you can think of as: when you’re flipping qubits from one state to another, how accurately can you do that? So right now the numbers are: we have 50 to 100 qubits, and the coherence times for superconducting qubits are on the order of one ten-thousandth of a second, a hundred microseconds. During that time, you can put in on the order of 10,000 individual two-qubit operations or bit flips, and the precision with which they can be done is currently around 99.7%. That’s the world record right now for the accuracy of an individual qubit operation, so an error of three parts in a thousand. It sounds pretty accurate, but it turns out that if you actually wanted to do something like factor a large number, you’re going to need something like a hundred thousand qubits.

**Seth**: You’re going to need to be able to do millions of operations, and to do this, you’re going to need accuracies of 99.99%, an error of one part in 10,000, rather than 99.7%. Sorry, that’s the accuracy, not the error rate; the error rate is one minus the accuracy. So we’re still a ways away, still a few orders of magnitude away in terms of having the right number of quantum bits and the right accuracy. The coherence times are now very good: the qubits will stay coherent for a long time, and those are actually quite adequate at the moment. We don’t really need longer coherence times, though longer would always be nice. But we need more bits and better accuracy, so a lower error rate. And the great thing about that is that we have a goal, and we can attain it.
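
To see why that last factor of 10 in accuracy matters, here is a back-of-envelope sketch in Python; the one million operation count is an illustrative assumption based on the “millions of operations” figure above:

```python
# Back-of-envelope error budget for the figures quoted above.
accuracy_now = 0.997        # current record for an individual operation
accuracy_needed = 0.9999    # roughly what factoring would demand

error_now = 1 - accuracy_now          # error rate = 1 - accuracy
error_needed = 1 - accuracy_needed

ops = 1_000_000             # illustrative: "millions of operations"
print(round(error_now * ops))     # about 3,000 faulty gates per run today
print(round(error_needed * ops))  # about 100 at the target accuracy
```

The point of the arithmetic: at 99.7% accuracy, a million-gate computation accumulates thousands of errors, while the extra order of magnitude brings it down to a level that error correction can plausibly handle.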

**Seth**: It’s great. And moreover, it’s in sight. We basically need a factor of 10 in terms of the precision of the logic operations. Well, that actually is quite doable, given how far we’ve already come: we started out 50 times less accurate than we are right now. If we can get another factor of 10, we’re good. Number of qubits, okay, we need to go up by a factor of a hundred, but we’re already up by a factor of a hundred from the early days of quantum computing, so maybe we’ll get there. It’s very hard to predict these things. Technological prediction is a mug’s game.

**Jim**: In some areas, some areas it’s not. I mean, obviously Moore’s law has been not so bad for quite a while, and I suppose the rate must have an exponential associated with it with respect to improving computational density in the quantum world as well. Of course, the question always is, is there a wall? We know there’s a wall on Moore’s Law. One wonders whether there is a wall or not in the quantum world.

**Seth**: Yeah, so it’s interesting you should mention Moore’s Law, which is, as you know, but I’ll just say it again, an empirical observation about technology made by Gordon Moore, the head of Intel, back in the seventies: “Oh look, the number of bits in our computers has been growing by a factor of two every two years.” And at that time, it was also the case that the speed of the computers was growing by a factor of two; the clock rate was doubling too, and so the overall computational power of classical computers was growing by a factor of two or so every couple of years. It’s important to remember that Moore’s Law is not a law of nature. It’s an empirical law about technological progress, and it’s actually continued for an amazing period of time.

**Seth**: I mean, it really started back when people first started building computers in the mid to late 1940s and has gone on ever since, for 70 years, which is why our computers are extremely powerful by now. But if you look at Moore’s Law’s different features, not all of its individual pieces hold up. Some of them are exponential, like the density of bits on transistorized wafers, which has been going up by a factor of two every two years. But the clock speed of the computers stopped doubling right after 2000. And the reason was very simple. It was doubling every couple of years, you were getting faster and faster, and then, starting around 2003 or something like that, the clock speed got up to a few billion cycles a second, a few gigahertz, and if you pushed it further than that…

**Seth**: The chips started melting, and that’s something that’s hard to get around. So it’s stayed at a few gigahertz, a few billion times a second, ever since. So not all aspects of computation continue to proceed by a factor of two every two years. And in quantum computing, it actually hasn’t happened that way, in the sense that the number of quantum bits we have has not doubled every two years. It’s been a slower pace, which I think comes from the fact that we’re really starting at the very bottom scale and trying to put things together. Moore’s Law was really an observation that once you’ve got a good platform, the silicon wafer and the etched chip, you proceed by making the materials more pure, making the etching more precise, packing more transistors on the chip through a process of miniaturization. In the case of quantum computers, we started off at the most miniature scale already, and we’re trying to put together more and more qubits. It’s a different process, and it hasn’t ended up being Moore’s Law. Though, as I said, it’s sped up a lot quite recently, which shows that if you give money to smart people who know what they’re doing and invest in a good program for building better quantum computers, by God, you’re going to get better quantum computers.
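
For a sense of the compounding involved, the doubling Seth describes, sustained over roughly 70 years of classical computing, works out like this (a quick arithmetic sketch):

```python
# Doubling every two years, compounded over the history of computing.
years = 70
doublings = years // 2       # one doubling per two-year period
growth = 2 ** doublings
print(doublings)             # 35 doublings
print(f"{growth:.2e}")       # a factor of about 3.44e+10
```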

**Jim**: Yeah, that’s cool. We’ll come back to the different architectures that people are working on. But before we do that, let’s talk a little bit about some of the applications and algorithms and higher level approaches of quantum computing. First, what we’ve been talking about, and I believe it’s a field that you were active in: one can divide quantum computing roughly into digital quantum computing, what they call the quantum circuit guys, and I’d put you in that bucket, but there are also some analog approaches to quantum computing, like quantum annealing, or adiabatic quantum computing. Could you talk a little bit about the distinction between those two?

**Seth**: Yeah, and do you mind if I talk a little bit about quantum analog computing in the original sense of the phrase “analog computing,” where you build a physical system, like an electronic system, and you program it so that its dynamics is an analog of the dynamics of some system you’d like to simulate? May I talk about that as well?

**Jim**: Sure.

**Seth**: Good. I think I’ll talk about that first because I think that’s important. It was really the first suggested application of quantum computers. So as I mentioned, Paul Benioff first proposed quantum computers back in 1980, and in 1982, Richard Feynman said, “A good application of these quantum devices would be to simulate other quantum systems.” And he said, “Wouldn’t it be nice if we had a box with knobs on it, and the thing inside the box was quantum mechanical, and when we turned the knobs, we were changing the dynamics of the quantum thing inside the box, and if we turned the knobs to the right setting, we would program this quantum dynamics in the box to be an analog of the dynamics of something we want to simulate.” For example, we might want to simulate strong interactions in elementary particle physics, or we might want to simulate quantum gravity and what happens inside a black hole, or we might want to see what happens to electrons as they hop around inside a semiconductor. All of these are very quantum mechanical processes. They’re very hard to simulate on a classical computer, because the classical computer has to follow all these different waves that are wiggling around in superposition, and there are a lot of these waves. So it’s hard to simulate these quantum systems on a classical computer.

**Seth**: So Feynman said, “Hey, wouldn’t it be great to have a quantum simulator that could actually simulate these other systems?” He was basically proposing the quantum version of an analog computer. So how do we make a quantum system that we can program to be the analog of other quantum systems? This is a very widespread application, certainly for someone like me who has a PhD in physics and doesn’t understand how quantum systems are behaving. It would be awesome to have a quantum analog computer, a quantum simulator, that you could use to say, “Now, I wonder why this superconducting system is not behaving the way it should. Let’s try another design. Let’s simulate it. Let’s see what happens if we do it this way.” That would be awesome. So that’s a great application, and it is, in fact, the most practical application for quantum computers right now. Pretty much as soon as quantum computers began to be built… remember, I proposed the first design in ’93.

**Seth**: Ignacio Cirac and Peter Zoller proposed how to build quantum computers using ion traps in ’94. I started working with Hans Mooij at Delft on superconducting quantum computers in ’96 to ’97. Then in 1999, there were Yasunobu Nakamura and Jaw-Shen Tsai at [inaudible 00:51:37]. So these technologies have been developing, and one of the very first applications is this quantum simulation. Back in ’94, I looked at this original Feynman paper, and I said, “That’s a great idea. Now that we have quantum computers that are being built, how do we do this?” So I started writing algorithms for quantum simulation on quantum computers, and pretty soon I convinced David Cory at MIT to start implementing them. And now this is one of the primary applications that people use quantum computers for.

**Seth**: One of the most exciting ones is using it to figure out chemical structure, and for drug discovery. These are wonderful applications for even quite small quantum computers. That’s taking the quantum nature of a system that’s processing information and, as you were saying before, just writing the rules for how this quantum system behaves so that it behaves like another system that you want it to behave like, and this is a great application for quantum computation. And then you mentioned several other applications for quantum computers that are not digital quantum computation, where the paradigm, the most famous example of digital quantum computation, is Shor’s algorithm and factoring, as we were talking about before. So I call this quantum systems being quantum. Being quantum is easy for a quantum system, but it’s very hard for a classical system, like you or me or classical computers, to figure out what it means for a quantum system to be quantum and how it behaves this way.

**Seth**: So there’s what’s called adiabatic quantum computing, where you have a quantum system that always remains in its lowest energy state, its so-called ground state, but you slowly change the parameters of the system, so it oozes over into a new ground state, a new lowest energy state. That’s what adiabatic means: moving so slowly that you always remain in the ground state. At the end of this slow process of changing, you’ve changed the parameters of your system so that the final state of the system encodes the answer to some hard problem that you couldn’t find any other way. That’s a very cool use of quantum computers, and there’s a company, D-Wave, that’s been making these adiabatic quantum computers for 15 years now, and that’s a great application of quantum computation.

**Seth**: And then there’s a whole host of other applications, including the use of things like light and continuous variables, things that wiggle up and down in continuous ways, like the states of light and the interactions of light with atoms and matter, to make systems that can compute and solve problems you couldn’t solve otherwise. In fact, one of the reasons building large scale quantum computers is not so easy is that quantum systems give you a huge set of possible opportunities and ways to construct novel quantum devices and have them process information in novel ways. And the fact that they give you great opportunities makes it hard to take advantage of any one of them. So I guess, as we would say it: with great quantum opportunity comes great quantum responsibility.

**Jim**: Yeah, and we’ll talk about that in just a few minutes, the different technologies, many of them. You mentioned quantum simulation; chemistry [inaudible 00:55:10] trying to put together an effort to use quantum simulation to find a better way to create ammonia. Ammonia, what the hell? Ammonia is the main basis for fertilizer, and it’s a huge $50 billion a year industry, and if you can make it even 10% more efficient, it’d be a gigantic win. So those are some of the things that I’m hearing about out in the hinterlands. One more algorithm I’d like to ask about is the HHL algorithm. And in fact, what does the L stand for but Lloyd. You were involved with that, right? Could you tell us a little bit about quantum machine learning?

**Seth**: Sure. I mean, HHL, let me mention Aram Harrow and Avinatan Hassidim, who were the two H’s there. I call it the “holy hell” algorithm because that’s easier to pronounce. So Shor proposed this algorithm for factoring back in 1994, and it was already clear he…

**Seth**: …was using this feature that you could use a quantum computer to extract the periodicity of a wave, how fast it’s wiggling up and down. This is what’s called the quantum Fourier transform; the Fourier transform is known as a classical technique for extracting periodicities. And then Alexei Kitaev came up with a variant on Shor’s algorithm around 1996, and that was kind of all for quite a while.
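
The period extraction Seth describes can be seen with the classical Fourier transform; here is a small, self-contained sketch using a naive discrete Fourier transform (the signal and sizes are made up for illustration; the quantum version does this on exponentially longer vectors):

```python
import cmath
import math

# A signal that repeats every 16 samples: 64 points, period 16.
n, period = 64, 16
signal = [math.sin(2 * math.pi * k / period) for k in range(n)]

# Naive O(n^2) discrete Fourier transform, no libraries: the
# amplitude concentrates at the frequency index n / period.
def dft_magnitudes(x):
    size = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * math.pi * f * k / size)
                    for k in range(size)))
            for f in range(size)]

spectrum = dft_magnitudes(signal)
peak = max(range(n // 2), key=spectrum.__getitem__)  # positive frequencies
print(peak, n / peak)  # frequency index 4 -> period 16.0
```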

**Seth**: In fact, the progress in building quantum computers, slow though it is, has been much more steady than the progress in coming up with algorithms and applications. And remember that quantum computing works by taking these waves and superimposing them on each other, adding up these waves. Around the mid-2000s, when again not many quantum algorithms had come up, I had the idea of looking at a much broader class of procedures than what people had been looking at so far.

**Seth**: So the idea is, if you look at the mathematics of how you take these waves and add them up, the waves can be thought of as what are called vectors. And a vector is just a big, long list of numbers. For a wave, you can say, “Oh, let’s take the wave and discretize it in space,” and each number tells you the amplitude of the wave at a particular place.

**Seth**: The first number is, “Oh, the amplitude is zero.” Then the next one: “Oh look, the wave is one.” The next one: “Oh look, it’s two.” The next one is two, and the next one is two again: “Oh look, it’s leveling off.” The next one is one: “Oh look, it’s going back down.” The next one is zero: “Oh look, it’s back down at zero.”

**Seth**: So you see, we can encode the wave as just a list of numbers, and the advantage of quantum computation here is that the list of numbers can be extremely long. Remember, with 300 qubits, 300 quantum bits, the list of numbers is two to the 300 entries long. The list is as long as the total number of elementary particles in the universe. So the advantage of quantum computers is that these vectors, these digitized versions of waves, can be huge.

**Seth**: And then the study of how these vectors operate, these long lists of numbers, is what’s called linear algebra. The way you can add vectors together, that’s just quantum superposition in the case of quantum systems. And another thing you can do is take the vectors and multiply them by what’s called a matrix, which is just an even bigger list of numbers, now in the form of, for instance, a square rather than just a long list.

**Seth**: So linear algebra is basically the study of how you take these lists of numbers and how you manipulate them by multiplying them and adding them with other lists of numbers. And a lot of what classical computers do is just this linear algebra.
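
That picture, a wave as a list of numbers, superposition as addition, evolution as a matrix multiply, can be sketched in a few lines of plain Python (a classical illustration; the shift matrix is a made-up example, and a real quantum evolution would be a unitary matrix):

```python
# A discretized "wave": the amplitudes from the description above.
wave = [0, 1, 2, 2, 2, 1, 0]

# Superposition is just vector addition, entry by entry.
other = [0, 1, 0, -1, 0, 1, 0]
superposed = [a + b for a, b in zip(wave, other)]
print(superposed)  # [0, 2, 2, 1, 2, 2, 0]

# Evolution from one state to another is multiplication by a matrix.
# Here, a cyclic shift matrix: row i picks out entry (i - 1) mod n,
# so the whole wave moves one step to the right.
n = len(wave)
shift = [[1 if j == (i - 1) % n else 0 for j in range(n)] for i in range(n)]
shifted = [sum(shift[i][j] * wave[j] for j in range(n)) for i in range(n)]
print(shifted)  # [0, 0, 1, 2, 2, 2, 1]
```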

**Seth**: For instance, when I teach my undergraduate mechanical engineers about numerical methods, the things you need to know for doing calculus, or building bridges, or programming computers to do design and understand the material properties of structures, a huge amount of what they’re doing is just taking these classical computers, programming in big lists of numbers, and then multiplying and adding and dividing them, just doing these linear algebraic techniques.

**Seth**: So linear algebra is a big part of pure math and a very large part of practical and applied math. And I had this idea: “Hey, these quantum computers, why do quantum computers do a good job?” It’s because quantum systems are basically doing linear algebra. The state of a quantum system is a huge, long list of numbers. We don’t have direct access to the numbers in this list, but in the state, they’re there.

**Seth**: And when the state gets transformed, when a quantum system evolves from one state to another, these numbers change, and the way they change is by multiplying them by a big matrix, another big square array of numbers. So the reason why quantum computers are better than classical computers for some problems is that quantum systems are hard-wired to do linear algebra. So I had the idea of saying, “Well, let’s look at some other linear algebra techniques. Let’s see what we can do.”

**Seth**: And one very common problem in linear algebra is that you have a set of equations in unknown variables, and you want to know what the answer is. I mean, this is algebra: two X equals six. What is X? Well, okay, in this case X is equal to three. And here’s another one in more variables: two X plus Y equals seven, X minus Y equals two. Well, I just made that up in my head because I knew the answer, because I put it in: X is equal to three and Y is equal to what?

**Jim**: It’s our old simultaneous equations from algebra one in eighth grade.

**Seth**: Exactly, exactly. And we have systematic ways of solving these equations. This systematic way is often called Gaussian elimination; it’s named after Carl Friedrich Gauss. The funny thing about the name is that it was actually invented in the West by Isaac Newton, but somehow Gauss made some improvements, so they named it after Gauss. And it was invented in China 1,500 years ago or longer, so it’s been around for a long time.

**Seth**: And the systematic way of doing this, everybody who went through eighth grade and did this knows: “Okay, first we add these equations together or subtract them from each other, after multiplying them by constants, to eliminate certain variables, until we end up with the answer. Oh look, it turns out Y equals one. And then we plug Y equals one back into the rest of the equations, and we find, oh look, X is equal to three.”
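
The elimination just described, applied to the little system above, is only a couple of lines (a plain classical sketch of the same arithmetic):

```python
# The system above: 2x + y = 7, x - y = 2.
# Step 1: add the two equations to eliminate y:
#   (2x + y) + (x - y) = 7 + 2  ->  3x = 9  ->  x = 3
x = (7 + 2) / (2 + 1)
# Step 2: back-substitute into 2x + y = 7:
y = 7 - 2 * x
print(x, y)  # 3.0 1.0
assert 2 * x + y == 7 and x - y == 2  # both equations check out
```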

**Seth**: And so that’s called Gaussian elimination, and it takes some time. If you have more variables, it takes a longer time; actually, the time it takes grows rather rapidly with the number of variables. So I started looking at this problem like, hmm, what if the numbers we’re looking for are the entries in a vector? The vector is a quantum state; it’s just a list of numbers.

**Seth**: And we can write these linear equations by saying, “Let’s take this list of unknown values and multiply it by a matrix; that gives us the two X plus Y part, or the X minus Y part.” And then you say, “Oh, and it’s equal to a known vector: two X plus Y is equal to seven, X minus Y is equal to two.” So then we have this list of knowns.

**Seth**: Then we have rephrased the problem in the form A X equals B, where X is an unknown vector, B is a known vector, and A is a known matrix. So I said, “Let’s see if we can solve that.” And then, “You know what? I know how to solve that.” It was pretty funny, actually, the way we came up with the solution. Avinatan and Aram were at my house out in Wellesley, and I had just made them a big Italian lunch.

**Seth**: And the rule of coming out and working with me there is that you spend the whole day, you work really intensely for four hours. I make a large and delicious… well, I regard it as delicious, they may not, but a large Italian lunch, you eat it. While you’re eating, you talk about other things, you don’t talk about physics. But while you’re washing up afterwards, you’re allowed to start talking about physics again.

**Seth**: And we were washing up after the lunch, and Aram said, “So, Seth, do you have any problems you’re working on?” I said, “Yes, I am actually working on this problem: A X equals B, and we want to find X equal to A inverse times B.” And I said, “I can do 99% of this problem, because I can solve a problem of this form: I can produce e to the A inverse X, which is just some different version of the same problem.”

**Seth**: I said, “But I can’t get to the last step.” And Aram said, “Oh, but that’s easy.” And he explained how you do it. And that was it.

**Jim**: That’s a wonderful story about collaboration.

**Seth**: Exactly. And then of course it took three months actually to get the paper written, to figure out all the… dot the i’s and cross the t’s and put in all the epsilon-deltas. But then we had the answer. And I think that, first of all, it’s a fun problem because it solves a very fundamental question in linear algebra. That I think is great.

**Seth**: But I think the other thing that was really fun about that work is it showed that it’s not just a few problems in linear algebra that we can solve, like the original one of finding periodicities from Shor’s algorithm. It’s like, “Oh, look, we can solve equations with a whole bunch of unknowns.” And then a bunch of us realized, oh, this means that pretty much all of linear algebra, which is a huge field, right? Pretty much all of that would be much, much faster on a quantum computer if we could build one.

**Jim**: Which leads me to my next question. When I read about HHL, I said, “Hmm, linear algebra, what else is based on linear algebra? Deep neural nets.” Has anyone, to your knowledge, attempted to at least design the algorithms for what neural nets might look like on a quantum computer?

**Seth**: Absolutely. And that’s exactly what we decided to look at. So neural networks are a classical method for simulating on a computer, in a stylized fashion, what happens inside the brain. And famously they’re used in very widespread fashion now. Artificial neural networks are used for machine learning, for finding patterns in data, because it turns out if you make huge honking artificial neural networks and turn them loose on gigantic training sets of data, they do pretty well.

**Seth**: At least for extremely important societal problems like recognizing pictures of kittens on the internet. Which actually, I was a little surprised at that. It was amazing that Google was able to train a deep neural network to recognize that there are lots of pictures of kittens on the internet, to figure out how to do it. But it’s not that strange, because if you pick a random picture from the internet and show it to a computer, the chances that it’s a kitten are already relatively high.

**Seth**: So just guessing “yeah, it’s kittens” is probably going to do pretty well. Yeah. So this is a great question, Jim. And that in fact is one of the primary proposed applications of quantum computers right now: quantum machine learning in general, and things like quantum versions of neural networks in particular. And I think here there are tremendous opportunities, because we have these fantastic classical machine learning algorithms now.

**Seth**: It’s no secret that over the last five to ten years, classical machine learning has been going wild and accomplishing things that were very unexpected. Whereas previously, actually for the previous 20 or 30 years, it hadn’t been working that well, to tell the truth. Somehow this threshold got crossed where the amounts of data got big enough, the computers got more powerful, and the methods got to the point where they could really do some pretty impressive things.

**Seth**: Not just beating the world go champion in go, or doing facial recognition, or pattern recognition, or voice recognition. A lot of things are coming together. That doesn’t mean I’m yet willing to get into a car that’s driven by some big machine learning algorithm and have it drive me around at 70 miles an hour through the streets; there are still some glitches to be worked out.

**Seth**: Yeah, but the quantum version of this is remarkable as well. And I think that suggests, going forward, one thing we’re working a lot on in my group: a kind of partnership between quantum methods and classical methods. Because as you say, the way classical neural networks work is you take a huge vector, a huge long list of numbers. You multiply it by a gigantic matrix called a weight matrix. And then you take the resulting vector that you get after transforming this vector in this fashion, using linear algebra.

**Seth**: And then you apply some kind of nonlinear transformation to it. This nonlinear transformation is the computational analog of what a neuron does. The neuron has all these synapses with signals coming in. Those signals get processed in a way where they get combined, added, and subtracted from each other, in a way which is like multiplying these signals, this list of numbers, by a matrix.

**Seth**: And then the nonlinear part is: okay, do I fire or not? The neuron’s question is like, “Oh, well, yeah, if the processed inputs get above a particular threshold, I’m going to fire. If they don’t, I’m going to hold, keep my powder dry, I’m not going to fire.” So the nonlinear part is this firing part in the neurons; the linear part is this combining of a huge number of incoming messages.
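A toy version of the layer Seth describes, the linear combine followed by a fire-or-not decision, might look like this; the weights, inputs, and step nonlinearity are all illustrative choices:

```python
def dense_layer(x, weights, bias, threshold=0.0):
    """One layer of a toy neural network: the linear part multiplies the
    input vector by a weight matrix; the nonlinear part is each neuron's
    fire/don't-fire decision, here a simple step at `threshold`."""
    outputs = []
    for row, b in zip(weights, bias):
        combined = sum(w * xi for w, xi in zip(row, x)) + b  # linear: W x + b
        outputs.append(1.0 if combined > threshold else 0.0)  # nonlinear: fire?
    return outputs

# Two neurons looking at a three-number input signal.
x = [0.5, -1.0, 2.0]
w = [[1.0, 0.0, 1.0],   # neuron 1 weights
     [0.0, 1.0, 0.0]]   # neuron 2 weights
out = dense_layer(x, w, bias=[0.0, 0.0])  # neuron 1 fires, neuron 2 doesn't
```

Real networks replace the hard step with a smooth nonlinearity so they can be trained by gradient descent, but the combine-then-decide structure is the same.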

**Seth**: So for the quantum version of neural networks, a really great application of quantum mechanics is like, oh, look, quantum systems are great at combining vast numbers of these signals encoded as quantum states. Adding and subtracting them, multiplying them by different numbers and putting them together. Let’s take those and combine them with a classical device that’s going to take the results, say classical measurements made on these states, and then do the nonlinear transformations.

**Seth**: And one of the nice things about this kind of hybrid architecture is you know it’s going to be at least as powerful as either the quantum or the classical on its own, because you could always turn the classical part entirely off and have a fully quantum device, or turn the quantum part entirely off and have a fully classical device.

**Seth**: But by getting them to collaborate with each other, the quantum and the classical, you have a device that is at least as powerful as just classical or just quantum on its own. That’s a very exciting set of applications.

**Jim**: Yeah. It looks like an area that’s going to be well worth following. I’m looking forward to doing that. Let’s now turn to something that our investor listeners, which we have some, would be interested in: the underlying technical substrates for implementing quantum computing. The two leading contenders that seem to be still hanging in there are superconductors and trapped ions. Can you talk a little bit about those two, and then we’ll talk about some of the others?

**Seth**: Yeah, that’s a very good question, and I’m sure there are a lot of questions right now. It’s like there’s a race to build the first scalable quantum computer: who will win? The state of the known universe hangs in the balance, that kind of thing. And I think the first thing to recognize about how you build a quantum computer is that pretty much any quantum mechanical system can be a quantum bit, because the whole “quantum” in quantum mechanics tells you that there’s this kind of discrete nature, this kind of digital nature, to quantum mechanics at bottom.

**Seth**: So recall when I made this first design, back in 1993, for how you might build a quantum computer. I had scraped around for years, like five years or so, trying to figure out how to do this, because there was no technological guidance. Nobody knew how to do it. And then finally I said, “Hold it, hold it. Really, any quantum system will work.” We can have an atom: electron over here, that’s zero; electron over there is one. We can have the spin of the electron: the electron spinning clockwise, we call that zero; spinning counterclockwise, we call that one.

**Seth**: It doesn’t have to be an electron; it can be a proton or a neutron, with its spin clockwise or counterclockwise as zero or one. We could have a photon, which is a particle of light, where its polarization wiggling back and forth is zero, wiggling up and down is one. And that has the advantage that it moves from one place to another at the speed of light, which is kind of useful if you want to communicate quantum information from one place to another.

**Seth**: Every atom has internal atomic states, and these internal atomic states can be controlled and changed by zapping them with light. Any two atoms that interact with each other: their interaction can be harnessed to perform a quantum logic operation. And this, of course, is the basis for ion-trap quantum computing, which was proposed by Ignacio Cirac and Peter Zoller in 1994.

**Seth**: Or maybe it was 1995, I forget; it was a little after Shor’s algorithm. They said, “Look, you can take these ions, which are atoms where an electron has been stripped off.” As in the old joke: two hydrogen atoms stagger out of a bar, and one says, “Oh, shoot, I lost an electron.” And the other one says, “Are you sure?” And the first one says, “Yeah, I’m positive.” Sorry.

**Jim**: Hahahaha.

**Seth**: Terrible joke.

**Jim**: That’s terrible, that’s a groaner of the first water.

**Seth**: Anyway, you strip electrons off the atoms; they’re now ions, positively charged, so they repel each other. And then you trap them in a trap that has both static and dynamic electromagnetic fields, and you can line them up in the trap. They interact with each other because they repel each other through their positive charge, and they have internal states that can store quantum information for a very long period of time, so they make excellent quantum bits, or qubits.

**Seth**: And then you can take their interactions, the way they interact with each other, and by tinkering with them in the right way, get these interactions to perform quantum logic operations on several quantum bits at once. It’s a very nice substrate or design for making a quantum computer. The atoms are extremely coherent; they have very long coherence times, on the order of hours, right? And so coherence is not a problem.

**Seth**: The laser techniques for zapping the atoms, for getting them to talk with each other, those are very well developed. And so you have very high-accuracy gates with very low error rates. And there, really, the main problem is just the technical limitation of putting together ion traps. Once you’ve got about a hundred ions in a single ion trap, that’s about all you’re going to get into one trap; beyond that, things start to fall apart.

**Seth**: So then you have the problem that if you want to build a scalable quantum computer, to make it larger and larger, you have to put more and more ions together. And indeed you need a way of having different ion traps talk with each other. And with current technologies, these are slow processes of quantum communication that take quite a while to move quantum bits from one ion trap to another. That, I would say, is probably the biggest bottleneck at the moment for making large-scale computers using ion traps, because the other features are great. The coherence is amazing, impeccable, basically eternity as far as we’re concerned.

**Seth**: We’re not worried about quantum coherence in the ion traps; the decoherence time is very long. The control parameters, in terms of performing low-error-rate gates, still need a lot of work. There, remember, the benchmark you really want to hit is 99.99% accuracy for performing individual gates. They’re still a nine or two away from that, a factor of ten or more in error rate, but that’s in good shape.
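The "nines" bookkeeping Seth uses can be made concrete in a couple of lines; the fidelity numbers here are illustrative placeholders, not measured values for any device:

```python
import math

def nines(fidelity):
    """Count the 'nines' in a gate fidelity: 0.999 has three nines."""
    return -math.log10(1 - fidelity)

# Illustrative placeholder values, not measured numbers for any device:
current, target = 0.999, 0.9999
# Being "a nine away" means the error rate must shrink by a factor of ten.
factor = (1 - current) / (1 - target)
```

Each additional nine of fidelity is a tenfold reduction in error rate, which is why Seth can equate "a nine or two away" with "a factor of ten or more."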

**Seth**: There, the bottleneck is communication time, but this has been a great technology for decades because of its long coherence and good control characteristics. Ion traps were considered the front-running technology for building quantum computers ever since they were proposed by Cirac and Zoller and then implemented. And this is one of the achievements for which Wineland got his Nobel Prize in physics.

**Seth**: So that’s a great technology, one of the two front-running technologies. So that was the early nineties, or ’94, ’95. In the mid-nineties, a bunch of people started talking about the possibility of using superconducting systems for performing quantum computation. As I mentioned, my introduction to this came from talking with Hans Mooij, M-O-O-I-J, a hard name to pronounce, a very distinguished professor of physics at the Technical University of Delft, kind of the MIT of Holland. Except in Holland, they believe it’s better than MIT.

**Seth**: So anyway, I’m not sure I believe that, but it’s a great place. And Hans had the idea that you could use something called macroscopic quantum coherence, which had been talked about a lot by people like Tony Leggett, who got his Nobel Prize for elucidating how superconductivity worked. And other people, by the way, like Jaw-Shen Tsai and Yasunobu Nakamura, were working on the same idea at the same time.

**Seth**: So the idea of macroscopic quantum coherence is the following. A superconductor is a state of matter, typically something like a metal, like aluminum. And when you cool it down below a certain temperature, you find, mysteriously and quite remarkably, that the resistance to the flow of current goes to zero. This is because, in order to get resistance, you need the thermal excitations of the system to be sufficiently great that they will knock the electrons out of the state that they’re in.

**Seth**: But there’s just not enough energy in the thermal fluctuations to do that. So the electrons, once they start going, just keep on going forever. And so you can build a superconducting loop where one state of the loop is electrons going around the loop forever in a counterclockwise direction, and another state is electrons going around the loop forever in a clockwise direction. These are perfectly reasonable states of a superconducting system.

**Seth**: Now, if you have macroscopic quantum coherence, then what happens is this: with this macroscopic state of the electrons, and we’re talking about trillions of electrons here, trillions going around the loop clockwise forever, or trillions going around counterclockwise forever, you can create a quantum state which has trillions of electrons going around the loop in a clockwise direction and in a counterclockwise direction at the same time. Just in the same way you can have a single electron that’s over here and over there at the same time, in some funky quantum mechanical sense that nobody understands.

**Seth**: So this macroscopic quantum coherence is really weird, because it’s like, wow, a trillion electrons going clockwise around the loop and going counterclockwise around the loop at the same time. It just doesn’t make sense. And it doesn’t make sense, but that’s the way it is. And in the late 1990s, Yasunobu Nakamura and Jaw-Shen Tsai at NEC, and then subsequently Hans Mooij at Delft, in an experiment that I was honored to participate in by helping with the theory, managed to demonstrate this macroscopic quantum coherence and showed the existence of the first superconducting qubits.

**Seth**: So that was 1999, 2000, and an amazing, wonderful demonstration, brilliant experiments. And the only issue for using these for quantum computing is that the qubits sucked. That’s a technical term, by the way.

**Jim**: That’s the acronym, S-U-C-K, right?

**Seth**: Right, exactly. You could get these supercurrents going in this quantum superposition, and they’d go around the loop a few hundred times, and that was it. Then the whole thing would fall apart. They’d decohere; the coherence times were very short. We had no idea why this was going on. A whole bunch of people tried to make them better. A little bit of progress happened, some progress in design happened. And we said, “Oh, let’s make them smaller. Let’s try different techniques for etching out the superconducting circuit. Let’s bring out all the possible methods we have for doing it.”

**Seth**: And then around, I don’t know, the mid-2000s, maybe 2007, 2008, Rob Schoellkopf, who was at Yale, said, you know what, let’s try making the qubits bigger instead of smaller. Let’s take these superconducting qubits, which are part of a superconducting circuit that has these different features in it, and since we have a lot of flexibility in how we design these things, let’s try making them bigger.

**Seth**: And you know what? This worked like a charm. It turned out that making them ten times as big in spatial extent, which made them a hundred to a thousand times as big in terms of the amount of material in them, meant that they became much more coherent. And it was an amazing insight by Rob and his group there, including people like Michel Devoret. Which was: what’s the problem? The problem is that in the metals and the insulators that make up these superconducting qubits, even though these are very, very refined and we’re doing the best we can, there are these little tiny defects in places that are out of our control.

**Seth**: And these defects are what are causing this decoherence and causing things to get messed up. Now, if you make the system bigger, you think, oh, but now you have more defects, and that’s true. But what also happens is that the electrons doing the superconducting have waves that get spread out over a much larger volume. And so the electron experiences each defect to a much smaller degree. And when you do a little bit of the math, you find out, okay, the number of defects grows linearly with the volume, or the size, but the effect of each defect on the electron goes down as something like one over the cube of the size.

**Seth**: Or the square of the size, depending on the particular architecture and geometry. And so what happens is the coherence time just shoots up. And along with this innovation, and other innovations that were going on in all the other kinds of superconducting systems, the coherence times of these superconducting circuits went from around a hundred nanoseconds, a tenth of a microsecond, up to a hundred microseconds, even a thousand microseconds. They went up by a factor of a thousand to ten thousand, even a hundred thousand, over the course of around a decade or less.
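As a back-of-the-envelope sketch of that argument, with toy exponents (Seth notes the real power depends on architecture and geometry): if the defect count scales up with volume while each defect's coupling to the spread-out electron wave scales down with it, the net noise can still drop sharply as the qubit grows.

```python
# Toy model of the bigger-qubit insight; the exponents are illustrative
# assumptions, not a real device model.
def relative_noise(scale, defect_exp=3, coupling_exp=-3):
    """Noise power ~ (number of defects) x (coupling per defect)^2.
    `scale` is the linear size of the qubit relative to a baseline."""
    n_defects = scale ** defect_exp    # defects grow with volume
    coupling = scale ** coupling_exp   # each defect couples more weakly
    return n_defects * coupling ** 2

# Making the qubit 10x larger in linear size, under these assumptions:
improvement = relative_noise(1) / relative_noise(10)  # coherence gain ~ 1000x
```

Under these assumed exponents a tenfold increase in size buys roughly a thousandfold drop in noise, in the ballpark of the coherence-time improvement described above.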

**Seth**: And that meant that all of a sudden these superconducting qubits, which previously had been limited by their coherence times, were no longer limited in that way. An amazing, wonderful technological event. And the thing about superconducting qubits is that you can etch them on chips with technologies similar to those used to etch semiconductor chips.

**Seth**: So here you’re etching things like aluminum and aluminum oxide onto the chips, along with other kinds of transistorized controls, so you have a lot more flexibility in how you put the qubits together than you do with ion traps. So now we have these two technologies.

**Seth**: And at Google, Hartmut Neven had this insight of, “Hey, we can do this, we can ramp this up. Let’s go with the superconducting technology. The time is right for that. We can build larger superconducting chips.” And he enlisted John Martinis, who had a strong vision for how to make this happen, and they did. They made it happen. Very impressive. So the devices are remarkable. And ion traps haven’t gone away. There was an amazing start-up founded by Chris Monroe called IonQ, and there’s a lot of academic research on this; IonQ also has around a hundred qubits in their system. And the thing is, because of what I described before, any quantum system is a reasonable candidate for doing quantum computation. Superconducting systems are the current front-runners among the technologies, but coming up behind them are a lot of other technologies, which have all kinds of potential advantages and are not yet as well developed as the superconducting systems.

**Seth**: Or as ion traps, merely because they haven’t had as much investment. And these include things like other atom and optical technologies: all-optical technologies, companies like Xanadu, or Jeremy O’Brien’s company. Sorry, Jeremy, I forget the name of your company right now, but it’s a great company. Companies using atoms and optics, entangled in different ways. So there are a lot of other technologies, currently in the form of things like startups, that might very well have breakthroughs and leap to the front of the pack. It’s a little too early to decide, in my opinion at any rate, whether it’s going to be superconducting systems, ion traps, or one of these other technologies, because they all have their advantages.

**Jim**: Now, what I stumbled across when I was doing my research for the show was something with the mouthful of a name: non-abelian anyons. Apparently that’s Microsoft’s bet. What’s that?

**Seth**: Oh yeah. This is a wonderful… I mean, these non-abelian anyons, as the name suggests, it’s like, I have no idea what the hell they are, but it sounds super cool.

**Jim**: Exactly. That was my reaction. What the fuck? I think I know what non means. But I’m not sure what abelian anyons might be.

**Seth**: Yeah. This is a very beautiful theoretical idea that was proposed by the brilliant scientist Alexei Kitaev back in the nineteen-nineties. And he was using what are called topological effects. Topological effects are things based on knot theory, right? A knot is, well, what is a knot? A knot is an arrangement of string: you take a rope or a string, and you interweave it with itself. And once it’s interwoven in a particular way, the exact way the string is lying on top of itself doesn’t really matter. What matters is how it’s interwoven. So if I tighten it up, it’s still the same knot. Or I loosen it a little bit; it’s still the same knot. The knottedness comes from the topology of how it’s knotted together.

**Seth**: And Kitaev came up with this amazing idea that there are topological excitations at microscopic levels. You have these systems where the way the system behaves depends on its topology, kind of the way it’s knotted together, rather than on the exact microscopic configuration of the system, like exactly where each electron is. And so there are these topological excitations with the feature that how they behave and what they do depends on how you knot them together.

**Seth**: And the way you knot them together is that you actually move them around each other. I pick up one system and take it in a circuit, a little excursion, all the way around another system. And the way this is described, if you imagine unraveling it in time, it’s like a braid. The history of one system is a string: here’s one system that’s sitting still, and it’s just like a string that’s pulled tight. And then the history of the other system, the one I’m moving around, is like another string. I take the front of that string and move it all the way around the other system. So I’m braiding the history of one system around the other. Does that make sense? It’s hard. This would be easier if I had some props with me.

**Jim**: Yeah, it’s hard to get one’s head around, but that’s interesting that it’s essentially quantum manipulation of topology.

**Seth**: Right. So just by taking one system and moving it around the other, then, if you unravel this in time, it looks like a braid. A braid is just a series of ropes, or strands of hair, where you pick up one strand and braid it around the others.

**Seth**: Properties of the dynamics of the system end up being a property of this braiding. So the topological part is the braiding part. And then there’s a kind of trivial way of braiding, which is called abelian braiding, where if you braid one system and then braid another, it doesn’t matter what order you do the braiding in. And that is not enough for doing quantum computation. The non-abelian part says that what happens to the quantum systems depends on the order in which you braid one around another. And the “anyon” part just means that you can do anything you want with these topological systems, including quantum computation. So quantum computation with non-abelian anyons comes from this braiding of topological quantum systems around each other. I know, I’m just doing my best, Jim. I’m just doing my best. I can do no more.
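The abelian versus non-abelian distinction is just whether the order of operations matters, which a pair of matrices can illustrate. These are stand-in unitaries chosen for the sketch, not actual anyon braid matrices:

```python
import math

def matmul(a, b):
    """Multiply two 2x2 matrices: apply b first, then a."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = 1 / math.sqrt(2)
# Two stand-in "braid moves" acting on a two-state system
# (illustrative unitaries, not real anyon braid matrices).
braid_a = [[s, s], [s, -s]]
braid_b = [[1, 0], [0, -1]]
# Non-abelian: doing A-then-B differs from B-then-A.
order_matters = matmul(braid_a, braid_b) != matmul(braid_b, braid_a)

# By contrast, moves that merely rescale each state commute: abelian.
phase_a = [[2, 0], [0, 3]]
phase_b = [[5, 0], [0, 7]]
commutes = matmul(phase_a, phase_b) == matmul(phase_b, phase_a)
```

In the abelian case the history of braiding collapses to a mere count of moves; in the non-abelian case the full ordered sequence of braids is remembered by the state, which is what makes it rich enough to compute with.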

**Jim**: Let’s hop up to how it does on our three factors: number of qubits, decoherence time, and error rates.

**Seth**: So, great. What is good about these topological systems is the fact that the way they behave is insensitive to the microscopic state of the system. Remember, with a knot, its knottedness doesn’t depend on the exact form the knot takes, just like how tightly the strings are pulled doesn’t matter. I can have a loose knot and move a little bit of string from one place to another, poke around in the knot, and it’s still the same knot, even if I move everything around and pick it up and shake it.

**Jim**: Yeah. That’s topology. Right?

**Seth**: Look, topology means that what it’s doing is insensitive to the details of the geometry, right? So the great thing about these topological computational methods, and this was pointed out by Kitaev in the nineteen-nineties, is that the computation they’re performing is insensitive to the exact way you braid things around each other.

**Seth**: You know, if you’re a little sloppy about how you braid the things around, who cares? What happens in terms of the computation is exactly the same. So the great thing about this from a theoretical perspective is that these topological methods of computation, which are the ones being pursued by Microsoft, are intrinsically insensitive to how you do this. They’ve got intrinsic error correction built in already. Whereas in, if I may call it, a conventional quantum computer, like an ion trap or superconducting system, you have to physically encode the error correction by building a whole bunch more qubits and doing a whole bunch more error-correcting operations. And it’s a mess. So that’s the real advantage of these topological methods. On the error-rate front, if you can make it happen, you’re in great shape.

**Seth**: And Microsoft decided a long time ago, back in the late nineties, to pursue this, due to the brilliance of Michael Freedman, who is a Fields Medalist. You know, the mathematical equivalent of the Nobel Prize, except even more prestigious. A Fields Medalist in all things topology. Back in the eighties he was already saying, look, maybe we could use these topological methods with quantum systems to do computation. And then in the late nineties, he started Station Q at Microsoft, out in Santa Barbara.

**Seth**: It’s like, let’s make this happen. Microsoft’s been working on it ever since. A beautiful set of ideas, wonderful experiments with these topological excitations, which previously didn’t exist; they now actually exist. The problem is they exist at the level of one qubit, and they haven’t yet been able to do a single two-qubit operation. So the theory side is fantastic. The basic physics side is amazing. The experiments done by people like Charlie Marcus or [inaudible 00:08:37] are beautiful, wonderful experiments, Nobel Prize-caliber experiments. But in terms of building quantum computers, the number of qubits is very small. In fact, they still don’t really have two qubits to rub together.

**Jim**: That’s not much. So on our three-part model, they’re good on decoherence, they’re good on error rates, but they can’t, so far, figure out how to even entangle two qubits.

**Seth**: Well, yeah. So that’s a problem. Now, it’s possible that they may have some breakthrough in this, but it turns out that with this particular method, maybe because it’s so beautiful and fancy and accomplishes all these other things, you don’t get something for nothing. By using these topological methods, you have to struggle so hard to get these beautiful, resilient topological quantum excitations that on the rest of the features, like scaling it up, you take a big hit.

**Jim**: Yeah, no free lunch, right. Now let’s jump out of the fairly straightforward world of quantum computing into the wild blue yonder. Late in your book, and actually in the beginning too, here and there, but especially late in the book, you start making some strong arguments about the universe as a quantum computer. And some amazing claims about how thinking about the universe as a quantum computer solves minor little problems like quantum gravity, which is the biggest question in physics today. You know, how the hell do we reconcile the macro sphere of general relativity with quantum mechanics? So why don’t you go on a rap here for a few minutes about the universe as a quantum computer and what that insight provides us in terms of trying to figure out how the universe is?

**Seth**: Well, Jim, this is a story that’s been very important for me personally, because the question of quantum information and quantum gravity is how I got into the field in the first place. In 1983 and ’84, I was a graduate student at Cambridge on a Marshall scholarship, doing a master’s in philosophy of science. And I took Stephen Hawking’s graduate seminar on quantum gravity. At that time, I said, “This stuff that I’m doing about information and quantum mechanics really fits in well with this quantum gravity stuff.” And I’ve been working on it ever since. Not very successfully, may I say. And actually, nobody else was doing this for a long time. And now, suddenly, these last few years, everybody’s decided that quantum information and quantum gravity go together.

**Seth**: So that’s nice. But in terms of the universe as a quantum computer, the argument there, it’s not even an argument. I know it sounds completely overblown. Like, oh, the universe is a big quantum computer. It’s like, “Oh, what do you study? Quantum computation? Everything’s a quantum computer. When you’ve got a hammer, everything looks like a nail.” But that’s not what’s going on here. It’s really related to what I was describing earlier about how you build quantum computers. In figuring out how to build quantum computers in the first place, it was like, “Oh, I wonder what would be a good substrate for building a quantum computer.” And then I was like, “Huh, it could be anything.” Because everything out there in the universe, every atom, every elementary particle, every electron, every quark, every photon, carries with it quantum information. When different atoms, electrons, photons, and quarks interact with each other, that information is changed and transformed.

**Seth**: And so every individual piece of the universe carries and registers information in a quantum mechanical fashion. And then when two pieces of the universe interact with each other, when two elementary particles bounce off each other, the information they carry is processed and transformed. So when we build a quantum computer, all we’re doing is accepting the reality that everything out there in the universe is already registering quantum information and processing it in a systematic fashion. So to build a quantum computer, what we need to do is hack into this ongoing computation and see if we can’t, by tickling things with lasers or masers, or with electric potentials and stuff, guide this ongoing computation toward doing things we’d like it to do, like quantum simulation or factoring.

**Seth**: So my realization that the universe, the whole thing, was really a quantum computer came from this very practical engineering challenge. Like, hey, how do we build a quantum computer? What can we use? Oh, we can use anything we want, because these things are already processing information in a quantum mechanical fashion. These electrons are already effectively computing. And what we need to do is not make them compute or exploit them to make them compute. We need to recognize that they’re already computing and processing information, and then intervene in this computation they’re already performing to program them to do something different, something we’d like them to do. Hence the title of my book, Programming the Universe. When we are building a quantum computer, we’re not actually doing something that the universe wasn’t doing already; we’re simply intervening in part of the universe’s programming to perform a computation we’d like it to perform.

**Seth**: And if you ask electrons and atoms and electrical circuits in the right way, very nicely, speaking their own language, tickle them with light in the right fashion, by gum they’re going to do that for you. And that’s what’s happening with quantum computation. So when you actually think of the universe as a quantum computer, I would say it’s not a metaphor. It’s just a fact, right? Every piece of the universe is registering information, and when pieces of the universe interact, they’re processing information. Information processing is computation by definition, and the universe is processing information and computing in a quantum mechanical fashion. It’s just a fact, it’s not a metaphor.

**Jim**: But some of the implications that you take from that are quite astounding.

**Seth**: Well, I mean, you take this fact, and I’m going to just call it a fact, because it’s a fact, and you’re going to say, “Well, what are the implications of this?”

**Seth**: Well, one implication, as you said, is maybe we could understand this famous hundred-year-old unsolved question of how you make a quantum theory of gravity using ideas from quantum information processing. That would be great. You know, I’ve been working on this since the early eighties, making progress slowly, and not even steadily, sometimes slipping backwards. And more recently, people like Lenny Susskind in the last few years have advocated doing this. And now there are a lot of people working on this problem. Maybe we’ll make progress. Quantum mechanics and gravity is more than a hundred-year-old question, and indeed a few years ago, at the hundredth anniversary of Einstein’s discovery of the theory of general relativity, some friends of mine who do quantum gravity and I said, “Let’s have a conference entitled A Hundred Years of Failing to Quantize Gravity.”

**Seth**: Because let’s face it, we still don’t know how to do it. And if someone tells you they know how to do it, it merely means that they are string theorists. But I think this actually could be helpful for that. That would be great. And I think there’s one aspect of the fact that the universe is, in fact, a big quantum computer that is really amazing. There’s an implication of it that is remarkable, and that does explain something very beautiful and profound about the universe, which is why the universe is so complex.

**Jim**: Ah, I was going to ask you that was going to be my next topic. Let’s just roll with that, but this will be our last topic and then we will wrap up.

**Seth**: Good. So let’s just move into that, because this is a very wonderful question. And people have been asking this question for as long as there have been people. You stare up at the night sky: “What’s going on out there? What are all these stars?” You look at the world around you: “What are all these living things? My God, look at human beings. What the heck is going on here?” It’s totally out of control with complexity. Where did that come from? A very mysterious question that’s very hard to answer. And you know, in some sense this question is at the basis for organized religion, disorganized religion, personal religion, et cetera. And scientists have been trying to answer this question for hundreds of years as well, also failing to do so. Actually, from the point of view of physics, it’s particularly mysterious, because one of the features of physics is that when you look at things at their most fundamental level, they look simple.

**Seth**: You know, the standard model of elementary particles fits on a t-shirt. It’s simple. How do you have these simple laws and get all this complex stuff coming out of them? Well, the fact that the universe is performing a quantum computation actually provides an explanation for this. Because a computer, at bottom, is in fact a simple thing; the mechanics of a computer are simple. The computer has information in it, stored in bits. In a quantum computer, it’s stored in quantum bits, or qubits. That’s fine. That is a very simple thing: something that has two possible states, which we just happen to call zeros and ones, or electron here and electron there. And then the elementary logical operations out of which computations are built are also very simple. It’s like, “Oh, we just flip the bit, we move the electron from here to there.”

**Seth**: Or we flipped this bit. If this other bit is one, oh, we only flipped a bit if the other bit is one. That’s called a controlled knot operation. It’s a very simple logical operation. And if you look at the most complicated computations, you know, that involve kajillions of bit, technical term, and performing cajillions of logical operations, that’s also a technical term. All it is, is a lot of bits and flipping these bits in a systematic fashion. Well, that’s a computation and out of a computation, even though the actual dynamics and underlying physical reality of the computation is simple. Just bits, bits interacting with each other, bits flipping in a systematic way. You can get very, very, very, very complicated behavior out of a computation. So this is great because this tells us how we can get a universe that has very simple laws, simple ways of representing information, simple quantum mechanical rules for flipping bits, it can start from a very simple state at the time of the big bang.

**Seth**: And yet it can spontaneously generate any kind of complicated behavior that you would like to think about. And for this, we just need a little extra bit of mathematics, and the mathematics is very simple. It says: we take something that’s capable of computation, we give it a random program, so we give it random bits to start out with, and we look to see what happens. And there’s a beautiful aspect of mathematics called algorithmic complexity theory that says this computer programmed at random has a finite chance of producing any complex system you can imagine. Not only any one that already exists, but any complex system that can exist, any one that you can even imagine. And that’s a very beautiful piece of math. And when we apply it to the universe as a whole, we say, “Look, the universe is a quantum computer. It gets programmed at random by quantum randomness.” But when we take a computer and give it a random program, what do we expect to happen? We expect amazing, beautiful, wonderful, complex things to happen. And that’s what happened.
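A toy, classical illustration of “simple computer plus random program” (an editorial sketch, not Seth’s math): treat eight random bits as the program of an elementary cellular automaton and run it from a trivial initial state. Many randomly drawn rules produce intricate, complex-looking patterns from almost nothing:

```python
import random

random.seed(30)  # arbitrary seed so the run is repeatable

# The "random program": 8 bits defining the rule table of an
# elementary cellular automaton (one bit per 3-cell neighborhood).
rule_bits = [random.randint(0, 1) for _ in range(8)]

def step(cells):
    """Apply the rule to every cell, with wrap-around neighbors."""
    n = len(cells)
    return [rule_bits[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
            for i in range(n)]

# Near-uniform initial condition: a single live cell.
cells = [0] * 31
cells[15] = 1

for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The dynamics here are as simple as they come, a fixed lookup table flipping bits, yet a randomly chosen table has a real chance of generating structure; that is the flavor of the algorithmic-probability argument, which makes it precise for universal computers.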

**Jim**: Yeah. So the quantum randomness, or pseudo-randomness, or whatever it is, was the mechanism by which complexity was bootstrapped from a relatively uniform early universe. Is that it in a nutshell?

**Seth**: Exactly. I mean, there’s lots of very cool stuff in this. It has to do with gravity, and actually quantum gravity. It’s like, “Ooh, the universe was very uniform to begin with, in a state of maximum entropy. How did it get so nonuniform?” Turns out when you have gravity, uniformity is bad, clumpiness is good. Matter likes to clump together to form things like stars and planets and galaxies. And then once things start to clump, it’s like, “Oh.” That spontaneously generates the conditions that allow computation to bootstrap itself into ever more complicated forms, by generating ever more complicated patterns in energy, so-called free energy.

**Seth**: So, first you get planets stars, galaxies. Then you get on planets, you get more and more complicated chemical clump compounds that can recombine in every more complicated chemical fashions for more and more complicated chemicals. So, “Oh my God,” that a certain point it’s like, “Oh, this group of chemicals could be RNA or something like that.” They figured out a way to reproduce their state, but with variation. Once you have things that we produce with variation, then that’s just life, man. Then life is off and running. And so there’s a very natural tendency at each stage along the game for the information processing power of the previous generation, if you like, to get bootstrapped into an ever more complicated form of information processing in the next generation.

**Jim**: Though it is difficult to maintain that, right? The famous error catastrophe in evolution, right? It’s not easy to maintain that complexity from generation to generation. And that’s what’s so interesting about life. I had an amazing conversation with Stuart Kauffman for hours, and we both ended up at the same point: how did pre-organic life manage to get over the error catastrophe? Because in the RNA world alone, the error rate is too high. When you get to DNA, plus all the replication machinery, the error rate is low enough to defeat the error catastrophe. How the hell did we get over that bridge without the error correction? And we both said, “We don’t know.” It may actually have something to do with the Fermi Paradox, if it’s so goddamn hard that it only happened once. But anyway, I would put that caveat on your story, which is, yes, quantum randomness creates complexity, but often the complexity breaks down again very quickly.

**Seth**: Absolutely. And in fact, one of the features here is exactly [inaudible 01:47:04] the same instability that makes matter and energy clump together to form stars and galaxies. You need instability in order to go from something that’s homogeneous and uniform to something that’s nonhomogeneous and nonuniform. The same instability says, “Oh, simple groups of chemicals are unstable to the formation of more complex groups of chemicals.” And then, “Oh, complex groups of chemicals are unstable to the formation of autocatalytic sets,” in, say, something like the RNA world, if that’s how the world came together. But then the RNA world is unstable to things that have DNA and error correction, that can be more meticulous and create even more complex forms. So I don’t claim to understand this, but you know, something like that definitely happened.

**Jim**: I think we’re going to wrap it up here. We’ve covered an amazing amount of ground. People who have hung with us to the end have gone on an amazing journey. So I want to thank you very much, Seth, for the tremendous episode.

**Seth**: Thank you so much, Jim. Next time, Fermi Paradox.

Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.