Transcript of Episode 21 – Roman Yampolskiy on the Outer Limits of AI

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Roman Yampolskiy. Please check with us before using any quotations from this transcript. Thank you.

Jim: Howdy. This is Jim Rutt and this is the Jim Rutt Show.

Jim: Listeners have asked us to provide pointers to some of the resources we talk about on the show. We now have links to books and articles referenced in recent podcasts that are available on our website. We also offer full transcripts. Go to jimruttshow.com. That’s jimruttshow.com.

Jim: Today’s guest is Roman Yampolskiy. He is a professor at the Speed School of Engineering at the University of Louisville.

Roman: Hey, Jim. Thanks for inviting me.

Jim: Great to have you on. Roman is the author of several books and many papers across areas including AI safety, artificial intelligence, behavioral biometrics, cybersecurity, digital forensics, games, genetic algorithms, pattern recognition, and actually a bunch more. He’s proposed a new field of study that he calls intellectology, the study of intelligence very broadly defined. For example, he places artificial intelligence as a domain within intellectology. In fact, let’s start generally in that area. The first thing I saw of yours when I was looking around for topics to discuss was your paper, The Universe of Minds, which I thought was very interesting. Could you talk a little bit about both intellectology and how that fits into the concept of The Universe of Minds?

Roman: Sure. As you mentioned, I have many interests. I stick my nose in many areas, but they actually have a common thread: I’m looking at intelligence. I’m looking at how to design it, how to detect it, how to measure it, how to control it, anything and everything intelligence. Many different fields contribute to that, but they don’t have a unifying framework. That’s where intellectology comes in. The space of minds is a particular sub-topic in that, where we’re trying to understand, well, what types of minds are really possible.

Roman: We know human minds and they’re somewhat diverse. We have eight billion different humans running around. But can we go beyond that? Okay, we can add animals, higher-level animals, primates, fish; they have somewhat different minds. We can start thinking about aliens. Well, if aliens exist, would they have different minds as well, different types of wants, wishes, desires, properties? And you can go on with that. If you formalize the idea of a mind as some sort of software simulating this physical system, you quickly arrive at the possibility of essentially equating all that software to an infinite set of integers. You just map it onto the integers. Each integer represents some sort of software product. You can show that there is no limit to how many different minds you can have, as long as your definition of different minds includes some specific level of difference. Let’s say a single-bit difference is sufficient to distinguish two minds.
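A minimal sketch of the mapping Roman describes (an illustration of the idea, not code from The Universe of Minds): treat each candidate mind as a program, encode the program’s text as bytes, and read those bytes as an integer. Distinct programs get distinct integers, the mapping is invertible, and the integers are unbounded, which gives the countably infinite enumeration of minds he points at, where even a single-bit change yields a “different” mind under that criterion.

```python
# Illustrative only: encode "minds" as programs, programs as byte strings,
# byte strings as integers. Any such invertible encoding makes the space of
# describable minds countably infinite.

def program_to_integer(program_source: str) -> int:
    """Map a program's source text to a unique integer via its bytes."""
    data = program_source.encode("utf-8")
    # Prefix a 0x01 byte so leading zero bytes are not lost in the integer.
    return int.from_bytes(b"\x01" + data, byteorder="big")

def integer_to_program(n: int) -> str:
    """Invert the mapping: recover the source text from the integer."""
    data = n.to_bytes((n.bit_length() + 7) // 8, byteorder="big")
    return data[1:].decode("utf-8")

mind_a = "def respond(x): return x + 1"
mind_b = "def respond(x): return x + 2"   # a tiny (single-character) difference

a, b = program_to_integer(mind_a), program_to_integer(mind_b)
print(a != b)                            # True: distinct programs, distinct integers
print(integer_to_program(a) == mind_a)   # True: the encoding is invertible
```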

Jim: Yep. I think from the formal mathematical perspective, that’s interesting. But perhaps even more useful is to get people to open up their thinking about what constitutes a mind. I know when I’m talking to people about AI and particularly AGI, I see an awful lot of thinking channelized into thinking about minds not very different than our own. Maybe a little bigger, maybe a little faster, maybe a little smarter. But the thinking that I’ve got on the topic, at least, leads me to believe that there’s a vast space of minds available that may be very different than ours. That will have some serious implications around things like AI safety.

Roman: Of course. We can start thinking about the differences in terms of how we get there. Are we simulating human neurons? Are we evolving that software? Are we designing it from scratch? All of those lead to very different architectures, very different systems. That’s just, again, tiny drop in this infinite universe of possibilities.

Jim: Yeah, I think it’s really important for people working in this field, or even in trying to understand this field from a policy perspective to understand that when AGIs finally do arrive, they could well be very, very alien as compared to human brains. There’s no reason at all they have to be very much like human brains. It may turn out that’s the easiest way to start. But it may not be the easiest way to do it. We will soon find out.

Roman: It seems like if we just do it without care, without taking safety into account, it’s easier to create any general intelligence than to create a specific one with human friendly properties. Statistically speaking, we’re more likely to get a random possibly malevolent one if we just do it without concern.

Jim: Yeah, we’ll come back to that in a bit when we talk about AI, AGI, and AI safety. But I happened to see that you wrote a paper about one of my favorite topics, Boltzmann Brains. I want to put a warning in at this point: if you happen to be tripping on LSD at the moment, you might want to stop listening. I always warn people when I tell them about Boltzmann Brains. Never think about Boltzmann Brains while tripping. A very bad idea. It’s one of the most curious, most mind-twisting ideas out there. Why don’t you tell our audience, and remember, we have an audience of smart people, but not necessarily knowledgeable about the domain: what is a Boltzmann brain?

Roman: If you think about it, the universe is infinite in all directions. That’s a lot of computational resources. There are some physical theories which tell us that matter comes into existence through quantum fluctuations at random. Sometimes it’s photons, sometimes it’s a molecule, but given an infinite amount of resources, periodically something more complex will pop out. Maybe, just maybe, a brain with memories, with capabilities, will be such a fluctuation. These are called Boltzmann brains. There are some interesting philosophical consequences of it. For one, maybe you were one of those fluctuations and you just have all these memories, and the universe around you, as a side effect of being one such absolutely random, meaningless Boltzmann brain.

Jim: Yeah. The other thing about them, if you assume, as you laid out nicely, the required assumptions, which I’ll get back to later, it’s essentially guaranteed that, for instance, a Boltzmann brain could come into existence that was powerful enough to simulate our whole visible universe. We know it will happen if your assumptions are correct: that we have infinity of time and space and matter, and the matter behaves the way we believe it does under quantum mechanics. That’s what I think is one of the weirdest and strangest things about this idea. However, that’s where I tend to look at this concept and say, as fascinating as it is, “It may not be real at all.” The reason is that those assumptions may not be real, or there may be a limiting case that makes them unable to produce high-powered Boltzmann brains. Infinity is the key.

Jim: If the universe is finite, I should say, even if it’s very, very, very large, the probability of a high-powered Boltzmann brain randomly fluctuating into existence is still astronomically small, as we know. Would you agree with that?
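For a rough sense of the numbers behind Jim’s claim (order-of-magnitude illustrations, not figures from the episode): the standard estimate for a fluctuation that locally reduces entropy by ΔS, and the expected number of such fluctuations over a finite number of independent trials, are

```latex
P_{\text{fluctuation}} \;\sim\; e^{-\Delta S / k_B},
\qquad
\mathbb{E}[\#\,\text{Boltzmann brains}] \;\approx\; N_{\text{trials}} \cdot P_{\text{fluctuation}}.
```

Commonly quoted estimates in the cosmology literature put ΔS/k_B for anything brain-like at something on the order of 10^69 (the exact figure depends on assumptions), while the number of elementary events available to our observable universe is often bounded at roughly 10^120. For any finite N_trials of that scale the expected count is effectively zero; only letting N_trials go to infinity rescues the Boltzmann brain.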

Roman: Right. If the resources are not sufficient, obviously, it’s not going to happen. But you can think about some simplifying assumptions. Maybe it’s just an instance of time, maybe a nanosecond of experience, which pops in, not a whole long life. You just perceive it as such.

Jim: That’s certainly interesting. That would open up the space a bit, say, let’s assume it’s very short. I think the other thing I’ve thought about, I’d love to get your thought on this, is even if the universe were infinite, if a sub-region of the universe was limited by its causality, let’s call it the light cone, then probably, we can rule out a Boltzmann brain that has any impact on us? Because if we don’t have causality across some unit of space time, then that is the chunk of space time that we have to use for reckoning whether a Boltzmann brain has any relevance to us. Does that seem reasonable?

Roman: It seems reasonable, but you have to place yourself in that part of the universe. If you’re already a Boltzmann brain, which is just hallucinating things, you just perceive it as such, a hallucination. It’s not necessarily true that you are part of that restricted part of the universe.

Jim: It’s true, it could be a simulation. Right? That’s what I alluded to earlier. It is certainly possible that if there were some higher-level domain that was infinite in time, space, and causal linkage, and had matter that sort of acted like our matter with respect to quantum behavior, then we could say for sure there would be a Boltzmann brain of sufficient power to have simulated our whole visible universe at arbitrary levels of precision. We will never know the answer to that, probably.

Roman: You can also take it to the next level. It doesn’t have to be just a brain. It could be a whole Boltzmann universe coming into existence just as well. I mean, if resources are infinite, we can go crazy with it.

Jim: Yeah, exactly. It’s interesting. I have chosen, to preserve my sanity, to take the pruning rule that, “All right, all the implications of Boltzmann brains are so crazy that I’m just going to, for personal purposes, assume the universe therefore must be finite.” I can’t prove it, of course. No such metaphysical speculations can be proven or they wouldn’t be metaphysics. But I do find it very, very useful to say, “All right, let’s just assume that the absurdities that come from Boltzmann brains can easily be turned around,” and lead us to say, “All right, we’re going to reject infinite universes and instead assume finite universes.”

Roman: Well, they can be finite, but you can have a multiverse of them as well. So it could be infinite in different directions.

Jim: Yes, it’s possible, but again, I would come back to the point that if the multiverses are finite, the probability of any of them having a Boltzmann brain in them of any substance, of any size, is very, very small. We’ll get to this topic later when we talk about evolutionary computation. But the number of random things that have to happen to produce a Boltzmann brain makes it exceedingly low probability. For instance, in a universe the size of our visible universe, I think I’d be willing to say that a long-duration, high-powered Boltzmann brain almost certainly has not emerged within our visible universe within the last 13 billion years.

Roman: It probably is true, but then again, we’re observing from inside. We are the possible hallucination of such a brain. So whatever resources we observe are just what we see. They’re not necessarily the true resources available outside of the simulation.

Jim: Of course. At the next level up, we can’t say anything actually. Right? Which then gets us to a little bit broader question of simulation. Many people have been talking about it. You’ve written about it, I believe. What are your thoughts about whether our universe is a simulation or not? Or what can we say about that?

Roman: Statistically it seems very likely. I would be very surprised if we were the one real world, especially given how weird it is.

Jim: What’s some examples of weirdness that leads you to that conclusion?

Roman: The recent political situation definitely makes one pause and go, “This has got to be a reality TV show. This cannot be real.” I’m joking of course; this is not at all relevant. But look at the statistical aspects of it: just the sheer number of video games we’re running right now, even before more advanced abilities to create graphics and virtual experiences. It seems very unlikely that you are in the real world and not in a video game, not in a simulation.

Jim: Let me push back on that a little bit. As I mentioned earlier, I’ve chosen to assume a finite universe, sorry about that, and further one that is the real universe. I have chosen to take the scientific realist perspective on the universe we live in. One of the items that I put forth as evidence to support that assertion, and it is an assertion, there can be no proof about such things yet, is the amazing fidelity of physical laws. Not to say that it’s impossible that a simulation could have that level of fidelity. But it seems at least indicative to me that maybe it’s real. The fact that every electron appears to have exactly the same mass. That quantum mechanics appears to be reproducible to 14 decimal places, etc. What would you say to that?

Roman: How would it be different if you were in a simulation? You’d have very good graphics. You’d have excellent algorithms consistently providing the same results. I don’t see how that indicates that it’s not a simulation.

Jim: Would it be able to simulate at the same level of detail everywhere? It appears, at least based on what we can tell from things like spectroscopy, that the same quantum laws are operating at exactly the same precision far, far back in time, billions of years in the past, based on the light we observe.

Roman: So I think there are some differences in physical laws; they do change a bit over billions of years. But I don’t see how it would be a problem for algorithms to be consistent throughout the whole simulation. I can have a video game where the gravitational constant is the same everywhere.

Jim: But the precision also. Again, 14 decimal places is way better than the physics engines in our video games today.

Roman: Well, I agree with that. But that’s not a limit on technology; it’s just a limit on what we’ve achieved so far. It’s following the same exponential improvement as anything else digital and computational. It looks to you like it’s pretty good graphics, but you have no idea what the graphics are outside the simulation. If you’re Mario in an eight-bit version of a game, you don’t know any better. You think eight bits is awesome.

Jim: Yeah, that’s true. That is true. Again, it’s not logically impossible. We certainly could be a simulation. My view on this is, if there’s a realm above ours that is infinite, then we are a simulation. The fact that there will be an infinite number of Boltzmann brains powerful enough to simulate our whole visible universe essentially makes it an almost metaphysical certainty that we are a simulation.

Roman: I think simulations wouldn’t need infinity. It’s enough that simulated worlds are a significantly larger percentage.

Jim: I don’t know about that. I think it has to be infinite.

Roman: If the real world creates, let’s just say, a hundred simulated ones, there’s still just a 1% chance you are in the real one.
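Roman’s arithmetic here is the standard indifference argument: if one base world runs N simulations and you have no way to tell which world you are in, equal credence over all of them gives

```latex
P(\text{you are in the real world}) \;=\; \frac{1}{N+1},
\qquad
N = 100 \;\Rightarrow\; \frac{1}{101} \approx 1\%.
```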

Jim: Yeah. It may be true. My intuition, I can’t prove it. But my assertion is, if there’s a realm of the infinite, we’re definitely in a simulation. If there is no infinite realm, and it may even be… there’s a pruning rule: if there is no infinite realm that’s causally connected, I’m going to think about that one some more, then we’re not. So that would be a program of thinking, should we ever be able to look beyond our universe to the higher realm, to maybe get some hints on whether we’re in a simulation or not. Another paper you wrote was… was it called Glitches in the Matrix? I think at the end of the day, you rejected glitches in the matrix, but talk to us about that a little bit.

Roman: So we are both interested in discovering, are we in a simulation or not? What you would look for is computational artifacts, the same ones we get in computer games. There are certain things we do to make them more efficient. For example, we may not render something if no character is looking at that object. So you have observer effects you can detect. You can have some sort of… you talked about precision levels. If we see the universe as a digital universe, a digital philosophy, there are certain discrete components, Planck levels of time and space, which are indicative of a digital underlying architecture. So those are not proofs, but those are interesting things to look for. If you make a prediction, okay, we’re in a simulation, what possible glitches would I find? Then you can look for them and see if it’s in fact the case.

Jim: Yep. A friend of mine, Ann Solomon, who was at MIRI at one point. I’m sure she’s not the inventor of this, but she would always say, “I wonder what would happen if we sent a probe to Alpha Centauri? Maybe we find it’s just a wire frame? That would be interesting.”

Roman: It probably wouldn’t be, because we would be getting there and observing it, and that would change how it is represented. The simulation would improve rendering for distant objects once we got closer.

Jim: Yep. If it was like a game, that would likely be the case. The example I gave earlier is perhaps more interesting: with the spectroscopy from galaxies very far away in both space and time, we’re not yet able to prove 14 decimal places of quantum behavior fidelity, but we can get increasingly high levels of quantum fidelity. Again, to your point, that doesn’t prove anything, but it at least is perhaps suggestive that the computational load needed to simulate the universe at that level of fidelity could be awfully large.

Roman: As a cybersecurity guy, for me, the interesting part is: if it’s a software simulation, how do you hack it? How do you jailbreak it? How do you get source code access, modify some things, to maybe escape the virtual world?

Jim: Yeah. Can you be Neo? Right?

Roman: Something like that. Yeah.

Jim: Yeah. Back to glitches. Again, scanning your paper rapidly, it looked like your conclusion was, “Yeah, there’s a bunch of crazy people claiming various glitches, but there’s nothing provable at this time.” Is that where you stand?

Roman: The crazy ones are definitely meaningless. They don’t mean much. I’m more interested in how different properties of quantum physics can be mapped onto this idea of us being simulated. So the observer effect is definitely one of the more interesting ones. When you actually try to perceive something, it changes how it is rendered, how it is presented to you. To me, that’s an interesting, surprising effect, which is very consistent with this idea of intelligent beings impacting how the virtual reality around them is presented.

Jim: Now, of course, that’s a very controversial view in the field of quantum foundations. Earlier in the show, we had Lee Smolin from the Perimeter Institute. He would argue that there is no observer effect, but there is a measurement effect. The measurement effect turns out actually to have to do with interactions at different scales, and quantum collapse has nothing to do with whether there’s an observer or not. So I think it’s still an open question whether there really is an observer phenomenon in quantum mechanics.

Roman: He’s definitely a better physicist, no doubt, but my understanding is that in physics, they still have not decided what is an observer. Is it the conscious entity? Is it the measuring device? Is it something else? What is the minimal observer sufficient to collapse quantum equations and things of that nature? I might be wrong in that.

Jim: Yeah. Again, as you’ve pointed out, it’s still an area where people argue, but Lee Smolin, and also my friend who recently passed away, Murray Gell-Mann, was of the view that whatever we were calling the observer effect really had nothing to do with a conscious observer, but had to do with the interaction between uncollapsed quantum phenomena and collapsed quantum phenomena. So there really was no observer in the loop. As he would always say, “You think that moon didn’t exist before someone observed it? Of course it existed.” The fact that it was a collapsed example of physical material was sufficient to essentially ground the uncollapsed quantum events that are happening throughout the moon. But again, it’s an open question. It would be an interesting area to probe other possibilities for detecting fluctuations or glitches in the matrix.

Roman: Again, I have a project coming up; I haven’t actively worked on it yet. But the idea is to study exact bugs in games and virtual software, and to see if there are equivalent phenomena in the latest physics. I collect bugs of all sorts. One of my big hobbies is bugs in AI. I have multiple papers on that subject, just historical examples of accidents. Continuing with this line of reasoning: looking for very common artifacts of computation and trying to see if we observe them in the visible universe, at least.

Jim: Interesting. Now of course, in software, we can have actual bugs in the software and then we can also have hardware transients. The other classic example is a cosmic ray that flips a bit or two in memory.

Roman: Absolutely, and we can study all relevant effects. We don’t know what type of computer is running the simulation. Is it quantum? Is it classical? Is it something completely different we have no idea about and cannot possibly figure out? But it’d be interesting if there is a certain consistent mapping and predictive power in this idea. There are some papers starting to look at that, but they’re very science-fictiony at this point.

Jim: A closely related topic that you’ve written on is unexplainability and incomprehensibility in AI. Tell us what those things are.

Roman: We’re starting to make really cool AI systems, very capable. A lot of them are based on deep neural networks, simulations of human neural architecture to a certain degree. They work like black boxes. We have a lot of components, millions of neurons, billions of connection weights, all different weights over feature vectors of thousands of different features for, let’s say, classification tasks. We would like to understand how decisions are made. They make very good decisions. They outperform humans in many domains, but they just give you an answer. We would like to know, well, how did you get to this decision? If you denied me a loan, for example, why? The best we can do right now is some sort of simplified top-10 features: well, you were denied for this reason, that reason. But we don’t get a full picture. It’s a simplified explanation, because we just can’t handle the full complexity of a decision being made with so many features.

Roman: A lot of people feel that it’s just a local limitation, that we’re going to get better at this and we’re going to get to the point where we can get perfect explanations. Whereas in the paper, I argue that it’s a fundamental impossibility result. There are certain things a more capable, more complex system can never fully explain to a lower-level system. Even if there was some way to compress this information, there are limits to our comprehension. We would not comprehend some of the more complex results because we’re just not smart enough to get to that point.
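As a concrete illustration of the “top features” style of explanation Roman describes (an entirely hypothetical example: the model, features, and data below are invented for illustration, not taken from any real lending system), a permutation-importance ranking compresses a high-dimensional decision into a short, lossy summary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loan-decision "black box": a nonlinear score over many features.
n_features = 200
weights = rng.normal(size=n_features)

def black_box_approve(X: np.ndarray) -> np.ndarray:
    """Return 1 (approve) or 0 (deny) for each applicant row of X."""
    score = np.tanh(X @ weights) + 0.3 * np.sin(X[:, 0] * X[:, 1])
    return (score > 0).astype(int)

# Synthetic applicants standing in for real data.
X = rng.normal(size=(2000, n_features))
baseline = black_box_approve(X)

# Permutation importance: shuffle one feature at a time and measure how often
# the decision flips; a bigger flip rate marks a "more important" feature.
importance = np.empty(n_features)
for j in range(n_features):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance[j] = np.mean(black_box_approve(X_perm) != baseline)

top10 = np.argsort(importance)[::-1][:10]
print("Top-10 'explanatory' features:", top10)
# The remaining 190 features still shaped every decision; the summary drops
# them, which is the lossy compression Roman is pointing at.
```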

Jim: Yeah. I like that, actually, the distinction you made in that paper between unexplainability and incomprehensibility. So in one case, it may be that a black-box AI is just unable to produce a sufficiently granular explanation. But in the second, it might be that our cognitive limitations would make us unable to comprehend it. Is that the distinction you were making between those two terms?

Roman: Exactly. We see it a lot with, let’s say, universities, right? For certain majors, we require students to have certain GRE scores, SAT scores, whatnot, because we found that students at a lower level don’t seem to understand those concepts well. So even with all of us almost identical in our intelligence, we already see differences in what can be understood. Take it to extremes, take it to systems with the equivalent of a million IQ points. It’s very unlikely that we’ll be able to follow along and go, “Yep, that makes sense. All right.”

Jim: Indeed. Now let’s talk about that a little bit. Humans I’d argue, I think you actually mentioned this in passing in those papers as well, are also not explainable. For instance, this example I love to use when we talk about this is the last sentence you said. You have no idea how that sentence was created.

Roman: Right. Humans are black boxes, and there are some beautiful experiments with split-brain patients. What really happens, it seems, is that we come up with explanations for our decisions and behavior later. Most of the time we just make it up. It’s complete BS.

Jim: Yeah, the famous confabulation. There’s more and more work on that. I’m now coming around to assuming that a whole lot of what we do is black box. So if humans are mostly black box, why do we believe we need to hold AIs to a higher standard?

Roman: Well, we hope we can get there. But for me, that’s actually evidence and an argument for why I’m right, that it’s not possible. I cite it in the paper and I say exactly that: if this is a simulation of natural neural networks, why would it be capable of doing something the actual thing cannot do?

Jim: Yeah, of course. I read Gary Marcus’s book recently. In fact, he’s going to be our guest on the show later this month. He argues actually against deep learning, or at least relying on it too much, because of its lack of explainability. He proposes putting more effort into symbolic approaches, which are more inherently understandable. What do you think about that?

Roman: I think he likes hybrid approaches, taking advantage of both methods of computation and intelligence. It seems to be what humans are doing. We have a subconscious component where some magical set of neurons fires and we get a few good choices to decide between, for example in chess, and then we use our symbolic, explicit reasoning to pick the best one out of two or three. It’s likely that machines will go the same way: a very deep neural network doing pre-processing and then some sort of expert-like system making the final decision, subject to some constraints as well.

Jim: Yeah, that’s the approach a project I’ve been lightly associated with for a few years, called Open Cog, is taking: there’s an inner symbolic layer, and then there’s the ability to plug in many different, perhaps deep learning type, architectures to do the equivalent of perception and classification, object identification, etc. So I’m with you and Gary on that one: probably it’s some hybrid that’s going to be the way we actually reach AGI.

Roman: I think so. There are so many different methods in AI over the years. I think all of them are valuable. They’re just different branches of this bigger puzzle, and eventually we’ll learn to see how they fit together and that’s going to be the way to succeed.

Jim: Interesting. Let’s see here. Where are you on predicting the road to AGI? Again, for our audience AGI, is Artificial General Intelligence, which means more or less something like a human level of ability to solve many different problems at a human level of competency.

Roman: Looking at where we got good results and succeeded, it seems that just adding a lot more compute and a lot more data to existing neural architectures takes us very far. It doesn’t seem to stop taking us in that direction. So for a while, I think we’ll make great progress just scaling everything we have. At some point, we’ll hit some bottlenecks and that’s where these other methods, symbolic methods will allow us to move to the next level, I think.

Jim: Certainly, deep learning is just amazing, the progress it continues to make. Every week we see an interesting result. On the other hand, I thought Gary Marcus made some good points of showing that, yes, it’s amazing the results it makes, but the limitations are also pretty staggering so far. For instance, in language understanding, so far, there’s nothing like real understanding. It’s all very, very powerful statistical associations. He at least would argue that’s qualitatively different than real understanding. Any thoughts on that?

Roman: Right. But real understanding of language is AI-complete. If we had it, we would have AGI. You can’t get a partial solution to an AI-complete problem; you either have it or not. So for a while, we’re not going to have real understanding, and by the time we do, it’s too late at that point: we’ve got full-blown AGI.

Jim: Interesting. Yeah, I had a couple of questions. One of them is: you’ve written on AI-complete, AI-easy, AI-hard, and the Turing test. Could you explain those terms and your thoughts on the Turing test, which, as you know, is quite controversial as to whether it is or is not a good test?

Roman: Right. So in computer science theory, there is a very useful concept of NP-completeness. We found that almost every interesting problem is both difficult and can be converted to every other interesting difficult problem. So if you solved one of them, it’s like you found solutions to all of them. I argued that in AI, there is a similar set of AI-complete problems, with passing the Turing test as the original one: if you find a perfect solution to one of them, you basically got AGI, you got a solution to AI. If you can pass the Turing test, you can have an intelligent conversation on any topic. You can be creative. You can be novel. You can do well in any domain, because any domain can be a sub-domain of questioning in the real, unrestricted Turing test.

Roman: So those [inaudible 00:29:33] AI-complete. I didn’t originate this term; I just tried to formalize it and use it. It seems that language understanding, true language understanding, is required to pass the Turing test if it’s not restricted to two minutes with amateurs. That makes it an AI-complete problem. So we’re not going to see any warning. We’re not going to see partial understanding before we get to the full one.
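A rough way to write down the analogy Roman is drawing, paraphrasing the spirit of his AI-completeness work rather than quoting it: let TT denote the unrestricted Turing test, and write A ≤ B when a solver for problem B can be used to solve problem A.

```latex
H \text{ is AI-hard} \iff \mathrm{TT} \le H,
\qquad
H \text{ is AI-complete} \iff \mathrm{TT} \le H \ \text{and}\ H \le \mathrm{TT}.
```

On that reading, unrestricted language understanding sits on both sides of the reduction: solving it lets you pass the Turing test, and a Turing-test passer can answer arbitrary language-understanding queries, which is why Roman expects no useful partial version to appear before the full one.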

Jim: Interesting. I have to say I agree with you. I’ve been advocating on the Open Cog project for how many years now? Five years, that language understanding is the bottleneck. If we get through that, on the other side, we’re mighty close to AGI.

Roman: Right. Just go to the next level and I think that’s AGI. If you have full understanding, if there are no limits to your linguistic abilities, you are one of us now.

Jim: Probably or it’s just shy of it. We’ll see. It may be they also have to have some linkage to affordances in whatever domain you’re operating in. Because intelligence is not just understanding, it’s also the ability to act and solve problems.

Roman: Right. But you also notice that we perceive humans as general intelligences, but in reality, they’re very limited. Most of us are not general in all domains. Most of us have a very hard time doing things we require AIs to do before we call them AGIs. Right?

Jim: Yeah.

Roman: I cannot fix a car. I don’t speak Chinese. There’s a whole bunch of things I cannot do, even though I claim to be a type of AGI.

Jim: On the other hand, if we assume… well, let me back up a little bit. I agree with you that we overestimate human capability. In fact, I like to say that because Mother Nature is seldom profligate in her gifts from evolution, which we’ll get to later, it’s almost certain that humans are just above the line of general intelligence. We’re almost certainly a weak general intelligence. From the work and research I’ve done in cognitive science and cognitive neuroscience, it’s really obvious where some of those weaknesses are. For instance, our working set size, our working memory of seven plus or minus two: it would seem pretty clear that a mind sort of like ours with a working set size of 100 would be qualitatively different from one with five, who’s the village idiot, or nine, who’s Einstein. If you got to a hundred, it would be a very different brain.

Jim: Some other examples of why we’re pretty damn weak: our memory. As we know, it’s low fidelity. It’s unstable. It’s rewritable when you retrieve it and can be accidentally rewritten with changes, etc. So we are a pretty weak example of AGI.

Roman: I agree. We have a paper on Artificial Stupidity where we talk about safety features which basically limit artificial intelligence systems to those human levels, making them no better than us and so keeping us capable of competing with and controlling them.

Jim: I actually sent that paper to David Krakauer, my friend who’s president of the Santa Fe Institute. Turns out one of the areas he’s interested in is stupidity. He is collecting interesting research ideas about stupidity. I actually sent it to him yesterday when I was doing my preparation. So if he reaches out to you, you’ll know why that happened.

Roman: Well, I’m always a stupid expert.

Jim: He was on my show earlier, and he said, “I’d like to propose that somebody set up an endowed chair, a professorship in stupidity.” He says, “We need that.”

Roman: I’m very open to that position.

Jim: I thought that was hilarious. Of course, you need the smartest person around to be a professor of stupidity.

Roman: Definitely takes good self-esteem.

Jim: Exactly. Right? You have to be fully confident of yourself. We’ve talked about a lot of things, which I was going to talk about later. But now we can hop back to some of the framing, which was, how are these things related to AI safety? In fact, maybe if you could just talk a little bit about AI safety in general, and then kind of relate how some of these ideas we’ve talked about have some bearings on AI safety?

Roman: Great question. If you look at examples of other minds we have, wild animals, for example, or pit bulls, they are somewhat different from us in what they’re trying to do, and so sometimes they hurt us. People who are unfortunate enough to have mental disabilities sometimes act in dangerous ways. So those are trivial examples: with a little bit of difference, there are already some dangerous, unsafe behaviors. If you take it to the extreme, as the differences become extreme, the danger grows. A lot of times, it’s not malevolent intention; it’s just a side effect. There are some silly examples, like if I have a super powerful system, but it’s not very well aligned with human preferences, and I tell it, “Well, make it so there are no people with cancer around.” There are multiple ways to get to that goal.

Roman: Some of those, like killing all humans, are not what we have in mind, whereas others, where you’re actually curing cancer and people are happy, are exactly what we hope for. So specifying those differences and making sure that the system is under our control is what we’re trying to do. But we don’t fully understand, well, how is the system different? I have other papers on impossibility results talking about limits to the predictability of such systems. We cannot predict what they’re going to do if they’re smarter than us. There are other limitations, and one of the papers I’m working on right now is surveying those limitations from different domains, all of which are relevant to AI safety research: game theory, control theory, systems and networks, all of them have well-known impossibility results; mathematics, of course; economics, public choice theory. It doesn’t seem like we have any tools to overcome some of those limitations. So maybe the best we can hope for is safer AI, not safe AI in the perfect sense.

Jim: Referring back to some of the earlier things we talked about, certainly, the fact that there is a very large universe of minds available to AI may make this problem, as you say, unsolvable.

Roman: I’m leaning towards that conclusion more and more. I’m starting to see some paradox-based, self-referential proofs for that. Of course, it really depends on what specific system we have, what architecture, what limits are in place for it, but it seems like an unrestricted superintelligent system would not be controllable by us.

Jim: Or at least it will be very, very difficult. Right?

Roman: I’m leaning towards impossible. At that point, control will switch. There is hope that maybe there is some alignment between us and it, but it’s very unlikely to be by chance. It’s only if we have some sort of control over the initial design and those initial features are propagated through later updates of that software.

Jim: Okay, that’s interesting. So basically, you say we’re doomed.

Roman: Well, not yet. We still have a bit of time. It’s possible we’ll make some good decisions about what to do and not to do. We have examples from other domains where we slow down a bit with research. Human cloning, genetic engineering, chemical weapons, biological weapons, lots of examples of situations where we said, “Let’s not do it just yet. Let’s study it a bit more and then we’ll decide if we need to clone humans. Maybe it’s super beneficial, but let’s make sure we do it right the first time.”

Jim: Yeah. Interesting. Now I’ve heard some people propose very extreme measures to avoid the AI apocalypse. Some of the folks associated with MIRI have floated crazy ideas like maybe we should sterilize the earth with very large EMP pulses every 10 years that would destroy all electronics to keep people from creating AGIs.

Roman: So sometimes the medicine is worse than the disease, and I don’t propose anything which would completely destroy our standard of living or anything of that nature. Of course, if you truly believe that the problem is unsolvable, and the outcome is doom, as you put it, then perhaps those solutions become less crazy.

Jim: Yeah. I think a lot of that comes around to what one thinks about the singularity. For our audience at home, the singularity is a very interesting concept, almost as wild as Boltzmann brains, not quite. I think it was first stated by Vernor Vinge though there were some predecessor people who said things very similar, but it’s essentially as follows. Let’s imagine we have an artificial intelligence that’s about as powerful as a human, maybe a little bit better to keep the argument simple. Let’s call it 1.1 human power. What happens if we give that artificial intelligence the job of creating its successor? It’s not only smarter than humans, but a lot faster than humans. Fairly quickly, it produces an improved version of itself that’s 1.3 human power. Then you tell that artificial intelligence to design its own successor, and it produces a successor that’s 1.9 human horsepower. Then the next one’s 3.6, and the next one’s 10, and then one after that’s 1000. Soon it’s a million times as powerful as a human. That’s essentially the argument about the singularity.
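A toy model of the recursive-improvement loop Jim just sketched (an illustration with made-up numbers, not anything from Vinge or the episode): if each generation improves its successor by a margin proportional to its own capability, capability grows faster than exponentially.

```python
# Toy recursive self-improvement model; the growth rule and parameters are
# assumptions chosen only to illustrate the shape of the curve.

def takeoff(initial=1.1, design_gain=0.2, generations=12):
    """Capability in 'human equivalents' after repeated self-redesign."""
    capability = initial
    history = [capability]
    for _ in range(generations):
        # Assumed rule: smarter designers make proportionally bigger improvements.
        capability = capability * (1 + design_gain * capability)
        history.append(capability)
    return history

for gen, c in enumerate(takeoff()):
    print(f"generation {gen:2d}: {c:,.1f}x human")
# With these parameters the sequence passes 1,000x around generation 9 or 10.
# If each generation also runs faster than the last, the wall-clock time per
# generation shrinks as well, which is the "hours or days" fast-takeoff worry.
```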

Jim: Some people believe the singularity could occur within hours or days after reaching AGI levels of say 1.1 humans. Others say no. If we were to have runaway super intelligence within hours of creating the first AGI, then it strikes me AI risk is really high. If it turns out it will take centuries to grow from, say, 1.1 to 100 human horsepower equivalents, then AI risk is probably very manageable. Where do you come out on this concept of the singularity?

Roman: I think it would be a very fast process. We are simulating, as you said, human intelligence, just at scales many, many magnitudes larger, with larger memories, larger access to information, so the speedup would be huge. They don’t need to sleep, they don’t need to eat, they don’t take breaks. You can run many, many such systems in parallel. I think the process would be very fast. At that point, trying to develop safety mechanisms on the go would be a little too late.

Jim: Interesting. I have to say, after thinking about how limited humans are, I agree there’s a lot of room above us, which I think is at least supportive of the argument that the takeoff could be fast.

Roman: As you said, we have those limits in memory and short term memory. If they were just removed, we would already be much more capable. You see it with some savants in their mathematical ability and other abilities.

Jim: Yeah, and those people have working memory sizes no bigger than nine or 10. So what happens when you get to a hundred? It’s almost unfathomable.

Roman: Exactly.

Jim: So let’s both agree that at some point, the singularity could be a fast takeoff. Another article that you wrote is called Leakproofing the Singularity. Talk to us about that. How do we prepare to deal with a fast-takeoff AI situation?

Roman: This paper specifically talks about developing tools for us to be able to safely design, develop, and test artificial intelligence systems. Everyone’s working on making one, but do they have the appropriate infrastructure to study it? To control inputs? To control outputs? If you are working with a computer virus, you’re going to isolate it on a machine not connected to the internet. You will try to understand what servers it’s connecting to and so on. There is no similar protocol for working with intelligent systems. So that was my attempt to do that: design different levels of communication protocols, how much information goes in, what type of information goes out, to prevent social engineering attacks. The overall conclusion of that paper is that it’s a very useful tool; it buys us some time, but at the end of the day, the system always escapes from its containment.

Jim: That’s interesting. So we’re doomed.

Roman: You keep saying that.

Jim: Interesting. There are other people working on this question, one group being The Future of Humanity Institute, for instance. What do you think of other people’s work in this area?

Roman: It’s very good. We take very different approaches. I looked at it from a cybersecurity point of view, specifically side-channel attacks. They look at it from, I think, a more philosophical point of view, being philosophers. But I think we all agree that it’s not a long-term solution. You cannot restrict superintelligence to a box permanently.

Jim: The MIRI guys, I know, or Eliezer at least, is thinking that maybe we can.

Roman: I don’t think he does. I think he actually did experiments where he pretended to be a superintelligence and was able to escape almost every time just by talking to regular people. That’s not encouraging.

Jim: He and I had a debate one time where I took the perspective that there’s no way you will ever be able to constrain a super AI. He claimed at that point, I’ll admit this was 12 years ago, that, “Yes, I can goddamn it.” I don’t know if you’ve ever met him but he’s a strongly-

Roman: I met him.

Jim: He is a strongly argumentative guy and so am I. So we had a good time. I don’t think either of us convinced the other, but we had a very fruitful conversation. I’m glad to hear that he has come around to my point of view, goddamn it, that it’s going to be exceedingly difficult, if it’s at all possible, to constrain a super AI.

Roman: What we need is more mathematical proofs and rigorous arguments showing who’s right. I mean, people love discussing this topic, but I think there is very little formal argumentation for either point of view. I’m happy to see someone prove that it’s possible and we are totally not doomed.

Jim: My guess is we won’t be able to prove it one way or the other, that this is a complex systems problem. Having been involved with the Santa Fe Institute for the last 18 years, and making the study of complexity one of my areas of strong interest, I’ve come around to realizing there are a huge number of domains that aren’t really very amenable to closed-form mathematical analysis.

Roman: At least not with existing tools. Are there proofs that cannot be done in principle?

Jim: Yeah. I mean, there are a few different cases. There are ones that can’t be done in principle; Gödel has shown us that such things exist. But I would argue there’s a vastly larger group of things that can’t be done in practice. The one I point to as a very general example is deterministic chaos. We know that even surprisingly simple complex systems, the three-body problem, for instance, or the Lorenz equations, with even minor, minor changes in the initial conditions or measurements, produce vastly different, entirely incommensurable results. I suspect strongly that AGI is going to be in that case: even if in principle it were mathematically determinable, the realities of dealing with a real AI and all the huge number of design-space questions about it would lead you to a situation like deterministic chaos, where the limits of our ability to be precise would make solving whether it was containable or not in closed-form mathematics impossible.
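The Lorenz point Jim cites is easy to reproduce; this is a standard textbook demonstration, not anything specific to the AI argument. Two trajectories of the Lorenz equations starting 10^-10 apart in one coordinate become completely decorrelated within a few dozen time units:

```python
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system: dx/dt = sigma(y - x), etc."""
    x, y, z = state
    return state + dt * np.array([
        sigma * (y - x),
        x * (rho - z) - y,
        x * y - beta * z,
    ])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])   # differs only in the 10th decimal place

for t in range(50001):                 # integrate out to t = 50
    if t % 10000 == 0:
        print(f"t = {t * 0.001:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
# The separation grows roughly exponentially until the two trajectories are as
# far apart as random points on the attractor: a tiny measurement error becomes
# a completely different long-run prediction.
```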

Roman: I don’t disagree with you. I just would love to see someone publish a direct proof.

Jim: Yeah, I’m sure there are people working on it. But my bet, and I’ll put money on it at the Long Bets site against anyone who wants to argue the other side, is that no such proof will be done. I will not take the position that we can prove it can’t be done; I’m just going to say that it won’t be done, because of something like the deterministic chaos problem.

Roman: You might be completely right. I’ll share my next paper with you on some of those impossibility results.

Jim: Love it. That’s an area I’m hugely interested in, as you can see. Now let’s turn from the real runaway, we’re doomed, which I call the Emperor of Paperclips scenario. That being the example, I think it was Bostrom’s… I think Eliezer actually used it way back yonder also, that we stupidly give a superintelligence the job of optimizing a paperclip factory. It escapes the factory and decides to turn the whole earth into paperclips. Builds interstellar spaceships so it can go around the universe and turn all the planets and all the stars into paperclips. That’s the AGI-run-amok scenario.

Jim: I’m saying yes, it’s possible that that is a risk. However, there’s an earlier risk, and I think this is one that will resonate with you based on some of the more practical AI security work that you’ve done, which is what I call bad humans with sub-AGI AIs. Let’s use a very tangible example: the Chinese. I think we have a lot of risks to worry about with the misuse of AI. Everyone will decide for themselves what is use or misuse, and AI will be used in ways that are bad long before we get to AGI.

Roman: I agree. I have the original paper on malevolent AI: how to create it, how to abuse it. That’s the unsolved problem. Even if you manage somehow to create a very nice, friendly AI product, what stops the bad guys from flipping a bit, and now it’s spreading cancer instead of healing it? That’s hard to solve. The insider threat is very difficult to address. We don’t have any solutions in that domain. We only have a few papers and a few workshops even looking at that. That’s definitely a harder problem because it includes all the other concerns: misaligned values, bugs in code, poor software design, you name it. That’s still part of it. But now you have an additional malevolent payload.

Jim: Yeah, not necessarily even malevolent. I kind of doubt that the leadership of China is malevolent. They probably think that they are doing the right thing. But from our perspective, at least, I think we would argue that what they’re building is the technologically empowered fascist dictatorship that even George Orwell couldn’t have imagined.

Roman: Well, we’re not limited just to very powerful governments as the bad guys. It could be anyone with access to this code, and a lot of this code is becoming open source. Crazy people, doomsday cults, suicidal people, you name it, anyone can just add their own goals to an AI product.

Jim: Yeah, if it’s powerful enough. I mean, this is where I make the distinction between sub-AGI and AGI. If we have AGI, then yes, one could imagine… let’s say some crank in the basement achieves the first AGI; that could do great damage. But how much damage can they do with a sub-AGI AI?

Roman: The damage is proportional to capability. We have already seen people use ransomware scripts. They’re not experts. They are not hackers. They run a program and cause billions of dollars in damage. It’s scale. So as AI becomes more capable, the damage grows, the scale grows. But the problem is the same: you have actors who are not necessarily experts becoming very powerful with access to this very powerful technology. Once it gets to general superintelligence, it only gets worse. In most cases, I think, the person next to it gets punished first. But even if it is controlled in some way by the good guys, the bad guys can still change that.

Jim: Interesting. You alluded to one of the issues here, which is that a lot of AI code is open source. I’m sure you know that there is a recent case where OpenAI, who were set up with the name OpenAI to be open, decided not to publish the full form of one of their models, the GPT-2 language processing system. As an AI security practitioner, could you tell us a little bit about things like that and whether open source is or isn’t a good idea for AI?

Roman: Sure. I think that particular example is more of an exercise in seeing what happens when you don’t release a model, what effect it has on the community. Would it be an acceptable practice? Would someone else quickly achieve the same results, bypassing this limitation? I don’t think that particular model is that dangerous. There are some partial models and competing models which achieve very similar results. But as a general concept, yes, if you had somehow managed to get a working AGI product, releasing it on the internet right now with no safety mechanisms would be a very bad idea. It’s ironic that their name is OpenAI. In general, open source software is better reviewed, more reliable, and has fewer back doors. But this would be like releasing the code for a very dangerous virus or something similar. It’s just not safe to do so.

Jim: Yeah. That’s an interesting question. Again, the Open Cog project, now also the Singularity Net Project, which I’ve been loosely affiliated with for a number of years, they make the opposite argument. They argue that there’s an even bigger risk, which is the problem of the first AGI being achieved in a closed fashion by some power, whether it’s the US government, whether it’s China, whether it’s Russia, whether it’s Google, and that the danger of a single first mover with AGI is so large that it’s better to do AGI research as open source, so that multiple AGIs come into existence at the moment of the… assuming their project is the one that gets over the line first, so that there are many AGIs that can be used to police each other. What do you think of that argument?

Roman: There are good arguments on both sides, open or closed, but this particular argument, I think, is not optimal. In my opinion, if it’s not a controlled AGI and you just got there first, you’ll probably be the first victim to begin with. So it doesn’t give you any advantage to be next to it physically first. You don’t have control of it, and if you don’t have control of it, you don’t have an advantage. It’s not beneficial to you. The best argument for slowing down is exactly that: if you get there first but you can’t monetize it, you can’t even survive it, why are you doing it in the first place?

Jim: That’s assuming fast take off. Right? I think we both agreed fast takeoff is not a bad hypothesis. If it’s slow take off, it’s a different story. I think that’s a very interesting fork in the road when we’re thinking about this. If an AGI takes off to the singularity rapidly, then it’s probably unsafe to have out in multiple hands in an open source fashion. If it’s slow take off, maybe not so much.

Roman: It also creates competition and possibly a war-like scenario. Let’s say one is more likely to be changed into a safe version, or a safer one. Now you have multiple problems to deal with. You have multiple rogue general intelligences competing, fighting, using… I don’t even know what technology, to defeat the others. Humanity is a side thought. They don’t really care about us.

Jim: This is the game-theoretical trap, unfortunately. Let’s assume there’s competition to produce the first AGI. Let’s get rid of the open source. Let’s say they don’t have the resources, but it’s a dozen proprietary entities around the world, corporations, countries, a small number of really rich people like OpenAI. They all have an incentive to be there first, particularly in a fast takeoff, because whoever gets there first can probably dominate everybody else and prevent them from achieving AGI. We therefore have a race to be first. If we assume that safety comes with a cost in both resources and time, there’s an unfortunate, perhaps deadly game-theoretical trap, which is that there will be a strong incentive to not pay attention to safety, because if you pay attention to safety, the person who doesn’t pay attention to safety will get to AGI first and be able to suppress you.

Roman: You are correct. But what most people don’t realize is that money in a post-singularity world has very different value; I’m not sure if it has any. So if you’re trying to maximize shareholder profit and you get there first and you have this uncontrolled superintelligence, how much money your shares are worth is your least concern. You really have much bigger problems to address, if you’re still around.

Jim: Yeah, I agree money may not be the factor. In fact, truthfully, I think for the rich people interested in AGI, it’s not about money. It’s about hubris and ego. Right? They’re smart enough to realize that the world will be very, very, very different on the other side of AGI, particularly if there’s a fast takeoff. The current status hierarchy will be completely overturned. But think of somebody like Putin, right? He has said, accurately at some level, that he who controls AI controls the world. I think we can say with a high level of confidence that what Putin would like to do is control the world. So let’s imagine that a Russian government lab produces the first AGI, or at least is aiming for it, and under Putin’s instructions he says, “Forget about safety. We want to be first so we can dominate the world.” Aren’t we caught in a game-theoretical trap around that?

Roman: You correctly said whoever controls AI, not whoever has access to random super intelligence.

Jim: Okay. Well, make that distinction. That’s good. Let’s work that one.

Roman: Are you in control? Are you actually telling it what to do or you just have this super powerful genie with no controls and it does whatever it wants and maybe you will be used for resources first? Your country, let’s say Russia, will be the first set of molecules converted into paperclips.

Jim: I like this. If a sponsor for AGI is rational enough, whatever that means, then they will not fall into the game theory trap of rushing for AGI without safety because they will get no benefit from it.

Roman: That to me is the strongest argument for any type of moratorium or self-restriction in this research, basically pointing that out. If you’re smart enough to build AGI, you should be smart enough to understand this argument.

Jim: Very interesting. I like that a lot, because it’s the kind of argument that’s accessible to any reasonably intelligent person. It requires no special knowledge in AI or engineering or anything else. It’s probably an argument that a bright guy like Putin could understand. Now, the question is, have the people around Putin made sure that he is educated with that argument?

Roman: I’m not in the know. I don’t have any insider information from the Kremlin. But my concern would be that as he’s getting older, he has less to lose. It’s kind of a gamble you can take if you’re going to be gone anyway, in the hopes of getting a solution to the problem of mortality, or just becoming that historical figure who got there. So if you have less to lose, you’re more likely to take the risk even if you understand it.

Jim: Yes, humans, they are not so rational. Right? As we know, they have all kinds of agendas that are not strictly rational.

Roman: We talked about stupidity as a big factor in all of this. So yes.

Jim: Yeah. I would say a guy like Putin is clearly not stupid, but he may have agendas that aren’t strictly rational or for the good of the human race, or even for the good of Russia. They could be purely ego-driven.

Roman: Absolutely. Again, as people get older, they change how they think, how well they think. That’s why term limits are a good thing.

Jim: Yep. We just noticed the Chinese got rid of their term limits, which is a very bad sign for exactly that reason. Now one guy can say, “The state is me.” As you say, if he’s interested mostly in his historical record or something like that, he might go all out, and he has a lot more resources than Putin does to try to get there first and ignore this logical argument that ignoring safety is actually not the smart thing to do because you’ll get no benefit from it.

Roman: Maybe an argument in such cases is to say that the system would be more powerful than the leader and so he’d lose power to that new leadership.

Jim: Yep. I think it’s important to get these ideas out into the world because I don’t think they are out in the world all that well.

Roman: Very few people are in this space. Unfortunately, if you look at everyone working full-time on AI safety, it’s not a lot of people compared to how many are working on developing more capable AI.

Jim: I think that’s absolutely right. For the game theory reason, right? What business is going to invest in safety when there’s no return for it in the short term?

Roman: Well, for big corporations, there is a certain cost to unsafe, embarrassing products. We saw, for example, with the Microsoft chatbot that there was a lot of negative publicity. If they had just read my papers, they wouldn’t have made such silly mistakes as releasing a chatbot to learn from teenagers on the internet.

Jim: Well, I think that’s about political correctness, not about AI safety. Frankly, I think there was nothing wrong with what they did. They were just too much of wimps to accept the result. The world was not threatened by that chatbot, but because of political correctness they were embarrassed by it.

Roman: Of course. What I’m saying is that malevolence is proportional to capability. All the chatbot could do was insult people, and it did exactly that. As a system becomes more capable, it will do whatever other bad things it is capable of if you don’t explicitly guard against them.

Jim: All right. Okay. So now this gets to my next question. The degree to which a company like Microsoft ought to be investing in safety should probably be driven in part by how close we think we are to AGI. If we’re four centuries away, as some people argue, then the moral argument for investing heavily in safety is relatively modest. It’s just pragmatic: is the embarrassment, or the quality of your products, worth spending X amount on? If we think AGI is near, then there’s a strong moral argument that we ought to be spending a lot on safety, because to your point, if we haven’t built safety before the takeoff, it’ll be too late. So I guess that brings me around to your thoughts on how close we are to AGI.

Roman: That’s a very difficult question. I don’t think anyone knows, and I don’t think there is going to be a warning before we get there, or 10 years before we get there. We’ll just kind of hit it. A lot of data, a lot of predictions point at 2045. I’ve seen more extreme predictions [inaudible 01:00:49], sometimes from industry insiders with a lot of access, but I think it’s reasonable to concentrate on that date for now. It gives us enough time to actually do something, but it’s not so far away that it’s meaningless.

Jim: Yeah, that’s kind of the Kurzweil timeframe. Right?

Roman: Exactly. He’s doing a good job with his graphs, and his past predictions have been reasonably good.

Jim: Yeah. I’ve heard through the grapevine, I won’t say from whom, that as part of the deal to raise a billion dollars from Microsoft, OpenAI told Microsoft that they believe they’re within five years of AGI.

Roman: I had heard seven years before as insider information. Five is proportionately less likely, but I don’t think it’s crazy at all. I’d give it at least a 10% chance.

Jim: Okay. So we’re going to say that Roman says 10% chance within seven years. Right?

Roman: 90% not in seven years.

Jim: Yeah. 90% not in seven years, but a 10% chance of a runaway. Okay. Let’s compound the things we’ve said we think are true. We think AGI will more likely than not be a fast takeoff. There’s a 10% chance it could be achieved in seven years. I think I would agree with that, and here’s why. Something else I think we both agreed on is that real language understanding is the portal: if we could solve that, AGI would take off very rapidly. The fact that it’s just one problem, a damn hard problem, but just one problem, tells me that all it takes is one person with the right insight, or one team with the right set of approaches, to crack it. So it could happen in the short term. If we have a 10% chance of fast runaway AGI in the next seven years, doesn’t that mean we should be spending billions a year on AI safety?

Roman: I would support that. Thank you. I’ll take all the help I can get. I’d also like to remind you that I think a lot of it is just scaling resources, scaling compute and the size of our data. Seven years becomes very reasonable if you just project how fast those things grow. I definitely try to work on this full-time. I hope a lot of other people do as well.

Jim: Yeah. This quick little back-of-the-envelope calculation, I think, tells us as a matter of social policy that it’s very important that the powers that be, particularly the people who control large budgets, start to realize it would be imprudent not to spend significant funds on AGI safety right now.

Roman: But of course, the question is, is money the problem? Is the bottleneck money? If I had billions of dollars, could I solve this problem in seven years? I don’t think the answer is yes, or even remotely close. As of right now, as of this minute, no one in the world has a working safety mechanism, or even a prototype, or even an idea of how to make one that would scale to any level of intelligence. It may take 700 years to get there. So just pouring money into it could end up like the war on cancer: lots of money feels good, but no results.

Jim: Now, there are some results in the war on cancer, but they’re not linear to the inputs. As we know, the outputs of these hard problems are nonlinear.

Roman: But this problem is fundamentally different. You don’t get partial success. You don’t have a slightly controlled superintelligence. Either you control it, or the first mistakes it makes may be fatal.

Jim: Yeah. But what’s our alternative if we’re possibly going to have an AGI in seven years, a 10% chance, a pretty high chance? Suppose I told you there was a 10% chance we’d have an all-out, 20,000-warhead nuclear war in seven years. We would be doing everything we possibly could to make that not happen.

Roman: So OpenAI has a lot of good people with really good expertise in safety. If they really felt they were seven years away, I think they would do things to make that a bigger number. Getting a billion dollars is good for whatever you’re working on. But I think they understand that uncontrolled AI is not beneficial to their own interests, not just humanity’s interests.

Jim: So we can hopefully assume that they understand the argument that you get no benefit from AI if it’s not controlled, and therefore they’ll spend an appropriate part of their billion dollars even if they believe they’re only seven years away.

Roman: You get no benefit if you are dead.

Jim: That is true, except maybe for fame. You know? But do you really care about being the Emperor of Paperclips once the universe has been turned into paperclips?

Roman: If there’s no one around to remember you, it’s not as valuable.

Jim: All right. I’m going to flip to another topic. Regular listeners of my show know that we fairly often, probably in at least half the episodes, address the Fermi paradox. A number of the things we’ve talked about have at least some bearing on it. Again, to remind our listeners, the Fermi paradox comes from a lunch conversation at Los Alamos in 1950, where a bunch of smart physicists were trying to estimate how many civilizations of human-level intelligence or above there were in the universe. Enrico Fermi asked, essentially, “Well, where are they, if there are lots of them? I see no sign of them.” Interestingly, to this day we’ve been doing increasing amounts of searching, and so far, absolutely no sign. So what are your thoughts on the Fermi paradox?

Roman: That’s a very interesting problem. There are many answers I like; I can tell you about some that several people have proposed. There is this idea that instead of going out into the universe, advanced civilizations kind of minimize and go inward. They become more condensed and go into virtual worlds, so you just wouldn’t see them. From a computer science point of view, a communications point of view, we’re looking for signals, signals of communication. But already our own communication is encrypted and hidden, and to save power, we communicate with silence. So basically there is nothing to look at; you would observe only random noise. It’s not surprising that we don’t see them. I also think the question “where are they?” has another answer: we look around and think we don’t see intelligent beings, but yes, we do. We are them. There are a number of theories, panspermia theories, saying that we are the biological robots [inaudible 01:07:01] sent by some distant civilization, here basically trying to figure out what our mission is and looking around for instructions. So that’s a short survey of my thinking on some of those.

Jim: Yeah. One response is that maybe we’re just completely wrong to be looking for signals. Because as you point out, even we teeny little baby AGIs called humans are already rapidly moving from radio to fiber and from open communications to encrypted communications. In fact, a recent paper by some of our Santa Fe Institute people demonstrated that for almost any reasonably advanced civilization, the signals ought to look like noise. So we may just have been mistaken in looking for signals. The other, newer approach is called techno-signatures: we can look for artifacts in the universe that are signs of having been created by an intelligence.

Jim: Robin Hanson, one of my first guests on the show, for instance, makes the argument on economic grounds; he’s an economist rather than an astronomer. Any advanced civilization that has existed for very long ought to leave techno-signatures of the Dyson sphere sort, where the civilization harnesses more and more of its star’s energy to build either more biological civilization or, if it has been eaten by its AGI, more computational infrastructure. The techno-signature of that ought to be a shift toward the infrared in stars. His claim is that there’s no sign of that, and therefore there may be no galactic intelligences.

Roman: Right. So I actually founded another sub-field of inquiry called [Designematic 01:08:59], where we try to figure out the differences between natural objects and engineered objects. If an object is in fact engineered, what can we say about the engineer behind it? If I give you an iPhone, you can tell me that whoever made it has a certain level of intelligence and a good understanding of chemistry, computer science, and so on, even before mentioning things like “made in China.” We’re trying to generalize this, including to biological samples. If you look at an artificially created cell, can you tell that it is in fact engineered and not evolved, just from that sample alone with no other records? It seems that if you don’t restrict resources, something we talked about at the beginning of the show, you can tell whether something is natural or was engineered by some super-powerful intelligence.

Roman: Then we look at the universe for glitches. Are we in a simulation? Those are the techno-signatures we’re looking for, right? If it’s done well, you would not find them, because they would be hidden on purpose. The only way you would see them is if they’re explicitly there, if a warning pops up: “This is a simulated universe, designed in such-and-such a year,” and so on. So all the things we discuss are very strongly connected, and we’re trying to make progress in all those directions at the same time.

Jim: Very interesting. Yeah, for instance, one of the theories about the Fermi paradox is called Dark Forest theory: there are predators out there, or at least it’s reasonable to assume there are psychopathic civilizations, and if you show yourself, you will be killed by them. The co-evolutionary result is that no civilization that shows any sign of being there exists long enough to build these large-scale techno-signatures, because it gets killed by the psychopathic predator species. And there may be multiple predators; for the same reason, each of them shows no sign either, because doing so would call the other predators in against it.

Roman: Of course, the fun part is to combine all these ideas, Boltzmann brains, simulations, superintelligent malevolent AIs, and the Fermi paradox, and see whether you go crazy or not.

Jim: Heck no. I love to entertain these ideas. But on that note, I think we have covered so much ground that it’s time to wrap up. This has been so much fun. We have touched upon so many of my favorite topics.

Roman: It is an awesome interview. Thank you so much. I hope your listeners will look at some of the papers we discussed. Maybe pick up a book or two.

Jim: Yeah, absolutely. Production services and audio editing by Jared Janes Consulting, music by Tom Muller at modernspacemusic.com.