Transcript of Episode 10 – David Krakauer

The following is a rough transcript which has not been revised by The Jim Rutt Show or by David Krakauer. Please check with us before using any quotations from this transcript. Thank you.

Jim Rutt: Howdy! This is Jim Rutt, and this is The Jim Rutt Show. Today’s guest is David Krakauer, president of the Santa Fe Institute, which we’ll usually abbreviate as SFI.

David Krakauer: Hello Jim. I’m mortified to be on your program.

Jim Rutt: Oh, I’m sure worse things will happen to you before the day is over. David has undergraduate degrees in Biology and Computer Science and a PhD from Oxford in evolutionary theory. David is my go-to guy when I have a question about evolutionary theory. Full disclosure: I’m a very part-time researcher at SFI, I’m the past chairman, and I’m currently a member of the board of trustees.

Jim Rutt: So yeah, I love the place and I’m probably going to be a little biased today. There’s my full disclosure. So David, could you give us a description of SFI, a little history, today’s focus? What makes it different from the many other research centers in the world?

David Krakauer: Yeah. So the Santa Fe Institute, as the name might suggest, is in Santa Fe, New Mexico. The Institute is up in an old mansion on a mountainside. We were founded in the early 1980s, ’83, ’84, by a very illustrious group of individuals, several of whom had Nobel prizes in physics and economics. What they wanted to do was something rather radical. That is, use the methods that we typically associate with mathematical physics and theoretical computer science and apply them to problems that had been recalcitrant or resistant to those methods. We call that the domain of complexity. We’ll talk about that.

David Krakauer: So the Institute reflects the iconoclasm of that group and of their ambitions. We have no schools, we have no departments, we have no tenure. We’re a networked research Institute with two home bases in Santa Fe and we essentially pursue difficult problems in complexity with no concern for their disciplinary provenance, typically using mathematical and computational techniques to arrive at very general theories.

Jim Rutt: Very good. Complexity science, plus or minus epsilon; it’s fair to say, as you did, that SFI invented it. To your mind, what is the domain of complexity science?

David Krakauer: Any discipline, if you think about it, whether it’s physics, or anthropology, or chemistry, or economics, can be defined in three quite different ways: by its history, the fields that in some sense informed it; by its domain of application; and by its methods. For complexity, the domain is the domain of adaptive phenomena. Sometimes we say the domain of networks of adaptive agents. That could be the neurons in your brain, that could be traders in an economy, it could be individuals in a city, it could be an ancient civilization made up of guilds.

David Krakauer: So any system that has many agents, each capable of learning and forming some kind of representation of the world in which they live, to some strategic end, is what we study. Historically that kind of system has been very difficult to predict. Look how good we are at predicting the orbits of the planets and look how bad we are at predicting financial collapse. That gives you a sense of the differences between those two domains.

Jim Rutt: So we’ve now drawn a line at “strategic” for complexity. I remember to this day reading, in complexity writing, that things like sand piles are sometimes considered examples of complex systems, and they certainly aren’t over the strategic line.

David Krakauer: Yeah, I think that’s an interesting point. I think that gets at this distinction between a definition based on history and method versus domain. So, some of the methods used to understand avalanches in sand pile formation come from non-equilibrium statistical mechanics, which is a very important tool in our toolbox. But the domain is the domain of the complicated, not the complex. So that’s an interesting point and that’s why you always have to bear in mind which of these three perspectives you’re taking.

David Krakauer: There are many folks out there who think about complexity purely methodologically. So they’ll say things like, oh, you do agent-based modeling, or you do genetic algorithms, or you use scaling theory, or network theory, and that’s a very method-driven approach. So I think at any given point you’re triangulating between these perspectives on what a complex problem is.

Jim Rutt: People used to often say, yeah, complexity’s about nonlinear dynamics, right? But again, that’s a lens to look at complex phenomena.

David Krakauer: Exactly. That’s a good example of something that goes back to the very end of the 19th century, in the work, in particular in celestial mechanics, of Henri Poincaré, where he discovered these very anomalous, chaotic trajectories in so-called three-body systems. The kind of mathematics that Poincaré was developing has proven to be extremely useful for studying chaos, for example, in a whole range of different domains. But that is just a method, as you say, even though it’s a very deep one. And physicists would use it to explain celestial mechanics, which is not a domain of complexity.

Jim Rutt: Something that comes up in popular discussions of complexity, and believe it or not, complexity has enough visibility now that there are actually popular discussions of it, even on the internet, is the relationship between complexity and reductionism. Could you speak to that a little bit?

David Krakauer: Yeah. So reductionism is, in some sense, the most accomplished method of science. It’s what we’re sort of taught: what are things made of, what are the fundamental building blocks of nature? We will subject any system we find to extraordinarily destructive processes to try and ascertain what its constituents are, the most obvious of which are supercolliders. We can do that with anything that we observe. We can do it with people. Unfortunately, if you do that with people you end up with an aggregate mass, which is not very interesting.

David Krakauer: But you can determine that they’re made of cells, that the cells are aggregated into tissues and organs and so on. So that’s reductionism: understanding by means of enumerating the constituents. Now, there’s a huge problem with that. Just think about a mechanical clock. If I said, “Jim, what’s a clock?” And you said, “Well, I could tell you what a clock is.” You take it apart, you point to its components, and you find that there are hands, there’s a bezel, there are pins and wheel trains and so forth.

David Krakauer: That’s a quite different kind of understanding than the understanding of a watchmaker who can put those parts together in order to create a simulacrum of the solar system in order to tell the time. The same goes for water. You would say, “What is water?” “Well, it’s made up of lots of these hydrogen and oxygen atoms, H2O molecules.” You can even describe the basic properties. It’s a polar solvent. But if I asked you, “What is a phase of water? How do you go from a liquid to a gas to a solid?” That transcends your understanding of what it’s made of.

David Krakauer: Go to the brain: you can say, “What is a brain?” “Well, it’s 86 billion neurons and about 85 billion glial cells. We understand that they conduct their signals electrochemically. We know that they are connected chemically.” But how do 86 billion cells create differential geometry? How do they make a podcast? So all of these examples I’ve given show the critical challenge of emergence, which is the flip side of reduction. Which is: how do the components come together and manifest, in a collective, properties that are not trivially predicted from the properties of the individual components? We call that emergence.

David Krakauer: The history of science is a history of reductionism, and with the advent, to be honest, of computational power and algorithms and some new theories, we’re only now beginning to create a science of emergence, which is where complexity tends to place more emphasis. It’s not that we don’t do reduction; it’s just that reduction has been in some sense the dominant school, and we’re really pushing this other one, which is: don’t just break it, make it.

Jim Rutt: I like it. A little analogy I’ve used, I love to get your reaction to it is, reductionism is the study of the dancers while complexity is the study of the dance.

David Krakauer: That’s good. Exactly. All of these metaphors, if you like, capture this fascinating property of collectives. That’s, in some sense, the challenge: deriving the very interesting behavior of the collective from the parts. If you look at the history of physics, you see how limited that has been; there are areas where we’ve done it very well. The best known is thermodynamics and statistical mechanics, which gives the ideal gas law, a statement that relates essentially pressure, volume and temperature.

David Krakauer: That can actually be derived from an understanding of the parts. Because temperature is not a real thing, as you know; it’s just the average kinetic energy in the system. So that’s an example where we do have a theory of emergence. Another very good example is the theory of superconductivity, where electrons momentarily overcome what we call Coulomb repulsion, form Cooper pairs, and manifest this property of having zero resistance at relatively high temperature.
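[Editor’s note: the point that temperature is just the average kinetic energy of the parts can be illustrated with a toy calculation. This is a sketch added for illustration, not part of the conversation; the particle count is arbitrary and units are chosen so that mass and Boltzmann’s constant are 1.]

```python
import random

def temperature(velocities, mass=1.0, k_B=1.0):
    """Temperature of a 1D ideal gas: (1/2) m <v^2> = (1/2) k_B T,
    so T = m <v^2> / k_B (with units chosen so k_B = 1)."""
    mean_sq = sum(v * v for v in velocities) / len(velocities)
    return mass * mean_sq / k_B

# Microscopic state: many random particle velocities.
random.seed(0)
velocities = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# The macroscopic "temperature" is nothing but a statistic of the parts.
T = temperature(velocities)
```

The macroscopic quantity is fully determined by the microscopic constituents; that is the sense in which statistical mechanics is a worked-out theory of emergence.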

David Krakauer: Again, that’s an area where we actually know how to go from the atomic-level constituents of matter through to this collective phenomenon, which is non-dissipative transport. If you ask the same question of the brain, we don’t know where to begin. There’s nothing like a theory of superconductivity for thought.

Jim Rutt: Or even for life, really. Right?

David Krakauer: Definitely not for life, and not for the economy or anything else that we need to survive and that complexity scientists care about. That’s the gold standard.

Jim Rutt: Another distinction which is getting some attention out in the world, a lot of it from a guy named Dave Snowden, who’s maybe the leading thinker about applying complexity science to the business world. He makes a distinction between the complicated and the complex. His short-form description of the difference between the two is that a complicated thing, like say your clock, could in theory be taken apart and put back together again.

Jim Rutt: But a complex thing could not be. For instance, imagining taking apart a cell and putting it back together again and having it work, not going to happen. Do you find that a useful distinction?

David Krakauer: I don’t like his definition. I like the distinction between the complicated and the complex, but I think he’s wrong about how you make the distinction.

Jim Rutt: Oh, let’s hear your version of the distinction.

David Krakauer: Okay, let’s get the right one on the table. The complicated part is true: a clock is complicated. One way to think about it is that all of its parts obey the classical rules of physics, and the difficulty might be that we can’t, by the way, reassemble it. But that’s odd: that’s a statement about our current level of knowledge and ignorance, not a statement about the domain. It’s too subjective a definition. So, there are many people I know who can’t assemble something much simpler than a clock, let’s say a toaster.

David Krakauer: I’m not going to say that’s a complex system because they can’t do it. Right? So for his argument to work, you’d have to invoke some omniscient being that couldn’t do it, and then it’d be a sort of contradiction in terms. So it doesn’t work. The complicated thing is something whose elements are essentially what we would call stationary. They don’t adapt, they don’t learn, they don’t evolve. The engineer can evolve the clock, but the clock elements don’t. Right? The moon doesn’t become a better moon; it doesn’t become a better orbiting mass because it’s been doing it for 4 billion years.

David Krakauer: So complicated systems’ theory scales very well. So Newton’s general theory of gravity, which is of course much further extended by Einstein in his general theory of relativity, is a beautiful theory that applies to all matters. Everyone in the universe, and regardless of the size of the system you’re studying, that same elementary set of equations apply. That’s a complicated system. A complex system has this property that we would call extensivity. Which means that as the system gets larger, the description length, the theory gets larger too.

David Krakauer: So it’s like saying, if the universe was twice the size, Einstein’s theory would have to be twice as big or more. The same theory wouldn’t apply. That’s the hallmark of the complex system. It’s actually, in some respect, irreducible and the theory has to grow in proportion to the growth of the phenomenon itself. That’s because the components adapt.

Jim Rutt: That’s a very nice distinction. Though it does put things like deterministic chaos on the non-complex side of the line, presumably.

David Krakauer: Yeah, that’s actually a very interesting example, and it’s a really important distinction of ours. We make a distinction between the size, or description length, of the phenomenon and the size, or description length, of the generating process. So you can write down, as Poincaré did, a very short system of equations that generates chaos.

Jim Rutt: Yeah. Or the Lorenz equations. There are many examples.

David Krakauer: Beautiful example. It’s just a three-dimensional deterministic set of differential equations. But if you generate the behavior, it looks nearly random. So the description length of the output grows with the system size, and that’s really important. In complexity science, when we say theory, we mean a theory of the generating process and not a theory of the output.
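[Editor’s note: the Lorenz system makes this concrete, and it can be simulated in a few lines. This is an illustrative sketch, not from the conversation; the parameters are the standard textbook values, and the crude Euler integrator and step size are arbitrary choices.]

```python
def lorenz_step(p, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz equations:
    dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta z."""
    x, y, z = p
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_divergence(steps=20_000):
    """Track two trajectories that start 1e-6 apart in x and return the
    largest gap in x observed along the way."""
    p, q = (1.0, 1.0, 1.0), (1.0 + 1e-6, 1.0, 1.0)
    worst = 0.0
    for _ in range(steps):
        p, q = lorenz_step(p), lorenz_step(q)
        worst = max(worst, abs(p[0] - q[0]))
    return worst

# The generating process is three short equations, yet a microscopic
# difference in starting point is amplified enormously: deterministic chaos.
gap = max_divergence()
```

The description length of the generator stays tiny while the output trajectories look random, which is exactly the distinction being drawn here.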

Jim Rutt: Beautiful. Beautiful. That will help people, I think, a lot. Another example of where that distinction comes in is in some of these cellular automaton examples. Rule 110: very simple to define the process, but the output sure as hell looks complicated or complex. So from your definition, the rule 110 cellular automaton is definitely not complex.

David Krakauer: Yeah, that’s actually very interesting. That’s exactly true. So the rules in Wolfram’s classes, the elementary cellular automaton rules, are the 256 rules of a simple one-dimensional deterministic model. The rules are all more or less the same length, but their behavior is wildly different. Some of them are Turing universal, some of them produce chaos, some of them produce very simple crystalline [inaudible 00:13:20] structures. So yes, you can’t be fooled by the description length of the object. That’s why, again, the universe is complicated, not complex.
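[Editor’s note: an elementary cellular automaton can be written in a few lines, which makes the point vivid: the whole generating rule is one byte, while the output can be arbitrarily intricate. The rule usually cited in this context, and presumably the one meant, is Rule 110, which was proved Turing universal. A sketch, added for illustration; the grid width and step count are arbitrary.]

```python
def eca_step(cells, rule):
    """Advance a one-dimensional binary cellular automaton one step.
    `rule` is the Wolfram rule number (0-255); bit k of `rule` gives the
    new cell value for the neighborhood whose three bits encode k."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=64, steps=32):
    """Run from a single live cell in the middle; return all rows."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = eca_step(cells, rule)
        history.append(cells)
    return history

# The entire generating process fits in one byte (the number 110);
# the behavior it generates does not.
rows = run(110)
```

Printing the rows (say, as `#` and `.`) shows the familiar intricate left-growing triangles, all from an eight-entry lookup table.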

Jim Rutt: Now the distinction between emergence on one side and the units on the other gets us into something which is often called top-down causality. What’s the current thinking on that as a valid topic?

David Krakauer: Well that’s interesting. So this is a very confused philosophical area and is, I think, unnecessarily mystified. The canonical example of top-down causality is in fact what we’re now doing. Let’s give a very simple example. I decide that I want to take a sip from a coffee cup. So somehow that thought that exists as a mental construct now is translated into the activity of hundreds of millions of neurons. So how do you go from a thought to this microscopic domain of mechanism? That’s what people mean by top-down causality.

David Krakauer: The mistake they’re making, of course, is to think that the thought is not already something encoded in hundreds of millions of cells. It’s just that what you have access to introspectively in your consciousness is a much lower-dimensional subset. So, again, normal causality would be like a billiard ball, right? You say, I’m going to hit this ball into that one, and we have a theory of elastic collisions that allows us to predict how long the ball will roll on the surface. Straightforward causality.

David Krakauer: Top-down seems to be a violation of intuition because it goes from one to many. It sort of says, “I go from a very coarse-grained phenomenon to a very high-dimensional phenomenon.” But the mistake being made is that you’re not, because the thought already exists in the very high-dimensional domain. So I think it’s actually a flawed concept.

Jim Rutt: Well, let me give you another example that I use sometimes when people are talking about top-down causality. I’d love to hear your thought about it. Think about the chemical elements in one of your cells: a mixture of carbon, oxygen, hydrogen, sulfur, various trace minerals, et cetera. If they were not embedded in a far-from-equilibrium system of homeostasis driven by chemical reactions, those elements would kind of disperse in a random, physicsy fashion.

Jim Rutt: But because they’re embedded in this much larger thing, a network of homeostasis that operates on the scale of seconds, their behavior is very, very different. And hence one could argue that the causality of their behavior is driven by the higher order system, i.e., an organism in which they live

David Krakauer: Well, I would actually... I think that’s a good example, because I’d call that complex causality as opposed to top-down causality.

Jim Rutt: Could you tease those apart for us?

David Krakauer: I can. So most of us, in our lives, are kinds of amateur Sherlock Holmes-like thinkers, right? We try to deduce causes, cause-effect relations, from a series of events. That’s what our brain likes to do. We are particularly delighted when we can find simple ones. So if you think about economics: well, what happens if I turn the interest rate up or down? I can make very simple predictions that work on average in relation to consumer demand and spending. That’s simple causality.

David Krakauer: Now, complex causality is the example you gave, where I want to understand a matrix of components, all of which are highly connected, and there is no dominant factor that explains the behavior of the system. You have to somehow integrate the behavior of all of those elements over different time and space scales. That’s complex causality. The reality, Jim, is that that’s much of the world, but much of our science is complicated science, which means it wants to find theories of very short description length, which we translate, by the way, into simple causality.

David Krakauer: So if you look at the structure of a solar system, what is the primary causal factor? Gravitation. You can basically get all of the different large-scale structures that we observe out of gravity plus initial conditions and the distribution of mass. Very simple causality. Unfortunately, when it comes to things like cancer, or neurodegenerative disease, or the state of the economy, it’s not at all clear that there is something as simple as gravity, a social gravity, that’s what the Enlightenment wanted, that is the dominant causal factor.

David Krakauer: So you have to integrate over all of these components, and it becomes very messy, and the theory becomes very large. That’s why, by the way, and we’ll get there I guess, machine learning is so popular. Because machine learning puts all the factors in and gives you an opaque model of enormous complexity.

Jim Rutt: Now there’s one proposed theory of complexity, particularly in things like biological systems or other adaptive systems, which is [inaudible 00:17:53] far-from-equilibrium energy flux. In the example I just gave you, one could argue that the metabolism as distributed by homeostasis within the system is essentially the single driver which allows complexity to arise. I know the Santa Fe interpretation has generally been somewhat anti [inaudible 00:18:12]. Could you speak to that?

David Krakauer: Yeah, well, it’s interesting. There’s a sense in which what [inaudible 00:18:17] says is both elementarily true and also fundamentally false. I guess the larger picture here is that the systems that school, the Brussels school, studies are what we would call dissipative dynamical systems, or time-irreversible dynamical systems. We would all agree that that is one of the prerequisites for complexity. There is an arrow of time; we can get into that. But it’s not useful.

David Krakauer: In other words, that elementary insight doesn’t really help us very much. There is this very intriguing allure of the one dimensional mono-causal theory. It’s everywhere. If you think about it. If you just have eyes to look for it. It’s in the economy and we call it price, it’s in psychology and they call it IQ. In any domain you can sort of find this push to build a theory around the lowest dimensional representation that’s possible. So as you say, to say that there is some energy density, or now the popular theory, the free energy principle, we can talk about that in a bit.

David Krakauer: It’s looking for these one dimensional scalers which are being optimized. Their weakness is always the same. It doesn’t really give you a purchase on the details of the system, which by the way are what you need if you’re going to build them. That was the great beauty by the way of Newton and Einstein, because the theory which is simple translates into practice, ballistics and orbits, Einstein, GPS. Whereas, these complexity versions of complication actually not very useful.

Jim Rutt: That’s an interesting insight. You’ve hit on a topic in passing: time. I know that has been a topic at SFI of late. A few episodes back we had Lee Smolin on as a guest. He has some very radical theories of time, some of which are in opposition to Einstein. He argues that time is fundamental and space is emergent. He also tries to argue that maybe time is universal and not relativistic. What is your current thinking, since you mentioned the arrow of time? Where does time fit in the family of phenomena?

David Krakauer: I like to think there are two very distinct approaches to the question of time. One of them is the physics approach to time, which is exactly as you just mentioned: is time fundamental or is time emergent? That’s another, by the way, currently popular theory, that time is not really there. I just don’t believe that, incidentally. Then there’s another approach to time, which has to do with the direction of time, what we would call the arrow of time. That’s associated with Eddington. It asks, why does time flow forward preferentially and not backwards?

David Krakauer: That’s quite distinct from what time is, right? Because you can still invoke the clock as the model of time in the arrow-of-time framework. So there are two very different theories out there about the direction of flow of time. The physical one is based on the second law of thermodynamics: that entropy increases in time, disorder increases in time. The simple way of thinking about that is that there are more ways of being wrong than right. There are more ways of breaking an egg than making an egg.
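[Editor’s note: the “more ways of breaking an egg” intuition is just counting microstates, which is easy to make quantitative. A sketch added for illustration, with coins standing in for molecules; the choice of 100 coins is arbitrary.]

```python
from math import comb, log2

n = 100  # number of coins, a stand-in for molecules

# Ways to realize each macrostate:
ordered = comb(n, 0)          # every coin heads: exactly one arrangement
disordered = comb(n, n // 2)  # an even heads/tails split: ~1e29 arrangements

# Boltzmann-style entropy S = log W, here in bits:
entropy_ordered = log2(ordered)        # 0 bits
entropy_disordered = log2(disordered)  # roughly 96 bits
```

The ordered macrostate is realized in one way, the disordered one in about 10^29 ways, so a randomly jostled system overwhelmingly drifts toward disorder; that asymmetry in counting is the statistical content of the second law.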

David Krakauer: Now, Darwin had a very different view. Darwin said there’s another law as fundamental as the second law of thermodynamics, which is the creation of order in open systems by natural selection or by learning. We are interested in what we call complex time, which is when you combine the two theories. You combine the second law of thermodynamics with Darwin’s law of natural selection, and out of that come entirely new temporal phenomena, including lifespan, or the lifespan of a civilization, or the lifespan of a city.

David Krakauer: These phenomena, complex phenomena, are not accounted for in any physical theory. To explain them, you need to integrate the physical hour of time with the Darwinian hour of time and that’s what we’re trying to do.

Jim Rutt: Very cool. I know some of the empirical work that SFI has done on the characteristic lifespans of various entities, like corporations, cities and animals, et cetera, based on their size, would provide some data to work on that theory. Could you talk a little bit about the relationship between the empirical side and the theoretical side with respect to complex time?

David Krakauer: So with complex time, as I said, we’re trying to balance the entropic processes, things that fundamentally disorder a system, against the ordering processes of learning and selection. Where those two dynamics find their fulcrum will determine the time scale, the duration, the lifespan of the phenomenon of interest. So let’s take a few examples. Most of this is what I would call phenomenology; in other words, we don’t have a theory of this. So, as you know, SFI researchers have done a lot of work on the half-life of companies.

David Krakauer: When I say half-life, I mean that because the distribution of lifespans is exponential. That is, it mirrors the half-life of elementary particles and atoms that are decaying through some appropriate sequence. Now where does that come from? We have some basic models that can tell us why that must be true. They’re based on what’s called the Red Queen dynamic. Which you know very well, which is that in very competitive systems where both parties are nearly evenly matched, then the distribution of lifespan is expected to be random.

David Krakauer: [Inaudible 00:23:26] would be this, if I played Casper of at chess, in every game he’ll kill me within, let’s say, 10 moves. So the games would all be very short, right? If I please someone who’s almost equally matched, then the match will be longer, right? You can ask how long will those be? If the outcome is no longer based on differential skill, it’ll be based on chance. So the company example is a very intriguing one because it suggests very efficient markets. When the lifespan of companies, for example, under monopoly, live longer than you’d expect, by conformity to distribution, you know the market’s inefficient.

David Krakauer: So that’s one example. The same, by the way, the same pattern is found in the duration of an evolutionary lineage. So, for example, how long does a species survive? How many millions of years, or a genus, or a family? Again, the same argument applies. They’re very competitive. The Darwinian process tends to optimize, to find near optimal strategic solutions, not optimum. In those regimes, chance dominates. So you get, again, an exponential distribution. But if you look at cities, they don’t follow an exponential distribution.

David Krakauer: In fact, some people, my colleague Jeffrey West, would say they’re immortal. Now that’s not true. But they live a much, much longer time, which suggests that it’s not competition between cities that’s determining their life span, but something inside of them. Where it’s something endogenous dynamic. That’s a dynamic that we still don’t understand by the way. So why cities have a very skewed lifespan towards very long lasting unlike companies and unlike evolutionary lineages?

Jim Rutt: Well, one thing that just comes to mind, thinking out loud here, is that cities do have a monopoly on their space.

David Krakauer: That’s interesting. So that they don’t allow any other organization access to their central resources.

Jim Rutt: Yet everybody else that we talked about has to operate in four dimensions, while the city basically operates only in time. It’s got its physical dimensions essentially fixed. That’s overly simple, because of course cities grow and shrink. But to the degree that they currently occupy the land, they have a monopoly on that land.

David Krakauer: Well, I think that’s actually a key insight, because if you think about it again, lifespan being this balance between entropy and selection, it’s interesting that you can make a cell immortal by giving it enough energy to fix its errors. In other words, the Darwinian process can compensate for the entropic process if the resources are available. If you look at your own body, epithelial cells live on the order of days, neurons live on the order of decades, and the brain takes up disproportionate metabolic free energy and uses it during sleep to repair cells.

David Krakauer: So you’re right. If you have access to a disproportionate quantity of free energy, you can use it to nearly, halt the production of entropy.

Jim Rutt: That’s interesting. Entropy versus [inaudible 00:26:18], that’s very good. Very good. Now, we’ve talked all around it, but let’s focus more directly on theories of complexity. At the higher level, where are we with developing either fundamental or partial theories of complexity? What would you say are the cutting edges of complexity theory being worked on, and what would you describe as theory that we can safely say is sound and can be relied upon to build on?

David Krakauer: So I would say that, again, we would want to try and create a kind of typology of theory. One way to do that is to make a clean distinction between models and theories. One simple test I use to distinguish between those two is the difference between something that explains how versus something that explains why. I’m going to be using this distinction. So just think about the game SimCity. SimCity is a model of the city, and it actually could be used to do science. But it doesn’t tell you why there are cities, why there are cities and not just a bunch of marauding bandits or nomads.

David Krakauer: So fairly tries to do the why. So when I talk about theory, I am not talking about models. Because of course there are an infinite number of computer models in the world, nearly infinite, for many, many phenomena. They can’t, any of them, explain why the world exists in the state it does. So when it comes to theory, there are not many. The current efforts are trying to do the following. The history of theory in physics is basically based on energy and out of that cemeteries.

David Krakauer: Fundamental symmetries of nature of the kind that my colleague here at SFI, who died this year, Murray Gell-Mann, won the Nobel prize for in predicting the existence of elementary particles based on symmetries. In the 1950s, a whole series of new theories started to emerge. Information theory, game theory, cybernetics or nonlinear control theory. They were an effort to reckon with what we were describing earlier as the complex domain.

David Krakauer: One way to think about all of them is that their efforts to combine energy with information. Now if energy is about symmetry, information is about broken symmetry. You think about it in some elementary way. If you ask me, “David, when I leave the Santa Fe Institute to get to town, should I turn left or right?” That moment you’re in the symmetric state because left and right are equivalent. I break the symmetry by saying, “Go left.” The history of complex forms is the accumulation of broken cemeteries.

David Krakauer: That’s what history is. It’s what your brain does. It’s what your genome is. It encodes contingencies, decisions, make that enzyme go right, use that weapon, et cetera. So all of these new theories that are emerging are trying to reconcile the symmetric and the broken symmetric energy and information. That union feels like computation. One of the reasons why computation has emerged as not only a tool to use to do science, but is the dominant metaphor for what complexity is doing.

David Krakauer: Because it’s the one cause of unified framework of thought that integrates energetics and information. That’s the basic background, I think, to what all theories of complexity feel like.

Jim Rutt: You mentioned computation. More and more I’m seeing computation as a measure of complexity: what is the shortest program it would take to produce the complexity that one sees in a system? Is that a reasonable way to think about measures of complexity, and what other measures might there be?

David Krakauer: It’s reasonable. One of the problems is that they tend to suffer from the subjectivity of human ingenuity, which is probably one of the characteristics of the domain of [inaudible 00:29:57], by the way, which we might talk about as a philosophical challenge. Everyone is familiar with the idea of observer dependence in the Copenhagen interpretation of quantum mechanics, but actually complexity science has a much richer observer dependence. I mean, the complexity of the phenomenon actually relates in some very profound way to your subjective ability to theorize about a system.

David Krakauer: A good example is what you just gave, which is computer code. This is sometimes called algorithmic depth. It’s associated with a very early SFI researcher at IBM, Charlie Bennett. The idea is: how long is a piece of code that can take you from a random input to an ordered output? That’s called algorithmic depth. It feels a little bit, if you think about it, like natural selection. It says, I want to go from a state of uncorrelated fit to the environment, where everything is approximately equally bad, to something where one thing is a very good fit.

David Krakauer: The number of generations required to create a good fit is the organic analog of algorithmic depth. So evolution, according to people like Daniel Dennett, our colleague at SFI [inaudible 00:31:00], is actually doing a computation, but it’s doing a computation in a population over many generations, as opposed to in the memory of a computer over the course of a thousand years. So there's a very strong analogy between ordering processes in evolution and what algorithms achieve in solid-state devices.

Jim Rutt: How applicable is that to other kinds of systems? People have made claims of Kolmogorov complexity as a fundamental measure of complexity.

David Krakauer: One of the big problems with this way of thinking is, even though in some sense it’s certainly right, it’s extremely difficult to measure. Kolmogorov complexity is a very good example. It’s articulated in the framework that we call Turing machines, invented by the great British mathematician Alan Turing to solve a longstanding question in mathematics: whether or not a proposition could be algorithmically determined in advance to be true or false. He showed very famously that you could not. It’s called the halting problem.
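
An illustrative aside, not from the conversation: Kolmogorov complexity can't be computed exactly, but any lossless compressor gives a computable upper bound on it, which is how it is usually approximated in practice. A sketch using Python's standard library:

```python
import random
import zlib

# An ordered string and a (pseudo)random string of the same length.
ordered = b"01" * 5000
rng = random.Random(0)
noisy = bytes(rng.getrandbits(8) for _ in range(10_000))

# Compressed size is a computable upper bound on description length:
# the regular string shrinks to a tiny, program-like description,
# while the random string is essentially incompressible.
k_ordered = len(zlib.compress(ordered, 9))
k_noisy = len(zlib.compress(noisy, 9))
print(k_ordered, k_noisy)
```

The gap between the two compressed sizes is the whole point: low apparent complexity for the regular string, near-maximal for the random one, even though neither number is the true (uncomputable) Kolmogorov complexity.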

David Krakauer: It was a wonderful tool that Alan Turing invented to prove a theorem in mathematics. But you can’t use Turing machines in the real world, as you well know. So there have been efforts to construct alternatives. Currently the most popular framework is the theory of circuits, which can be shown to be Turing complete in some instances, and circuits actually have a more direct mapping onto the real world. As you know, that’s something that many of us here at SFI work on: take a natural phenomenon like an economy, or a conflict, or a brain, and, using principled means, turn it into a circuit.

David Krakauer: Then in some sense, the size of that circuit is a measure of the complexity of the phenomenon because it has a direct correspondence to it. So there are efforts along those lines.

Jim Rutt: When you talk about circuits, you’re talking about basic electrical circuits?

David Krakauer: Like, say, AND and NAND gates [inaudible 00:32:48]. So, exactly a circuit that you would know well. The nice thing about circuits is, again, something you’ll be familiar with: there are very principled means of compressing them. So you can represent a computational scheme in a circuit, compress it, get the most minimal form, and then the complexity of that form is in proportion to the complexity of the phenomenon itself.
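
To make the circuit-size idea concrete, here is a toy of the editor's own devising, not SFI's method: a brute-force search for the smallest tree circuit over AND, OR, and NOT that realizes a given two-input Boolean function, with the minimal gate count serving as the complexity measure.

```python
from itertools import product

# Truth tables over inputs (x, y), written as 4-tuples (one entry per input pair).
X = (0, 0, 1, 1)
Y = (0, 1, 0, 1)

def min_gates(target, max_gates=8):
    """Smallest number of AND/OR/NOT gates in a tree circuit computing `target`."""
    best = {X: 0, Y: 0}  # truth table -> minimal gate count found so far
    for n in range(1, max_gates + 1):
        found = {}
        for t, g in best.items():
            if g == n - 1:  # NOT on top of an (n-1)-gate circuit
                found.setdefault(tuple(1 - v for v in t), n)
        for (a, ga), (b, gb) in product(best.items(), repeat=2):
            if ga + gb + 1 == n:  # AND/OR joining two subcircuits
                found.setdefault(tuple(p & q for p, q in zip(a, b)), n)
                found.setdefault(tuple(p | q for p, q in zip(a, b)), n)
        for t, g in found.items():
            best.setdefault(t, g)
    return best.get(target)

AND = (0, 0, 0, 1)
XOR = (0, 1, 1, 0)
print(min_gates(AND), min_gates(XOR))  # 1 4: XOR is the "more complex" function
```

Under this measure XOR needs four gates, e.g. (x OR y) AND NOT (x AND y), while AND needs one: a small, fully worked instance of "the size of the minimal circuit is the complexity of the function."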

Jim Rutt: Very interesting. As you… Hey, you may remember I was actually involved with two different companies that used evolutionary computing techniques to either synthesize or improve circuits, and both companies were very successful, and those people are still around. I’m going to have to reach out to Trent McConaughey, who will likely be a guest next week, and ask him about this idea of circuits as a measure of complexity.

David Krakauer: Yeah, it’s very nice. There’s actually an SFI researcher, Josh Rochelle, who’s one of the world’s experts on this.

Jim Rutt: Very interesting. Now, let’s pop up one level and talk about science overall right now. That’s something you and I have talked about over lunch more than once, which is that we’re seeing more and more science that is data-driven with relatively little regard to theory. What’s your take on this phenomenon?

David Krakauer: The way I like to describe it, and I alluded to this a little earlier, is that we’re now experiencing a schism like the schism in the church. The schism is a direct outcome of complexity. The way to think about it is as follows. If I could explain consumer choice using classical mechanics, modeling individuals as billiard balls, we would have no need for machine learning; it wouldn’t be necessary, because we would have a perfectly adequate theory to explain a complex phenomenon. But we don’t.

David Krakauer: A consequence of that has been the growth of statistical models that are completely opaque to human reason, that can be shown in some very limited domains to be highly productive and more predictive than the best theory. They exist by virtue of complexity, that property that we’ve now established, which is this irreducibility of the description. Computers have no concern for elegance, right? That’s not their game. Human minds do.

David Krakauer: But at the same time there is this [inaudible 00:34:47] emerging tradition of complexity science that, as I said, grew out of game theory, cybernetics, information theory, nonlinear dynamics, non-equilibrium statistical mechanics, the theory of computation. That says maybe there’s another path, sort of a third way, and that is to develop new mathematical tools that are the analogs of the tools used by Maxwell, Kelvin and others, but describe phenomena at an aggregate, collective scale.

David Krakauer: For example, the ones we talked about: the organism, the city, the society, the economy. Use those as what we would call our state variables and predict them. That’s the complexity science approach. So what’s happening now is this schism emerging between what I call the science of statistics and prediction and the science of mechanism and understanding. Going forward, I think we’re going to see this divergence increase.

David Krakauer: So we’ll have two distinct camps: those who are content to be told what the world will do and those who are content to understand what the world does. Science historically has been about understanding. With prediction there is a kind of utilitarian excuse. But I think, given all of the challenges of the planet, utility is paramount. So how we integrate complexity science with machine learning I consider, by the way, the challenge for thinking people in the 21st century.

David Krakauer: Data is kind of agnostic about those two. More data is better for complexity science, more data is better for machine learning, but what it presents to you is very different, and I think we have a choice to make, actually. It might be an ethical choice, an aesthetic choice, about which path to pursue and, if we pursue both, how to somehow achieve reconciliation.

Jim Rutt: Of course we will pursue both trajectories. Actually we will be pursuing a third one which may actually upset your dichotomy, which is true artificial general intelligence that can solve the hardest problems in a way that is accessible and explainable to a human.

David Krakauer: Yeah, I don’t believe in that, as you know. For reasons which I think we could go into, I actually don’t believe that there will be a predictive theory of complexity that’s understandable. In other words, you either go this coarse-grained path and forfeit all those microscopic degrees of freedom that are so delicious. Jim likes to buy this kind of toothpaste. That’s what machine learning wants to tell you, and you forfeit that in favor of a much more general theory of demand. Or you go down that path and you give up understanding entirely.

David Krakauer: I think it’s something intrinsic to the domain of complexity that tells us, and I think there’ll be a theorem as general as Heisenberg’s uncertainty principle, which says you can’t know, say, position and momentum at the same time. I actually want to claim that if you can do high-resolution prediction, you cannot explain it, because you cannot throw away complex causality. You can’t have your cake and eat it.

Jim Rutt: There’s a gap between the two.

David Krakauer: Yeah. If you maintain complex causality, which I think you need to for prediction, then you’re not going to get understanding.

Jim Rutt: Very interesting insight. Now let’s switch direction again and let’s talk about your own research. I remember back in the day your one sentence summary was, The History of Information Processing in the Universe. Then you narrowed it to, The History of Information Processing on Earth. Yesterday I looked up on your website and it now says, The Evolutionary History of Information Processing Mechanisms in Biology and Culture. Why the narrowing?

David Krakauer: Yeah, I don’t know. It’s some kind of PR nonsense.

Jim Rutt: Good. That’s what I was hoping. I was hoping you weren’t pushing out in your old age.

David Krakauer: Absolutely not. In fact it’s got better. I mean, that’s the kind of nonsense that people want me to write. But here’s the real story. I think the way you started is right. Now I’ve actually extended it. I call all my work The Evolution of Intelligence and Stupidity in the Universe.

Jim Rutt: Oh, why don’t you say multiverse to make it a little bit bigger?

David Krakauer: I know. I think that the reason to restrict things to the earth is just the dictates of empiricism, because we just don’t have good enough data. But the theories that we work on ought to be sufficiently general to encompass any form of life. So my own work is on the evolution of intelligence and stupidity, in particular the machines that manifest these effects. Brains are the most obvious ones, but societies, polities, cells, proteins, genomes, they can all be said to adapt.

David Krakauer: So I need to define intelligence and stupidity for your audience. It is not IQ. To call intelligence IQ is one of those beautiful instances of cultural stupidity we might get into. Intelligence is making hard problems easy. That is, establishing a mechanism or rule system that enables you to very efficiently arrive at a correct solution. So, for example, mathematics, arithmetic, or calculus, or topology, all give you deductive rule systems that guarantee that, if you adhere to them, they will produce a correct output from some input.

David Krakauer: Stupidity is not ignorance, which is insufficient data to reach a conclusion. So you can’t be faulted for ignorance, because you can always recover. But stupidity is the application of rules that are guaranteed, even given infinite time, to give you the wrong answer. I always tell people, and everyone knows this already, instinctively: when you’re at school and you’re the smart kid in the class, people will say, “God, you make that look so easy, Jim.” And you explain that you have this rule, and then they can all do it.

David Krakauer: A stupid person has the following characteristic. Everyone looks at them and says, “Why did you make that easy problem look so hard?” Society is absolutely rife with rule systems that are stupid, that actually produce worse than random performance. Of course, as we all know, investment strategies are just full of this. So Rubik’s Cube is the example I like to give. I can give you a cube, and if you happen to know the so-called God’s algorithm on the three by three by three cube, the standard Rubik’s Cube, you can show that if you apply that algorithm, you will solve the cube in 20 moves or less. Okay?

David Krakauer: You might solve it in zero moves if I give you the completed cube. Now a stupid algorithm is like the following. You take the cube from me and you say, “David, I can solve that. No problem. Give it to me.” And you just manipulate one face. Now, if the cube is not already solved, then in the lifetime of the universe you will not solve it.
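
The one-face strategy can be checked mechanically. This toy sketch (the editor's illustration; it models only the cycle structure of a single face turn, not a full cube) shows that repeating one quarter turn can ever reach at most four states:

```python
# A quarter-turn of one Rubik's face moves 20 stickers in five 4-cycles.
# We model just that permutation; repeating it cycles through only 4 states,
# out of the cube's roughly 4.3 * 10**19 total configurations.
turn = {}
for c in range(5):            # five independent 4-cycles
    base = 4 * c
    for i in range(4):
        turn[base + i] = base + (i + 1) % 4

def apply_turn(state):
    # Apply the face-turn permutation to a labelling of the moved stickers.
    return tuple(state[turn[i]] for i in range(len(state)))

state = tuple(range(20))      # "solved" labelling of the 20 moved stickers
seen = set()
while state not in seen:      # keep turning until we revisit a state
    seen.add(state)
    state = apply_turn(state)

print(len(seen))  # 4: a one-face rule can never solve a scrambled cube
```

Because every cycle has length four, the fourth repetition restores the starting state, which is exactly the "worse than random, guaranteed to fail" character of a stupid rule.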

Jim Rutt: Assuming it is disordered enough that there is no single rotation that will solve it.

David Krakauer: Correct. That’s stupidity. You can actually go through the history of culture looking at rule systems and actually classify them. Part of what I’ve been doing is precisely that, which is trying to come up with a much more nuanced theory of what intelligence is. One of the great advantages of this, by the way, is that you can apply it at any scale, because it’s dependent on the function. So you can ask: how well does the cell follow a gradient, and is the rule provably optimal? How well does the fish shoal? How well does the bacterium divide? How well does Jim calculate? How well does an economy clear a market?

David Krakauer: So it’s a truly evolutionary theory of intelligence that allows you to express in quantitative fashion the intelligence or stupidity of any phenomenon that’s essentially solving a computational problem. That is what intelligence is. Unfortunately, our anthropomorphism and shortsightedness have caused us to restrict our discussion to one species, namely humans, and contrive one-dimensional metrics which are very uninformative, like IQ. So that’s what I try to do, and that’s why I call it the universe.

David Krakauer: Because in fact there’s a sense in which you might, as our colleague at SFI Seth Lloyd claims, describe the universe as computational. If that were true, you could actually calculate how smart the universe is.

Jim Rutt: That’ll be interesting to see. It might not be that smart; at least in intelligence density, it’s probably not that smart. That’s an interesting distinction. I’m going to ask you about a couple of things I know you’ve worked on in the past, which I think our audience might be interested in. One is the role that policing serves in maintaining social stability.

David Krakauer: So this is long-term work with a colleague here at SFI, Jessica Flack, who you know very well. Jessica and I have been interested for a long time in what you might call robustness, which is: if you’re in a non-equilibrium system where the second law is in operation, how do you maintain sophisticated states of order? One of the problems with complex systems is conflict. You can define conflict in a very simple way: it’s where the parts are imperfectly aligned. So conflict is when agents have misaligned or imperfectly aligned strategic objectives.

David Krakauer: The classic case is the so-called zero-sum condition, where my gain is your loss. That’s an extreme case, and it doesn’t have to be that extreme. So that’s conflict. Conflict is present all over complex systems. It has to be, because there’s no global information. Every agent has a slightly different window on the world, and that leads to misaligned strategic objectives even if they don’t intend to disagree. So what do you do in systems that are full of conflict? You have to develop the robustness mechanisms that maintain states of order.

David Krakauer: One of those is policing. Policing is essentially impartial intervention into disputes by third parties who have a vested interest in order. We did much work on this for years, really led by Jess’s work, looking at non-human policing and human policing. One of the very surprising things that you find is that with truly impartial policing, that is, where there’s no bias, you can maintain complex states of order with almost no police. A very, very small fraction of the population polices. But as you increase the partiality or the bias, you have to increase the number of police to maintain order.
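
To illustrate the direction of that result, here is a toy Monte Carlo of the editor's own devising, emphatically not the Flack and Krakauer model: if biased interventions are wasted, the police fraction needed to settle a given share of disputes grows as bias grows.

```python
import random

def settled_fraction(n_agents, n_police, bias, n_disputes=20_000, seed=0):
    """Toy model: a dispute is settled iff a policing agent notices it
    and intervenes impartially; `bias` is the chance an intervention
    takes sides and fails to restore order. All names and dynamics here
    are illustrative assumptions, not the published model."""
    rng = random.Random(seed)
    settled = 0
    for _ in range(n_disputes):
        noticed_by_police = rng.random() < n_police / n_agents
        impartial = rng.random() >= bias
        if noticed_by_police and impartial:
            settled += 1
    return settled / n_disputes

# With perfect impartiality, 5 police per 100 agents settle ~5% of disputes;
# at bias 0.5 it takes roughly twice the police for the same effect.
print(settled_fraction(100, 5, 0.0))
print(settled_fraction(100, 10, 0.5))
```

In this toy the required police force scales like 1 / (1 - bias), which captures the qualitative claim: impartiality keeps the policing burden tiny, and bias inflates it.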

David Krakauer: It’s a very interesting observation, because it has direct implications for what we would call police states. A hallmark of a well-ordered society is one where the number of police is relatively small in proportion to the total population. Whereas a dysfunctional society is one where, as we all know from, say, East Germany at the peak of the Cold War, everyone is in some sense acting in this policing role, which is characteristic of a non-robust state.

David Krakauer: So that was one whole set of issues we worked on in relation to the control of conflict. We’ve taken that much further, actually. We’re now at a point where we study large-scale wars across whole continents, like Africa, using techniques from statistical mechanics called the renormalization group. We’ve discovered all sorts of universalities. This is actually with a student of ours, Eddie Lee, and a postdoc of ours, Brian Daniels. Now we can do some extraordinary things.

David Krakauer: We can actually take conflicts, represent them as circuits along the lines I told you before, and, in the computer in some sense, replicate the conflicts that we observe in the world and ask, “What is the most effective intervention in order to control the conflict, to move it in a certain direction or to reduce it?” So this goes way beyond policing, because you’re no longer intervening just on single individuals but on a principled subset of the population to achieve a desired outcome.

Jim Rutt: I remember another result, I don’t remember if it was yours or somebody else’s at SFI, which was that in decentralized policing, where essentially there are social sanctions for bad behavior, you also had to have second-order policing, which was to punish the non-punishers.

David Krakauer: Yes. This is the phenomenon that’s been recognized since antiquity, which is: who polices the police? This is a very popular paradox in the evolution of cooperation. But you can actually make that problem go away by making policing low cost. So one of the challenges of policing is the risk of policing, right? There’s a free-rider problem, in other words. If you police in my society, what is the incentive for you to take a risk with your own life?

David Krakauer: What’s happened in complex systems is that they’ve developed very sophisticated voting mechanisms, or consensus-generating mechanisms, which appoint temporary policing roles at very low cost to the police. That actually eliminates the game-theoretic challenge of policing the police. It’s not so well known, but it’s crucial, actually, because if it wasn’t for that fact, no one would adopt that responsibility.

Jim Rutt: Can you think of a way to apply that to our society?

David Krakauer: Well, it’s interesting, because one way in which we apply it to our society is when someone is truly democratically elected, right? In other words, if I have confidence in your abilities, Jim, I’m going to say the following: look, I’m going to enter into a semi-binding agreement with you such that for the next period of time, let’s say a year, I’m going to allow you to mediate my disputes and I will not contest your decisions. The consequence of me not adhering to that agreement is interesting and has to be established by some second mechanism. That’s how it’s accomplished.

David Krakauer: It’s actually accomplished through systems of trust. If you don’t have the system of trust, so you have a corrupt society, then of course it doesn’t work. I think it’s very interesting, incidentally, that one of the areas where, as you well know, this question is being very hotly pursued is in blockchain technology and the associated cryptocurrencies, because trust is the fundamental problem: how you achieve it, the energy cost of achieving trust, particularly in the domain of Ethereum, where you’re now dealing with smart contracts.

David Krakauer: So I think the question of policing and the stability of ordered states in the decentralized condition is actually, socially, the world that we now live in, and resolving that in a principled way, I think, will be very, very important.

Jim Rutt: Yeah. I know SFI has now started a bit of a program in understanding what’s going on in the blockchain world, because that is interesting. My own view is that the attempt to achieve trustlessness at the foundation of many of the public blockchains, like Ethereum and Bitcoin, adds an essentially insurmountable obstacle to making them sufficiently efficient to scale. And that what we really need is some meta engineering, which decides, for a given purpose, how much trust we have to give to some entity to make the system cost effective. I mean, we know Bitcoin is ridiculously expensive, like 5% of its net worth per year in mining.

David Krakauer: You’re correct. We have been actively engaged in this. I’ll give you a beautiful example of how biology has solved it. I think the blockchain community is slowly gravitating away from proof of work to proof of stake. I’ll give you an analog, though it’s not exact, of proof of stake in the biological world. So your body is made up of billions of cells. Cancer is essentially the phenomenon where one of those cells, well, one or more, says, “I can go my own way.” But there’s a very interesting feature of multicellular life.

David Krakauer: That is, the germline, the egg and sperm cells, are sequestered early in development, and only they get transmitted into the future, not your epithelial cells. That means that there’s this extraordinary proof of stake: every cell in your body has to work for the common good of the germline. They can’t go it alone. So we have learned, I think, from the study of adaptive systems one way of accomplishing trust, and that is to limit your propagation into the future. Right? To have a sort of shared future.

David Krakauer: I don’t think the blockchain community has yet caught up with that. I think your observation is absolutely right that there is no fundamental decentralized means of achieving trust without imposing some institutional regularity on the system. As you know, I mean, this is why companies like Facebook have closed systems. Because the inefficiency of the fully decentralized system cripples it.

Jim Rutt: Yeah. There may be a time in the future, when costs come down enough for the computational substrate, that things like a decentralized Facebook would make sense. But I hear business ideas all the time for a decentralized Facebook, and I throw out three or four examples: how would you do this, right? How would you search the whole damn thing, for instance?

Jim Rutt: It turns out to be a problem that’s just not resolvable even with today’s remarkably inexpensive computation. So people who take these things as religious doctrines, absolute trustlessness, or it must be decentralized no matter what, strike me as not likely to actually solve the world’s problems.

David Krakauer: No, I mean it’s a statement of ideology, not a statement of principle. You can learn from nature. I think bio-mimicry is useful here. I think biology has accomplished some very interesting balances between decentralization and centralization. As I said, the sequestration of the germline is perhaps the best known. The other example that’s quite intriguing, outside of the animal world, in the world of plants and trees, is actually this phenomenon where a somatic cell can become germline.

David Krakauer: So flowers can grow, reproductive tissue can grow, out of somatic tissue in plants. But the reason it’s not a problem for them is because they divide so slowly. So the timescale of strategic opportunity is actually very limited relative to the life span of the organism. That’s why in the plant world they actually do achieve total decentralization, whereas in the fast-paced animal world we have to develop these meta constraints to allow for a partially decentralized system.

Jim Rutt: This is very interesting. I’m going to have to do some thinking on this and talk to some people I know that are thinking about decentralized autonomous organizations, and see if we can find some meeting of the minds here, because this might be a way to bridge these problems that I was referring to earlier. Let’s talk about one more of your former projects and then we’ll move on. As you know, it’s one that I’m very interested in, which is memetics.

Jim Rutt: You did some very interesting work with Dan Rockmore, I think, and some others on the memetic propagation of constitutions over time.

David Krakauer: Yes, this all falls into the scope of work that engages with intelligent systems, which is the means that culture has discovered to propagate good ideas into the future. It is also, unfortunately, responsible for propagating bad ideas into the future, like racism or sexism, which in some sense piggyback on all of those good mechanisms. So yeah, we were very interested in how a structuring set of codes is, on the one hand, transmitted into the future and, on the other, how it actually transforms. The constitution is a beautiful laboratory for studying that. Right?

David Krakauer: So we have this sort of 18th-century structure that encodes a set of ideals and beliefs that govern behavior, but very loosely. It’s actually one of the characteristics, as you know, of the American constitution to be very minimal. The American constitution to me is much more like a law of physics: it’s highly compact and it regulates approximate behavior. In contrast, for example, to the Indian constitution, which is far longer and includes many contingencies to regulate specific behaviors.

David Krakauer: So it sort of takes the place of state constitutions in some sense. So we were interested in whether or not the idea of Dawkins and Dennett of a meme is real. Our argument was, the only reason you’d ever invent the concept of the meme was to explain phenomena that you could not explain without it. The history of genetics is illustrative, right? The only reason we have the concept of a gene, which by the way is now eroding before our eyes, is because it could explain a certain pattern of morphology across generations into the future.

David Krakauer: This is what Gregor Mendel, the monk, did, right? He explained a regular alternation of form, and the gene was like the atom. In fact, Mendel was very influenced by physics. He studied physics, he studied meteorology, and he wanted to look for something that would correspond to an atom, something that could explain the structure of matter, and wanted to discover for life the periodic table. Thus, if you go to the world of memes, that’s what you would want to do. You want to say there is a periodic table of culture and there are rules of composition that would allow us to explain the dominant forms of cultural life.

David Krakauer: That was the large ambition of our constitutions project. I think what we discovered there was that there are units of meaning that are propagated in time, but they’re much more fluid than even genetic sequences, which themselves are quite fluid. So the idea of a meme is probably not as useful as some people think, because it’s too mercurial. For it to be valuable, it would have to have a cohesion on the timescale of many generations. There might be some phenomena that in fact do; property rights, for example. But many do not.

David Krakauer: So what we really need to do is develop a sort of diffusive memetics, a sort of more cloudy concept, which isn’t as particulate as Dawkins and Dennett wanted. That’s not surprising, because they came out of biology, right? The world of Mendel. So that’s what we were doing. We were basically using machine learning techniques to discover these kinds of cloud memes and ask how they recombined to create new constitutions. We were quite successful, I think, in doing that, but with this modified concept, which doesn’t yet exist at large, I think, which is a much more diffuse constellation of ideas.

Jim Rutt: Yeah. A body you might look into is the case law in countries like the U.S. and the UK that have British common law with a series of precedents, which are kind of hazy memes. They’re not necessarily word for word, but they’re concepts which are applicable and do evolve over time.

David Krakauer: Right. I think that’s a good example. I mean, I’ve talked to Dan Dennett about this a lot. Because this criticism of memetics is in fact insurmountable, what Dan is forced to do in the end is say that the word is the meme, but then you say, “Dan, that’s called a word.”

Jim Rutt: Why do you need to develop a new term for something that we already have a term for, the word, right?

David Krakauer: Exactly. Science is nothing if not ruthlessly parsimonious. So if you’re going to give me a new term, you’d better account for a new phenomenon.

Jim Rutt: Okay. Complexity science has, you and I both believe, great applicability to problems that society and the whole world face, but often those things are perhaps best delivered into real-world problems through metaphors of complexity. What do you think about that, what I call the complexity way of seeing, which may or may not even be formally scientific?

David Krakauer: Well, actually, I’m much more optimistic; I think we’re now at a point where both are true. So I would say the complexity way of seeing is vital, and we could actually enumerate, and I’ll mention, some areas of direct application that have an impact on everyone’s life, everyone who listens to this podcast. So on the metaphorical front: one of the great curses of the history of human intellection is simple thinking. Finding simplicity where it does not exist.

David Krakauer: We’ve already covered this in terms of looking for simple causality. Wanting to live in a one dimensional manifold, the manifold of the IQ, for example. As you know, I mean you can almost write an entire monograph about how society’s ultimate ills are the result of society’s desire to simplify unnecessarily. So in that respect, complexity is vital because again, everyone knows that you can’t talk about climate without talking about culture, without talking about economics, without talking about energy, without talking about transport.

David Krakauer: But somehow, through a completely attenuated and deficient educational system, we’ve been raised to believe that you can, and that the problems are disciplinary as opposed to transdisciplinary. I think the simplest instance of complexity thinking is to be probabilistic. So anyone with a love of sport understands perfectly well that the outcome of a game is not deterministic, right? Unless it’s like me playing a basketball game against the Chicago Bulls. So in the domains where you have knowledge, you’re subtle.

David Krakauer: But somehow when it comes to other areas, areas of science or politics or economics, people become all sorts of weird binary automata. They become deterministic. Huge problem. So thinking about probabilistic phenomena as opposed to deterministic ones, for me, is like first base towards the complexity metaphor. But you can go beyond that and start thinking about connections and systems of dependence over long timescales.

David Krakauer: So you can say, “I’m going to maximize my short-term economic interest,” and that’s fine, but you have to understand that it’s going to compromise your long-term economic interest. Any investor knows that, right? But you can extend it further and say, “Well, perhaps my long-term interests depend on my house not being under water in 20 years’ time.” There is a way of thinking in terms of systems that is immediately applicable by anyone who has the time to put into it that would be transformative of the state of the world.

David Krakauer: That’s just a metaphor. What’s so neat, I think, about complexity science is that for every metaphor you have, there is a rigorous principle that corresponds to it. So when we talk about things like weather and the economy, there’s nonlinear dynamics. When you talk about systems of dependence, there’s network theory, right? So you can dive deeper and reveal, underneath the metaphor, the model, and underneath the model, the theory. I think that’s why complexity science is so enriching: it justifies a system of folkloric beliefs.
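
[Editor’s note: Krakauer’s point that the weather metaphor rests on rigorous nonlinear dynamics can be illustrated with the logistic map, a standard textbook example of chaos. This short Python sketch is not from the episode; the parameter choices are just a common illustration.]

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# In the chaotic regime (r = 4), two trajectories that start almost
# identically diverge completely within a few dozen iterations --
# the rigorous principle behind "you can't predict the weather."

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.20000000)
b = logistic_trajectory(0.20000001)  # differs by one part in ten million

# The tiny initial difference is amplified until the two trajectories
# are effectively unrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

Sensitive dependence on initial conditions is the model underneath the metaphor: a measurement error of one part in ten million is enough to make long-range prediction impossible.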

Jim Rutt: Interesting. I’ve been seeing it happen, but slowly. I’ve been associated with the Santa Fe Institute since 2002. I got my first exposure to what I would now call complexity science back in 1997, when I read John Holland’s Adaptation in Natural and Artificial Systems. I think the first metaphor I used was co-evolutionary surfaces as a way to think about mergers and acquisitions in corporate America. But I’ve seen these kinds of ideas propagate relatively slowly. What’s your view on how they’re getting out into the world?

David Krakauer: I think we’re sort of in that accelerating phase now. It’s very curious. I mean, look, we are essentially sloths when it comes to exercising our brains; we conserve cognitive energy almost more than we conserve anything. So there is a reluctance. But I think the world is suggesting to us that we need to engage with this complexity more fully, because of the technologies of connection, because of transport, because of globalization. All of that implies you can’t really hide from complexity any longer.

David Krakauer: So that will change. Then the educational system hasn’t caught up yet, as you know. I mean, you go to school and you still take a Biology class, or an English class, or a Civics class. What does that mean? Somehow the disciplinary worldview is militating against this complexity view. We need to find a way of dealing with that directly.

Jim Rutt: Our education system is probably worse than it was when I was a kid, in that everything I hear from teachers is that a remarkable amount of their effort now goes to teaching to the test in a very reductionist fashion. They’re not even interested in mathematics or biology; they’re interested in what’s on the Standards of Learning exam that the kids have to take.

David Krakauer: I know. It’s a disaster. But this is something that you and I have in common, and where I’m very optimistic: the world of gaming. Because if you look at the sophistication of simulations and the sophistication of game designers, which is a community that we actually engage with increasingly now, they’re putting into their games precisely the kinds of complex intuitions that the world needs.

David Krakauer: So the paradox is kids going home and being reprimanded by their parents for playing SimCity or Minecraft, but I would actually argue that in some very deep way, they’re developing an intuition that they’re not getting at school and that will prove invaluable in their lives.

Jim Rutt: Yeah, I would say the things I learned playing war games back on the map board, with the little cardboard pieces, were of at least as much value as everything I learned in K-12 education.

David Krakauer: Right. I do want to take you into this area of application, because it’s so important to point out that complexity science has changed everyone’s lives. So I’ll just give you a few examples. Theoretical immunology and epidemiology: that is the application of nonlinear dynamics, agent-based models and network theory to disease. The standard protocol that the WHO uses to develop the flu vaccine is based on mathematics and methods that we in part developed here and at Los Alamos, right?

David Krakauer: So everyone who goes and gets a vaccine is touching our science directly. Network theory: anyone who’s now online and using Google, or Facebook, or Instagram, or any other network-enabled software is, in one way or another, touching algorithms and principles that were developed by SFI and affiliated researchers. Another example: the development of compounds with pharmaceutical value. Frances Arnold at Caltech, who recently received the Nobel Prize for her work, spent a lot of time here developing systems for optimizing proteins using evolutionary algorithms.
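
[Editor’s note: the evolutionary-algorithm idea mentioned here, repeated rounds of mutation and selection, can be sketched in a few lines of Python. This is a toy illustration, not Arnold’s actual directed-evolution protocol; the target sequence and fitness function are invented for the example.]

```python
# Toy mutation-and-selection loop in the spirit of directed evolution:
# copy the current best "sequence" with random point mutations, keep the
# fittest copy (including the parent, so fitness never decreases), and
# repeat until the target is reached.
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino-acid letters

def evolve(target, pop_size=100, mut_rate=0.05, seed=0):
    """Evolve a random sequence toward `target`; return (sequence, generations)."""
    rng = random.Random(seed)

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    best = "".join(rng.choice(ALPHABET) for _ in target)
    generations = 0
    while fitness(best) < len(target):
        # Elitism: carry the parent forward so the best fitness is monotone.
        offspring = [best]
        for _ in range(pop_size):
            child = "".join(
                rng.choice(ALPHABET) if rng.random() < mut_rate else c
                for c in best
            )
            offspring.append(child)
        best = max(offspring, key=fitness)
        generations += 1
    return best, generations

seq, gens = evolve("MKTAYIAKQR")  # hypothetical 10-residue target
print(seq, gens)
```

Real protein optimization searches a vastly richer fitness landscape, which is where ideas like neutral networks, mentioned next, come in: many mutations leave fitness unchanged, opening paths through sequence space.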

David Krakauer: In particular, the principles of neutral networks. Again, work developed at the Santa Fe Institute. Reversible computation, quantum computation, how to solve the problem of energy dissipation, key production in circuits that are getting smaller and smaller and following Moore’s law. The whole theory of reversible computation in part was developed early on by SFI researchers who then went on to work in quantum computation, which is obviously a very hot topic. Markets. Again, this isn’t close to you.

David Krakauer: Financial time series prediction, leverage cycles, new evolutionary theories of macroeconomics developed in complexity economics at SFI. So there are really numerous areas by now where methods which seem a little bit [inaudible 01:03:33] have found application. I like to tell people the following: if I were a physicist studying black holes, which generate a huge amount of interest, that will not be relevant to you, or your life, or anyone you know, for several billion years. But if I work in network theory, what I have to tell you will be relevant in your life within a year.

David Krakauer: So there’s something very interesting about complexity science and the domain of complexity. That is the methods that we developed to understand that domain are almost immediately useful as opposed to methods that tend to get developed to deal with phenomena that are more remote from everyone’s everyday life.

Jim Rutt: Certainly the real problems our society is facing, the monster of them all being climate, are clearly complex systems problems at multiple levels. Weather itself is a complex system, but when you couple it to culture, economics, politics, and even religion, you have a network of multiple complex problems.

David Krakauer: Exactly. Actually, developing, as you said, the metaphors, the models and the theory to reckon with that kind of world is what the world needs from its leaders. I would claim that in the 21st century, and this is without making any political editorial remarks about current leadership, the one thing that is absolutely clear is that it is incapable of thinking through complex problems.

David Krakauer: That’s got nothing to do whether they’re left or right wing is because they just don’t know how to think about complexity. I think being exposed to these ideas is going to be critical for our long-term survival.

Jim Rutt: So you’re throwing your hat in the ring, Krakauer 2024?

David Krakauer: Someone once suggested to me that with this kind of effete accent, I would go nowhere, man.

Jim Rutt: I’ll tell my listeners: despite his posh British accent, he’s actually been American since the time he was born, so he can run for president.

David Krakauer: Yeah, I mean, look, go back and see what kinds of accents the founding fathers had. I bet they were completely messed up. One thing I would suggest, if I may make a plug here: I recently edited a book, which you can buy on Amazon or anywhere else, called Worlds Hidden in Plain Sight. That book basically reviews 30 years of complexity thinking at SFI, with contributions from many of our notable researchers. I think that’s a good place for your listeners to look to get a sense of what’s been going on and the kinds of ideas we’ve been generating.

Jim Rutt: I just bought that book, actually. It’s sitting near the top of my to-be-read stack.

David Krakauer: Put it at the top.

Jim Rutt: All right, David. This has been a wonderful conversation. If you have a few minutes, do you want to take a whack at the Fermi paradox?

David Krakauer: Yeah, I will take a whack at the Fermi paradox.

Jim Rutt: Before you do that, let me remind our audience. The Fermi paradox goes back to Los Alamos during World War II. Some bright young physicists were probably talking about all the intelligent species that must be out there in this large galaxy and these many, many galaxies. They were trying to do some estimations of how many, and there were lots of them. Enrico Fermi, one of the most senior and esteemed of the scientists, came over to their table and said, “Okay, where are they?” So that’s the paradox. Maybe they should be there, but there’s no sign of them. Or maybe they’re not there. So, David, the Fermi paradox.

David Krakauer: I already told you what I work on. I work on the evolution of intelligence and stupidity in the universe. It won’t surprise you that my answer to the Fermi paradox has two components. The first is that we’re too stupid, and the second is that they’re too smart. So let me explain that. The history of science is the history of not seeing the world, right? Gravitational waves, subatomic particles, the gene, the neuron: the history of science is the discovery of signals that are a clue to how the universe works.

David Krakauer: It takes a huge amount of time to find complex patterns. So I would claim it would be extraordinary hubris to believe, given the relatively short duration of empirical and theoretical science, that we would be in a position to detect a signal generated by a vastly more intelligent form of life. So that’s the “too stupid” answer. But then there’s the “too smart” answer for them. This actually comes out of work that we did here at SFI. My colleagues Chris Moore and Michael Lachmann proved a beautiful theorem.

David Krakauer: They showed that any sufficiently optimized signal, using principles of information theory, would be indistinguishable from noise. So if aliens are communicating with each other intelligently, as opposed to broadcasting absolute garbage through their homegrown television studios, we actually wouldn’t be able to tell the difference between a signal and noise. That’s because they’re too smart. So as a civilization advances, its technological signal starts to converge on the biological signal, becoming more and more efficient, and therefore almost indistinguishable from background radiation.
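
[Editor’s note: a homely version of this point can be demonstrated with ordinary data compression. Squeezing the redundancy out of a structured message makes the resulting bytes look statistically like random noise. This Python sketch is an editorial illustration, not the Lachmann–Moore proof itself.]

```python
# A structured message has low byte entropy and compresses well.
# The compressed output, with its redundancy removed, has byte
# statistics approaching those of random noise: nearly 8 bits per byte.
import math
import zlib
from collections import Counter

def byte_entropy(data):
    """Shannon entropy of the byte distribution, in bits per byte (max 8)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A highly structured "signal": the decimal integers 0..9999 run together.
message = "".join(str(i) for i in range(10000)).encode()
packed = zlib.compress(message, 9)

print(byte_entropy(message))  # low: only ten distinct symbols, lots of structure
print(byte_entropy(packed))   # much closer to the 8-bit maximum: noise-like
```

An efficient transmitter does exactly this before broadcasting, which is why an eavesdropper with no codebook sees something very hard to distinguish from the background.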

David Krakauer: So my answers are, we’re too stupid and they’re too smart. That’s why we haven’t found them.

Jim Rutt: But you think they’re probably out there?

David Krakauer: There’s no doubt.

Jim Rutt: No doubt. The 14 year old boy still lives on in Krakauer.

David Krakauer: 14, the 10, the eight, the seven, the four, the two and the one. Yeah.

Jim Rutt: I will say I’ve been reading more and more intensely on the Fermi paradox for the last 10 years, and I’ve now gotten myself back to pure agnosticism. There are just too many unanswered questions. How hard is the high-information-fidelity DNA architecture to achieve? How about the eukaryotic cell? Maybe that was a once-in-a-galaxy long shot, or maybe not.

Jim Rutt: Fortunately, we’ll get some clues to some of these answers as we start analyzing the atmospheres of Earth-like planets and things of that sort. But that’s still ahead of us. For now, I’m purely agnostic. I can see it going either way: yes, they’re there, or no, they’re not.

David Krakauer: But you see, I would say one thing it’s really important to remember, which is that on the order of a decade ago, we didn’t even know that there were exoplanets.

Jim Rutt: That is true.

David Krakauer: Right? So now we know that every single planetary system has at least one exoplanet. So it’s remarkable to me that we’re asking why it has taken so long to discover life, when until a decade ago we hadn’t even discovered that there were Earth-like planets. So I think it’s too early to call this one. I wouldn’t be in the slightest bit surprised if, given the rate of accumulation of exoplanet detections and, as you said, the incredible advances in cosmology and astronomy, we detect unequivocal signals of biological life in the next decade or so.

Jim Rutt: Could easily happen. The atmospheric studies coming from the James Webb telescope, I believe, will be where they find it first. Well, David, this has been everything I hoped it would be: a wonderful, far-ranging, erudite conversation, with some applicability to the real world. Who would’ve thunk it?

David Krakauer: I know. Well, Jim, it’s always an absolutely marvelous occasion talking to you. I hope this has been of some interest to your listeners.

Jim Rutt: I believe they will definitely like it. Production services and audio editing by Stanton Media Lab. Music by Tom Mueller at