The following is a rough transcript which has not been revised by The Jim Rutt Show or by Jordan Hall. Please check with us before using any quotations from this transcript. Thank you.
Jim: Howdy! This is Jim Rutt, and this is the Jim Rutt Show. Listeners have asked us to provide pointers to some of the resources we talk about on the show. We now have links to books and articles referenced in recent podcasts that are available at our website. We also offer full transcripts. Go to jimruttshow.com. That’s jimruttshow.com.
Jim: This is another in our extra episodes focused on the coronavirus-19 pandemic. Our guest today is Jordan Hall, a person well-known to many of our listeners. He’s been on the show twice before. Jordan is a former corporate executive with various interesting experiences. He’s one of the most insightful and broad thinkers and writers and talkers of our time, he’s a multidimensional troublemaker, one of my very good friends, and one of the people whose opinions I respect the most.
Jim: I’ve asked Jordan to come on, and we’re going to talk broadly about complex systems dynamics and the ability to respond to crises like this, but not only this, because this is not the only crisis our civilization’s going to confront over the next 20 or 30 years. With that, Jordan, let’s jump in. Why don’t we start by talking about the intelligence-to-action arc, which we sometimes call sense-making?
Jim: One of the things I notice about our current crisis is there were intelligence reports floating up as early as late December that there was a chance of a pandemic. Chance is unknown, but much higher than 1%. At that point, it would have seemed reasonable for our civilization, or this country, to invest let’s say $10 million in doing some early pre-ramp for testing. By January, there were much stronger signals coming in from the intelligence field, and it would’ve been optimal to spend $50 million maybe to get real testing ramped up, put in some preorders for ventilators, et cetera, but none of that was done. The intelligence was never turned into sense, which was never bridged to action. If we were designing a complex dynamic system to be able to respond to situations like this, how would we do it differently?
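To give a feel for the arithmetic behind that question, here is a toy expected-value sketch in Python. The probabilities, preparation costs, damage estimates, and the fraction of damage avoided are invented round numbers for illustration, not figures from the show.

```python
# Toy expected-value check on early pandemic preparation.
# All probabilities and dollar figures are invented, illustrative round numbers.

def expected_value_of_prep(p_pandemic, prep_cost, damage_if_unprepared, damage_reduction):
    """Expected savings from preparing now, net of the preparation cost."""
    expected_damage_avoided = p_pandemic * damage_if_unprepared * damage_reduction
    return expected_damage_avoided - prep_cost

scenarios = [
    # (label,         probability, prep cost, unprepared damage, fraction of damage avoided)
    ("late December", 0.01,        10e6,      2e12,              0.10),
    ("mid January",   0.05,        50e6,      2e12,              0.20),
]

for label, p, cost, damage, reduction in scenarios:
    net = expected_value_of_prep(p, cost, damage, reduction)
    print(f"{label}: net expected value of preparing = ${net / 1e9:,.1f} billion")
```

Even with small probabilities, the expected value of modest early spending comes out far ahead under these illustrative assumptions, which is the shape of the argument being made here.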
Jordan: Well, I think one of the things that has actually been noticed in this particular environment is that a nontrivial fraction of the people in the country were, in fact, aware of the situation, were making strong recommendations, and were themselves making good choices. It’s actually an interesting, what you might call a sense-making monocropping problem. I’m going to frankly point back to the notion of the blue church as being a significant issue.
Jordan: If you recall, the basic structure of the blue church is that you have a lot of very centralized decision-making. You have sensors out to the edge. You’ve got intelligence agencies, for example, that are watching what’s going on, and then they’re handing the information analysis up-chain. It gets handed off to, say, epidemiologists, and epidemiologists… You’ve got this large network of vertically oriented structures that are trying to take information and sense-make it up, up, up the chain, but there are too many choices. There are bottlenecks, right?
Jordan: Certain decision-makers have to make choices, and oftentimes an event occurs where either the speed of the event (this exponential growth rate is a real problem for what we’re dealing with), or the complexity of the event (the set of characteristics that make it difficult to really even understand the nature of what’s happening; maybe we’ll go into a little more detail on that in a moment), or what we might call its subtlety (the probabilistic, fat-tail characteristic that Nassim Taleb talks about a lot) overwhelms the system. Those three kinds of things are just really, really hard for that kind of sense-making or choice-making infrastructure to respond to well. It either has to over-prepare, because it’s just designed to move at that thickness, or it has to under-prepare.
Jordan: What would be, potentially, very nice would be to have a mechanism where we have a substantially more distributed capacity to make choices, and where the sense-making fabric is wired up to that at a number of different levels. Imagine if you had a highly, highly distributed CDC kind of function, and you were getting some signals in the environment that there was something that needed to be taken care of, and nine people could decide, “We’re going to ramp up our spending on this kind of medical supply.” You didn’t actually have to run it up to a large department head; a small number of people could choose, because they had resources that they could allocate to particular levels of protection. What would end up happening is, as the signal became stronger, you would actually just see a larger fraction of the overall portfolio of choice-makers taking concerted actions. In fact, you could even see them building to larger collections of choices.
Jordan: What would then occur is that, if the event actually hit, you would have a reasonably well-designed response plan already in place. Then, also, because if you design the system smoothly, like the way the special operations in the military work, the ability to then rapidly upgrade capacity, on the basis of things that have already been prototyped, would also be part of your design. Does that make sense?
Jim: Absolutely, and I’m going to extend it a little bit. Let me know if you think this is resonant with what you said. Suppose we had allocated a billion dollars a year for these kinds of adaptive responses, taken it right out of the taxpayers’ money, and, through some selection mechanism, hopefully with a fair amount of randomness, had 10,000 people involved in making these decisions. Any group of those 10,000 people could allocate whatever level of authority they had to invest, right?
Jordan: Yeah.
Jim: Further, this thing could have something like a crowdfunding mechanism, so any one of these 10,000 people could kick up a crowdfunding site, based on their reading of the intelligence, and say, “Hmm, I think putting some money into getting ready for testing in December of 2019 is smart and ahead of the curve, and oh, by the way, my signing authority, or my little pod of 10 people that I can get to cosign with me, our signing authority is 100 grand, so we’re putting 100 grand in this. By the way, we’re sending this notice out to the other 10,000 that we have done this.”
Jim: That’s a signal, which will cause other people to look at the evidence and either say, “Hey, these guys are ass-clowns, and are worried about something that doesn’t matter,” or, “Hmm, maybe I ought to think about this.” This way, we get a swarming, adaptive, gradual and partial buildup of resources aimed at these problems prior to the point where you have to get a bureaucratic person to make a decision.
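Here is a minimal sketch, in Python, of the kind of mechanism being described: decision-makers with small signing authorities commit funds when the intelligence signal, amplified by the commitments of their peers, crosses their personal threshold. All class names, thresholds, authorities, and the peer-amplification factor are hypothetical choices for illustration; this is not a description of any real system.

```python
import random

# A minimal sketch of a distributed "anticipation fund":
# each decision-maker controls a small signing authority and commits funds
# when the perceived signal plus peer commitments crosses their personal
# threshold. All numbers are hypothetical.

class DecisionMaker:
    def __init__(self, authority, threshold):
        self.authority = authority      # dollars this person may commit alone
        self.threshold = threshold      # signal strength needed before acting
        self.committed = 0.0

    def maybe_commit(self, signal, peer_fraction):
        # Peer commitments broadcast to the network amplify the raw signal.
        effective_signal = signal + 0.5 * peer_fraction
        if self.committed == 0 and effective_signal >= self.threshold:
            self.committed = self.authority
        return self.committed

def run_round(makers, signal):
    """One signaling round: everyone sees the current signal and the
    fraction of peers who had already committed in earlier rounds."""
    already_committed = sum(1 for m in makers if m.committed > 0)
    peer_fraction = already_committed / len(makers)
    return sum(m.maybe_commit(signal, peer_fraction) for m in makers)

if __name__ == "__main__":
    random.seed(1)
    makers = [DecisionMaker(authority=100_000,
                            threshold=random.uniform(0.1, 0.9))
              for _ in range(10_000)]
    # Signal strength rising week by week (e.g., Dec 2019 -> Mar 2020).
    for week, signal in enumerate([0.05, 0.1, 0.2, 0.35, 0.55, 0.8], 1):
        total = run_round(makers, signal)
        print(f"week {week}: signal={signal:.2f}, committed=${total:,.0f}")
```

Under these toy assumptions, commitments stay near zero while the signal is weak and then ramp smoothly as both the signal and the peer fraction grow, which is the swarming, gradual buildup being described.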
Jordan: Yeah, and that’s a… Obviously, there’s all kinds of holes in that design, but it doesn’t mean that it’s a bad direction. I think that’s a good direction of what would be a very functional design. It’s actually interesting to notice that one of my friends, a guy named Adam Robinson, uses signals in the financial markets to help him get a sense of what’s going on in the world, and that’s actually not that different.
Jordan: For example, if you take a look at stock prices, you actually saw that the airline stocks, particularly international airlines, began to move into strongly negative territory, in comparison to broader markets, quite early into the event. The people who are in a position where they have skin in the game, and therefore have both interest and motivation to have really sensitive awareness of what’s likely to happen in the future, were already discounting airline stocks well into… We’re talking like early January/late December, and I’m not meaning Chinese airline stocks. I mean international airline stocks.
Jordan: If you could imagine, if there was a way… I don’t know exactly how to do it, but if there was a way for them to create a long position, which is basically what you’re saying, in response mechanisms… In some sense, like right now, it’s funny. If you took a trading pair, and you compared airline stocks to telecommunications companies, like Zoom Conferencing, you would see, of course, that Zoom Conferencing has gone way up and airline stocks have gone way down, so you’d be able to construct a long position on the social infrastructure that is responsive to the environment.
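As a rough illustration of the trading-pair idea, here is a small Python sketch that watches the ratio of a travel-sensitive price series against a remote-work proxy and flags when it diverges sharply from its recent baseline. The price series and the alert threshold are invented for illustration; a real version would pull actual market data.

```python
# Sketch of a trading-pair early-warning signal: watch the ratio of a
# travel-sensitive series (e.g., an airline index) to a remote-work proxy
# (e.g., a video-conferencing stock) and flag unusual divergence.
# The price series below are invented; this is illustration only.

def pair_signal(airline_prices, remote_prices, window=5, z_alert=2.0):
    ratios = [a / r for a, r in zip(airline_prices, remote_prices)]
    alerts = []
    for i in range(window, len(ratios)):
        baseline = ratios[i - window:i]
        mean = sum(baseline) / window
        var = sum((x - mean) ** 2 for x in baseline) / window
        std = var ** 0.5 or 1e-9
        z = (ratios[i] - mean) / std
        if z < -z_alert:   # airlines falling hard relative to remote work
            alerts.append((i, round(z, 2)))
    return alerts

if __name__ == "__main__":
    airline = [100, 101, 100, 102, 101, 100, 97, 93, 88, 80, 70]
    remote  = [50, 50, 51, 50, 51, 52, 54, 57, 61, 66, 72]
    print(pair_signal(airline, remote))  # days on which the pair diverged sharply
```

The point is not the particular statistic; it is that people with skin in the game leave a readable trace, and a simple ratio against a rolling baseline is enough to surface it early.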
Jordan: Now, if you could… What you’re basically saying is, can we build an infrastructure that’s more direct and hands-on that has a similar construct? A bunch of people, each with some form of skin in the game, have some mechanism to choose to make, on their own account, on their own decision authority, certain kinds of investment on the basis of what they think is unfolding, and then you have some mechanism where other people can see that and respond. Yes, you can do that.
Jordan: Then the next thing I just wanted to bring up is this notion of… How do I say it right? It’s like chaos, and the transform from chaos into order: the very earliest stages are the hardest, right? You just haven’t got a clue of what’s going on. I’ll give you an example right now in hospitals. This is not the case in China or Italy, because they’ve already been through it, but as for the details of what exactly is going to go wrong in hospitals as the case count starts to ramp, the answer is we don’t know. It’s actually a mystery. It’s fog of war.
Jordan: The learning curve, on that front, is extremely steep, but once you’ve learned a little bit, if you have mechanisms in place to scale those learnings across the entire system, you can very rapidly respond. If you actually wire up a system where you’ve got highly distributed capacities that allow people to use their local sense-making and make the choices that are appropriate, with skin in the game, send the signal out to the rest of the environment, and actually prepare the actual infrastructure in a heterogeneous way, and then you’ve also built the mechanisms for taking early prototypes and scaling the learnings of those prototype systems quite quickly, then you have a response infrastructure that can handle rapid and complex and subtle systems dynamics.
Jim: Yes, however, yes, you could wait to learn from experience, but I would suggest another potential use of, shall we call it, this anticipation fund, is one could do simulations, I mean, with live bodies, right? For not a lot of money, a few million dollars, you could take-
Jordan: Nice.
Jim: A medium-sized county someplace, and say, “We’re going to literally overrun it with bodies, and simulate a pandemic and see what breaks.”
Jordan: Awesome. That’s funny. If you could just find a way to construct us an environment where people were empowered and incentivized to do that, where making the wrong bet caused them to feel some kind of negative consequence, or making the right bet gave them an asymmetric positive consequence, then yeah, you could absolutely see some small town going, “You know what? We’re going to try this.” Then those learnings would be at least… They’d be meaningful. In those early stages, meaningfully more than zero is a significant differentiator.
Jim: Yep, well, that’s where I see this complex system dynamics response unit. They’re the ones that are going to see it, and they can basically bribe people into it, go to a medium-sized county and say, “Hey, guys, we’ll pay you a $3 million bonus to participate in this. We’re going to spend another $7 million rounding up a bunch of people from central casting, to put ketchup all over themselves coming out of their various orifices, and we’re going to imagine that you guys have a zombie apocalypse grade pandemic here in your county. Are you in?” Right?
Jordan: Well, so what’s-
Jim: Sooner or later, someone’s going to say yes.
Jordan: What’s interesting, as you’re saying that, is it is somewhat well-known across the Internet, that there was, in fact, a coronavirus simulation that was done at the level of, I think, the World Health Organization, or some kind of transnational organization, not that long ago, like the-
Jim: November, I believe it was.
Jordan: Yeah, so the key is not necessarily… There’s something about that, that the notion of doing that kind of an idea exists, and the challenge, I think, is to figure out how to take the learnings that have already been accumulated by organizations like the military, and just try to learn how to scale those kinds of ideas into the broader environment. What I mean there is that, round about the Vietnam War, we began to realize that we weren’t fighting the Nazis anymore, and that the nature of the field of combat had a lot more pace of change and flexibility, and, by the way, variants of different kinds of enemies, getting into fourth-generation warfare and asymmetric warfare. The military just had to learn to build a completely different capacity to deal with something which, in the military domain, is quite similar to what we’re talking about at the level of complex systems dynamics in general. It goes, okay, great.
Jordan: Then the next point is just to recognize that, I guess in some sense, obviously now, this is not esoteric. We’re spending… I don’t know exactly what the bill on this fucking thing is going to be, but I know it’s going to be pretty big, and-
Jim: It’s certainly with a T, right?
Jordan: Yeah.
Jim: We’re talking several T’s. That’s why I’m saying, you can afford to have a slush fund of some considerable size. As long as you can design it fluidly, and, as you say, with skin in the game and feedback dynamics, it makes a shit-load of sense to have a system dynamics response team spending serious money every year, doing war games, developing capability, making bets, sending signals, et cetera, to avoid getting whacked for big T’s, right?
Jordan: Right. Now, we shouldn’t, how do I say this, underestimate the challenge of actually building something like this, because, of course, you’ve got to deal with the fact that… For example, in the current circumstance, as I think you’ve tweeted, there is a series of decision points where a set of interventions on the part of the country radically reduces the impact, but the interventions have a cost, and the information environment is subtle and complex. The ability to make a choice to say, “Okay, I’m going to make this bold intervention at X cost, which guarantees that I’ve actually made an investment at this cost,” and then not know for sure that you actually made a wise investment, runs into all kinds of political complications, as we’re seeing right now, literally.
Jordan: We’ve got, on the one hand, some cohort of people that are saying, “Look, shut everything down, quarantine everything, do it for four weeks, and then…” What was it called, The Hammer and the Dance, I think, that meme got out.
Jim: Yeah, that was really good, The Hammer and the Dance. If you guys haven’t read it, look it up. Type in Hammer and the Dance into the Google, and you’ll see a damn excellent essay.
Jordan: The dance part is then, okay, let things open up, and then keep an eye on things with testing and tracing, all kinds of other protocols, and boom, you can migrate. This is like an efficient frontier. Of course, that’s going to cost something, all right? That’s going to have real impact on the economy, and on human beings. Let’s not be unaware of that.
Jordan: Then you’ve got the other side, and by the way, the other side is not batshit insane. This [Levitt 00:15:18] guy, his models are not even vaguely unreasonable, and his models say it’s actually not that big a deal. If we just sort of sat around, you would actually see that it’d be a pop and a fizzle, and it’d kind of be over, and then you would save all that energy that you would otherwise have put into the economy. Of course, there’s a whole bunch of people who don’t want to have negative economic consequences. You’ve actually got a real, significant challenge: even as you can definitely do stuff that rises up from a distributed network, there are points at which choice-making reaches levels where having a mechanism that can resolve that level of tension is, how do I say it, decidedly nontrivial. Exactly how you do that is tricky.
Jordan: Now, of course, the answer to that is an answer that’s going to be tricky for us to deal with, which is compartmentalization. If the entire United States is treated as a single object, and the whole thing has to make a single choice, as a chunk, obviously what that means is that we’re putting more chips on that bet, whereas, for example, in the case of Europe, Italy makes one choice, and Spain makes another choice, and the Netherlands makes another choice. To be sure, particularly in the case of a pandemic, there’s contagion cross-border. We should be aware of it, but at least there’s a decoupling, so in principle that should allow there to be heterogeneous choices, and then an ability to basically have resilience, right? You’ve polycropped instead of monocropped.
Jordan: There’s something to be… We have to really think about how we actually construct the network, or the geometry, of our choice-making infrastructure, so that we have the capacity to make these kinds of choices in a way that doesn’t create very, very heavy, all-at-once decisions that have to be made at the top, but can actually allow smaller pieces to make effective choices that are, in some cases, maybe weighted differently. Does that make sense?
Jim: Yep, and one of the things that’s also becoming clear to me is that we should be, as a society, really thinking carefully about how to build our society in a much more modular fashion, so that you go down to the county level, and the county level has a level of resilience to be able to stand in place for 60 days.
Jordan: Right.
Jim: Some extra warehouses, et cetera. In which case, when the New Rochelle thing started, Westchester County could have just said, “All right, boom, Westchester County’s now on lockdown.” Westchester County, by the way, is self-sufficient for 60 days. No one’s going to suffer. We would’ve probably nipped the New York outbreak in the bud, but because Westchester County is, in a confusing fashion, interwoven with New York City and everything else, and there’s no simple way to break the circuits, there was no simple decision to be made.
Jordan: Right.
Jim: And so no decision was made until it was too late.
Jordan: Yeah, and so this is… I think if we pop up above… I don’t know if you mentioned this in this call or in what we were chatting about before, but the message that, certainly, I have been sending, and I think you and I have both very much been sending to the world at large, is that this particular crisis, the COVID-19 crisis, and then the financial crisis that is connected to it, is an example of just the new normal. This is the world we’re living in, and it’s a world where we’re going to be experiencing a very large number of systemic perturbations that will have characteristics of exponential consequences, and they’ll have characteristics of complex systems consequences, where, for example, a response at the medical level actually has implications in terms of unemployment lines.
Jordan: We actually just have to re-engineer the way that we do things as a civilization to be adaptive to the environment that we find ourselves in. I mean, that phrase should just be obvious, and then we should just look at the reality and say, “Okay, what is the environment we find ourselves in,” and then you start making the moves. If we want to talk about what’s the… What are our possible end states? How might this thing play out?
Jordan: I can tell you one thing, with an extremely high level of confidence, like damn near certitude: it’s not really a matter of if we make these sorts of changes; it’s a matter of when, and how painful it will be. Take the exact scenario that we’re in: if we had just been paying attention better, and we had just built the capacity to respond to the signals we were getting and made better choices earlier, we could’ve saved ourselves an enormous amount of pain and an enormous amount of money, all right? Take that lesson, and apply that lesson broadly.
Jordan: This event is the equivalent of the first guys who were getting sick in China, because I can look at this event and say, “Oh, wow. This kind of thing is going to begin to happen more and more and more, and the longer I wait, and the longer I stall, changing stuff that I need to change to be able to be responsive to reality, all that’s happening is that I’m delaying the implications and raising the stakes. Eventually, I will for sure change, because reality has a very strong capacity to make me do what it wants.” The best choice is just to figure out how to adapt to it now, so you minimize the negative consequences, but-
Jim: Of course, that-
Jordan: Human beings are tough to sell on that.
Jim: Of course, that requires people moving from a linear, essentially Newtonian, approach to the world to a true complex systems approach, which is slowly propagating, but it is, by far, not the norm, nor how even very well-educated and experienced people operate.
Jordan: Yeah, that’s for sure. I mean, the problem, of course, with a lot of these things is that even if you hold those kinds of ideas cognitively, as ideas, if you haven’t actually experienced them in your life, then it’s hard to act on them. But, fortunately, in some sense, everybody right now is getting some sense of it. We’re seeing exponential curves. Everybody’s producing exponential curves, and we’re getting to watch the dots. Every day the dot stays on the curve, we’re like, wow, look at that. That’s what an exponential curve looks like, and there it is.
Jordan: Of course, by the way, Elon Musk’s critique is sort of beautifully accurate and completely irrelevant, which is that there are no exponential curves in nature. It’s definitely a sigmoid, or, by the way, there are things that are exponential, and then collapse. Those are also things that exist in nature, but the first part of the sigmoid, treat it as an exponential, because in terms of making choices, that’s the way that you have to deal with it.
Jim: Absolutely, and we do not want to get into the top of the sigmoid here, fans. Let’s be clear about that, right?
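To make the “treat the front of the sigmoid as an exponential” point concrete, here is a small Python sketch comparing a logistic (sigmoid) curve with the exponential that matches its early growth. The growth rate, carrying capacity, and initial case count are arbitrary illustrative numbers, not estimates for this or any real epidemic.

```python
import math

# A logistic curve and the exponential that matches its early phase.
# Parameters (growth rate r, carrying capacity K, initial cases x0) are arbitrary.

def logistic(t, r=0.3, K=1_000_000, x0=100):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

def exponential(t, r=0.3, x0=100):
    return x0 * math.exp(r * t)

for day in range(0, 71, 10):
    print(f"day {day:2d}: logistic={logistic(day):14,.0f}  "
          f"exponential={exponential(day):18,.0f}")
# In the first couple of weeks the two are nearly identical, which is why,
# for the purpose of making choices early, the front of the sigmoid should
# be treated as an exponential; later they diverge as the sigmoid saturates.
```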
Jordan: Right. I think the moral lesson is, to the extent possible, just recognize that this ain’t the 1950s, and there are lots of good reasons to understand that the environment we are in is what it is largely because of the world that we’ve built, right? The fact that we have billions of people, or millions of people, moving every day through giant global transportation networks is a big reason why this thing was able to hit so hard and so fast and surprise so many people.
Jordan: The fact that we have increasingly networked our electronics, our power systems, into cybernetic systems means that there will be a cyber event of this sort at some point in the future. Fill in the blank. There’s just a very large number of phenomena that are part of the world that we live in that have these kinds of characteristics, and so hopefully what will happen is that the pain and the learnings, the forced learnings, the unpleasant learnings, of this particular event will lead to a choice to actually respond with long-term effectiveness, to actually say, “Okay, let’s really do the hard work of re-engineering systems that can actually be adapted to this environment.” That would be a really great thing.
Jim: Yep, absolutely great, and that would be a big positive learning, to help our civilization head in a meaningfully more intelligent way towards what comes next, because we are going to have a whole bunch of these things between now and the end of the century, whether it’s the backlash from climate, or whether it’s the hacker attack from hell, or whether it’s… It certainly will include more and more of these pandemics, so we’ve got to be ready to deal with a world that’s much more complex than we’re ready for today, because we’re already there.
Jim: Let’s hop back to something we talked about a little bit before this call, which is how to start to take action and prototype in this area between sense-making and small scale action-taking and large, bureaucratic movements. I think we talked about the agile philosophy as a way to essentially start action, prior to big, bureaucratic buy-in.
Jordan: Yeah, so the military went through this development and built up special operations. Then the software industry went through the same basic kind of learning curve and built up Agile, all right? I mean, you know this. There was the waterfall technique, which works really well when you’re building moon rockets, but the software industry began to deal with the reality that the landscape was changing rapidly, and it was very difficult to actually know what the end state was going to look like.
Jordan: They had to change their development methodologies to use this thing called Agile, which basically meant that you built something, and then you saw what happened. Then you built more, and then you saw what happened. You built more. You built an iterative feedback loop; basically, you built a tighter OODA loop, if you know that language.
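As a toy illustration of that tighter loop, here is a Python sketch of an observe-orient-decide-act cycle feeding back on itself against a moving situation. The environment model, the 0.5 decision threshold, and all the function names are invented for illustration; this is not a formal statement of OODA or of any particular agile framework.

```python
import random

# A toy OODA loop: observe the environment, orient against what we believe,
# decide, act, and feed the result straight back into the next observation.
# The "environment" and scoring here are invented purely for illustration.

def observe(environment):
    return environment["signal"] + random.gauss(0, 0.1)   # noisy reading

def orient(reading, beliefs):
    # Blend the new reading with prior belief (a crude running average).
    beliefs["estimate"] = 0.7 * beliefs["estimate"] + 0.3 * reading
    return beliefs["estimate"]

def decide(estimate):
    return "ramp_up" if estimate > 0.5 else "hold"

def act(decision, environment):
    if decision == "ramp_up":
        environment["capacity"] += 1

def ooda(cycles=8):
    random.seed(0)
    environment = {"signal": 0.2, "capacity": 0}
    beliefs = {"estimate": 0.0}
    for cycle in range(1, cycles + 1):
        reading = observe(environment)
        estimate = orient(reading, beliefs)
        decision = decide(estimate)
        act(decision, environment)
        environment["signal"] += 0.1          # the situation keeps developing
        print(f"cycle {cycle}: estimate={estimate:.2f}, "
              f"decision={decision}, capacity={environment['capacity']}")

if __name__ == "__main__":
    ooda()
```

The shorter each pass through that loop, the less the world has moved by the time you act on what you observed, which is the whole argument for Agile over waterfall in a fast-changing landscape.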
Jordan: We can talk… I mean, obviously, I tend to go more theoretical, but if you talk about it in terms of practicality, if you think about the same basic mechanism of empowering people to respond quickly, you can also think about how your response capacity could use a lot of Agile methodology. It’s actually interesting to contemplate what that would look like and how that would empower people, particularly now that we’re talking about things like UBI going from insane to mainstream. Isn’t that crazy?
Jordan: You and I, like 10 years ago, were saying this needed to happen. Universal basic income needed to happen. I think we were pretty much laughed out of the room. Then Andrew Yang was laughed out of the room three months ago. I think everybody right now is only dickering on how much and how long.
Jim: And how to implement it. It’s quite remarkable.
Jordan: Yeah.
Jim: Let’s ask-
Jordan: By the way, just to… What I’m saying is that you could actually imagine… Imagine universal basic income is, in some sense, conceptualized as a standing reserve of people’s time focused on this general-purpose decentralized sense-making and choice-making, where you could actually have makers, people who are building mechanical capacity, looking at what’s going on and saying, “Oh, I’m actually going to spin up a project group focused on being able to fabricate N100 masks, because as I’m seeing the data, it’s telling me that N95 masks aren’t actually a very good idea,” and you get a tiny little microfabrication facility that’s able to prototype something small, but if it works, then again, the ability to scale ideas is actually very easy.
Jim: Yep, and having that communications link to share. I’ll give you an example. Our local Staunton Makerspace is already working on prototypes for shields to go over masks, so that they’ll last longer. They can be 3D printed. Somebody up in the DC area designed it. Unfortunately, they didn’t post the design publicly. They require you to go and ask permission, which our people have, and gotten the file. Hopefully, this file, once it’s vetted, will get out into the world, and makerspaces all over the world can start building these things, giving them to hospitals, that will triple or quadruple the useful life of their masks.
Jordan: Mm-hmm (affirmative), yeah, and it ends up with a… It’s a very different topology, but you said something that I think is very important, which is the communication channels, the ability to have open comms, so that good ideas actually can be perceived and flow, not closed comms. Then there’s this ability for people to identify good ideas and up-regulate those ideas, and this is very much like David Snowden’s point. That’s how you operate in complexity: there’s never an endpoint. You’re always moving, right? Because the system’s always changing, your basic choice is more of this and less of that. You’re constantly moving through an environment of up-regulating certain things and down-regulating certain things.
Jordan: Building an overall system, at the economic and at the sensing level, that has that kind of a characteristic to it, is the story. That’s the thing that we’ve got to figure out. It doesn’t look like… This is the thing. We have to work it like a punctuated equilibrium. Almost everything we’re building right now is a combination of 1950s and ’70s era kludges, and then this little eruption that happened in software, this trying to figure out how to operate in a non-1970s way, which nonetheless still doesn’t fully operate the way it naturally knows how to.
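A minimal sketch, assuming Python and made-up feedback numbers, of the “more of this, less of that” dynamic just described: competing options get multiplicatively up-regulated or down-regulated as feedback arrives, and the weights are renormalized each round. The option names and the 0.5 learning rate are purely illustrative.

```python
# Minimal sketch of up-regulating and down-regulating options based on
# feedback, in the spirit of "more of this, less of that". Names and
# feedback values are invented for illustration.

def reweight(weights, feedback, rate=0.5):
    # Multiplicative update: good feedback grows a weight, bad feedback shrinks it.
    updated = {k: w * (1 + rate * feedback[k]) for k, w in weights.items()}
    total = sum(updated.values())
    return {k: v / total for k, v in updated.items()}

if __name__ == "__main__":
    weights = {"open_comms": 1 / 3, "closed_comms": 1 / 3, "do_nothing": 1 / 3}
    rounds = [
        {"open_comms": +0.8, "closed_comms": -0.2, "do_nothing": -0.6},
        {"open_comms": +0.5, "closed_comms": -0.4, "do_nothing": -0.7},
        {"open_comms": +0.6, "closed_comms": +0.1, "do_nothing": -0.8},
    ]
    for i, feedback in enumerate(rounds, 1):
        weights = reweight(weights, feedback)
        print(f"round {i}: " + ", ".join(f"{k}={v:.2f}" for k, v in weights.items()))
```

There is never a final answer in this picture, only a continuously shifting allocation, which is the Snowden-style point about operating in complexity.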
Jim: Yep, we had a very interesting conversation on the podcast with Dan Mezick, who’s trying his damnedest to put Agile into big corporations. Here it is 2020, and I was horrified at how little progress they’re making, right? It’s happening, but very, very slowly. As you say, in most of the world, it might as well still be 1975. I imagine if you looked in the government, it’d be even worse.
Jordan: Yeah, well, I happen to know the IRS is still using, what is it, COBOL? For running their entire software system, some sort of… Yeah, I think it is COBOL, which gives you-
Jim: Yeah, a 1950s-vintage language, absolutely. Well, let’s go to one last topic before we wrap up here. I mean, these extra podcasts are aimed to be short. That is, one of the things that seems to be lacking in our national response, certainly lacking in our national response, is coordination, vertical and horizontal coordination. As you know, one of my pet models from the past, on a smaller scale, of course, is the Apollo mission control system, which was this astounding real-time and non-real-time series of decision loops, all linked together with 1960s-vintage technology, but it worked unbelievably well.
Jim: Is it possible to imagine building something like that, so that we could have a truly coordinated, multi-time-scale, multi-geographic-scale way to respond to problems like this? So that we don’t have one governor doing X and the other governor doing Y. We don’t have the president sending a signal that’s at 90 degrees from what the head of NIH is saying. Is it possible to have on standby, and activate on need, a mission control system for our whole society to deal with one of these major systemic risks?
Jordan: Yeah, I think the answer is yes. I don’t think it’s trivial, but I think it’s yes. Let’s just try to make it somewhat concrete. All right, so imagine this. Imagine… By the way, none of this is in some sense hard. Imagine if you had a database that had everybody in it, and it had a mapping of all of their skills, and maybe a certain kind of psychological profile. It’s like the way they think.
Jordan: Then you also had the aforementioned sensing, that group sensing things, all right, looking around and seeing stuff, and there was a way to escalate this sensing to the point where you could activate a much more intense investigation, right? Think of it as a model of the way that the human brain actually runs attention. You actually know this, because you built a software model of it, right?
Jordan: There’s peripheral attention, where you’re sort of paying attention broadly and diffusely to lots and lots of things. Then there’s a way for there to be a phase transition, where you actually give very focused attention. What was that? All right? You hear a crackle over there, and all of a sudden, the entire system allocates resources to pay very focused attention to that problem.
Jordan: Imagine if you had that, where you could actually have something where diffuse attention of our decentralized environment is orienting, orienting, orienting, and then, when a certain amount of energy is pointed to something, it pops, and you could then do… You push a button, and the right expert, the right people of competence around the problem domain, swarm to the problem domain. Maybe you get 30 distinct working groups that are all looking at it, and all of their outputs are put into a general location, so you actually get this… It’s a Bayesian distribution of distinct perspectives of these different working groups.
Jordan: By the way, I would imagine you’d actually probably want to iterate that two or three times, which is to say that you take the output of the first process, take all of that, and make that the input of an iterative process where you hand it to entirely new groups, or you scramble the groups. You actually have a really powerful model for shifting cognitive bias and groupthink and stuff like that.
Jordan: My guess is that you could probably build something like that, that could do two or three iterations in not that long, like a period of a week, with 40 or 50 people. You could get to the 80th percentile of what our collective intelligence, our capacity as a population, is able to make of that event, and at a very high level, like a radically higher level than we’re capable of right now. Here are all the important questions that we don’t know the answers to. If we want to answer them, we have to expand this thing, and so it’d be like a phase transition, from a diffuse environment to a very concentrated environment, which could then have another phase transition or de-escalation.
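Here is a minimal Python sketch of the diffuse-to-focused phase transition being described: many cheap sensors scan in parallel, and when enough of them register something, the system escalates and spawns independent working groups whose separate estimates are pooled as a distribution rather than a single answer. Every threshold, count, and noise level below is a hypothetical illustration.

```python
import random
import statistics

# Sketch of a diffuse-attention network that escalates to focused attention:
# many cheap sensors scan in parallel; when enough of them notice something,
# the system "pops" and spawns independent working groups whose estimates
# are pooled. All numbers and names are hypothetical illustration.

N_SENSORS = 50
NOTICING_THRESHOLD = 1.0    # how strong a local reading must be to register
ESCALATION_THRESHOLD = 18   # how many sensors must register before escalating

def diffuse_sense(true_risk, noise=1.0):
    readings = [true_risk + random.gauss(0, noise) for _ in range(N_SENSORS)]
    return sum(1 for r in readings if r > NOTICING_THRESHOLD)

def focused_investigation(true_risk, n_groups=30):
    # Independent working groups produce separate estimates; keep the spread
    # of views rather than collapsing to a single answer too early.
    estimates = [true_risk + random.gauss(0, 0.3) for _ in range(n_groups)]
    return statistics.mean(estimates), statistics.stdev(estimates)

if __name__ == "__main__":
    random.seed(7)
    for day, true_risk in enumerate([0.1, 0.2, 0.4, 0.9, 1.5], 1):
        noticing = diffuse_sense(true_risk)
        if noticing >= ESCALATION_THRESHOLD:
            mean, spread = focused_investigation(true_risk)
            print(f"day {day}: {noticing} sensors fired -> ESCALATE, "
                  f"pooled estimate = {mean:.2f} +/- {spread:.2f}")
        else:
            print(f"day {day}: {noticing} sensors fired, stay diffuse")
```

The de-escalation path described next is just the reverse transition: when the focused groups report back that the signal was noise, the system drops back to cheap, diffuse scanning.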
Jordan: De-escalation would be: so, as far as we can tell, we were in a red alert. You heard the crack. You look over. It’s not a tiger. Keep going. Or, okay, actually we need to pay more attention. What is it, the parasympathetic nervous system? Which is it? I always forget, because the naming always throws me off.
Jim: Yeah, it’s like if you hit your knee and your leg twitches. That’s your parasympathetic nervous system.
Jordan: [inaudible 00:32:49] triggers arousal. This is why I keep Daniel around.
Jim: Well, arousal would be deep in the limbic brain.
Jordan: No, let’s just say one of those two systems. I look over. I focus my attention, and then I actually get more pattern recognition that the event that I’m looking at is a real situation. Then what I get, of course, at the biological level, is a much higher level of arousal, which includes the activation of the limbic system, right? I start secreting adrenaline. I have a higher respiratory rate, and a substantially larger amount of neurological resources are focused on, what the fuck is that, right?
Jordan: Now the larger system is poised to really orient its attention, and increasingly, as you say, at that point, if my focused-attention working group starts throwing out signals that this is important, then my distributed-actuation guys are going to take a look at that and say, “Oop, better pay attention now.” I’ve got my makers working on building masks. I’ve got… Hospitals are starting to gear up. That’s a mechanism that can actually have a nice feedback loop.
Jordan: I think the answer is yes. I don’t think it looks like Apollo, but I think we can take a lot of the learnings of what they did at an abstract level, and then say, “Okay, topologically, how can we actually replicate that, using contemporary tools to do something that is functionally equivalent to what they did, but is simultaneously able to use the whole of the population, is able to operate at light speed, and is able to do so in the context of an arbitrarily large number of possible threat domains.”
Jim: And at an arbitrary scale. Of course, an arbitrary scale doesn’t mean always the largest, because these things aren’t always going to be this big.
Jordan: Right.
Jim: But there will be other ugly things, right? For instance, SARS was handled right. We did bottle that up. But there’ll be something between SARS and this, so this thing needs to be agile with respect to its allocation of resources, be a bit ahead of the curve, never behind it, but know when to essentially de-escalate.
Jordan: Exactly.
Jim: I think we’ve got a great design, very, very high level functional specification there. Now we have to get some whiteboards and get goddamn down to it.
Jordan: Yeah, well, my understanding is we’ve got $4 trillion to work on it, and more, so this shouldn’t be that hard to do.
Jim: You would think, just the shit that falls out of people’s pockets. Any final thoughts, Jordan, on how people should be orienting to all this?
Jordan: Well, yeah, I think that from my point of view, we need to be looking at this, again, from a complex perspective, so let’s take a look at the spillover effects. I mean, it’s relatively obvious right now that the financial and economic implications are at least as significant as the medical implications, and we also should be looking at the mental health implications.
Jordan: There’s a whole lot of different things that are going on right now, and being myopically focused on the thing on the ground, because it’s very salient and has a bit of a Hollywood movie feel to it, is unwise. We actually need to have our vision much wider in taking a look at the whole set of choices that are being made, and there’s a lot of choices that are being made that are going to have extraordinarily significant lasting implications. I don’t really feel particularly confident in the quality of the folks who are putting those choices together just yet. We, the people, may actually have to start taking matters into our own hands and presenting them with what we think are, in fact, the right ideas.
Jim: Yep, that makes a lot of sense. It’s time for we, the people, to step forth. Well, thanks, Jordan. As always, unbelievably interesting. I’m sure our audience learned a lot and had some serious stimulation.
Jordan: All right.
Production services and audio editing by Jared Janes Consulting. Music by Tom Muller at modernspacemusic.com.