The following is a rough transcript which has not been revised by The Jim Rutt Show or Dave Snowden. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Dave Snowden. Dave is the creator of the Cynefin Framework and originated the design of SenseMaker, the world’s first distributed ethnography tool. He divides his time between two roles. He’s the founder and chief scientific officer of the Cynefin Company and the founder and director of the Cynefin Center. His work is international in nature and covers government and industry looking at complex issues related to strategy and organizational decision making. He has pioneered a science-based approach to organizations drawing on anthropology, neuroscience and complex adaptive systems theory. He’s also an academic and has published in the area of complexity, but no need to go into all that. You can look him up online.
Dave’s also a returning guest. He was one of my earliest guests way back in EP 11 when I barely knew what I was doing. In fact, I recall I recorded my podcast with him from a hotel room, something I’ve seldom done since because of the crappy irregular nature of hotel internet, but it was a good conversation nonetheless. If you like what you hear today, go check that one out. So welcome, Dave.
Dave: Yeah, it’s good to be back.
Jim: Yeah, I always enjoy talking with Dave. Very interesting person who has been looking at complexity, which regular listeners to the show know is an obsession of mine, from a mixture of the scientific perspective and especially from the practical and applied side, which I think makes Dave a particularly interesting thinker. Today, we're going to center our conversation around a document Dave co-authored called Managing Complexity (and Chaos) in Times of Crisis: A Field Guide for Decisionmakers. No doubt we'll go afield from there, but I think it's a very interesting place to start. But before we do that, I would like to ask Dave to do a brief introduction to his Cynefin framework.
Dave: Okay. So Cynefin actually originated, based on the work of Max Boisot, in knowledge management when I was in IBM looking at informal and formal systems. Then it mutated over time into a complexity-informed framework. So it works off a basic principle: there are three types of system, ordered, complex and chaotic, and there are phase shifts between them. We often use the metaphor of solid, liquid and gas because that indicates a phase shift between states, but it also introduces the concept of the triple point. That's the point where it's equally probable whether something will become solid, liquid or gas. And in Cynefin, that's called the aporetic domain.
And aporetic is a key word. Every Greek schoolchild knows it, but nobody in the West does. It means a question you can only answer if you think differently about the problem. So you can't just straightforwardly answer it. So that's a key aspect of Cynefin. And then at another level, it divides order into two, clear and complicated, and it has liminality. So it's a three-level framework.
Jim: Now, one thing that people in the world of complexity science, and also applied complexity science, like to talk about is the distinction between the complex and the complicated. What's your take on that?
Dave: Cynefin does that, and it's also useful to go back to origins. The Latin root of complicated is to unfold, whereas the Greek origin of complex is entangled. Something which you can fold, you can unfold and fold again, and it stays the same thing. Something which is entangled is constantly shifting and constantly changing. So for me, it's a really important distinction, and human beings have learned how to make things complicated. We're actually quite good at it as a species. It gives us predictability. The trouble is, when we assume a complex system is complicated, we get it radically wrong. Alicia Juarrero, whose new book is coming out shortly, and who wrote Dynamics in Action, which is a brilliant book, has this wonderful metaphor. She says, "A complex system is like bramble bushes in a thicket. Everything is entangled with everything else, and the only thing you know with certainty is that there are unintended consequences. So whatever you do will have unintended consequences."
Jim: Yeah, my own definition, which, I'm actually surprised to say, got blessed as not bad by the Santa Fe Institute last fall, and I probably hijacked some of this from your thinking and other people's, is that in a complicated system, typically, you can take it apart and put it back together again because the logic is implicit in the statics of the design. While in a complex system, you can't take it apart and put it back together again and have it work the way it did before, because so much of the information is in the dynamics. For instance, you can't take a human cell apart down to its chemicals, put them back together again and expect it to work. It won't. You can't stop the economy and then restart it and expect it to be where it was before. While you can do both of those things with a lawnmower motor or even a Boeing 777: you can take it apart, put it back together again, at least if it's on the ground, and it will still work. What do you think about that as a useful distinction?
Dave: Yeah. Paul Cilliers, one of the first people to work in this space, who was a good friend and died of a brain hemorrhage, made the distinction between an aircraft and a mayonnaise. So an aircraft is complicated, a mayonnaise is complex, and I've always liked that image. I think the other thing I'd add, though, and I think this is where it starts to get controversial, is that in a complex adaptive system, there's no linear material causality. And that's a really important principle, because a complex system has dispositionality and is modulated, but it doesn't have causality in any meaningful sense of the word. And that's why it's radically different as a way of looking at the world.
Jim: Yeah, it does not have easily mapped causality. And in fact, even a deterministic complex system will typically pass through so-called deterministic chaos, where in theory it might be possible to map cause and effect, but in practice it can't be done.
Dave: Yeah, and you get into the quantum layer in there, and the properties of the whole are always different from the properties of the parts. Superconductivity is a good illustration of this, because people can't predict superconductor quality from electron behavior. But once you clump enough electrons, all of a sudden, you get superconductivity. And I think that's an interesting feature, because it was the experimental physicists who found it. The theoretical physicists said it wasn't possible. So that interaction between theory and practice to discover emergence is a key aspect of working with complexity.
Jim: Indeed. In fact, regular listeners know that emergence is one of the main lenses we use here on The Jim Rutt Show to look at things, and I like to point people to Harold Morowitz's book, The Emergence of Everything, as an interesting, if somewhat arbitrary, starting point. It divides the universe up into 27 or 28 emergences, and it's a good introduction to the topic for those who would like to learn more. One final topic before we actually drop into the booklet is the idea of constraints. Why is that so important to your lens?
Dave: I think this is partly derivative of Alicia's work as well. She makes an important distinction between enabling constraints and governing constraints. An ordered system generally is governed, it's contained, whereas a complex system is connected. And this is one of the big differences between complexity thinking and systems thinking, by the way: most systems thinkers define systems by boundaries. But from a complexity point of view, where everything is connected and connectivity is what matters, you don't necessarily have boundaries within the system. And we've now extended the whole concept of constraints. We have a whole typology of constraints now that we use in conjunction with constructor theory, which we'll probably come to later.
And constraints can be mapped and they can be changed. So you can’t control the output of a complex system. You can’t say, “I want to be here. This is how I get there,” you have to start off by describing where you are and where you can go next. I sometimes call this the Frozen 2 strategy. I don’t know whether you’ve seen Frozen 2 yet.
Jim: No.
Dave: It’s a great complexity movie, right?
Jim: My granddaughter likes Frozen, the first one.
Dave: Well, Frozen 2 is better than Frozen 1. I think it’s a great complexity movie in the middle of it-
Jim: All right, I’ll look it up.
Dave: The real heroine of the movie, the younger sister without the magic, sings this beautiful song, subsequently made famous by a Ukrainian refugee: "All I can do is do the next right thing." Now that's what Stuart Kauffman calls the adjacent possible. So in a complex system, you need to map where you are and map where you can go next. And what we say is you start journeys with a sense of direction; you don't have goals, and that's really important. And the more you know about which constraints are in play, the more you can manage those constraints, the more you can influence emergence. I sometimes use the metaphor of a series of magnets, some of which I control, some of which are controlled by people I'm aware of, some of which are just changing arbitrarily.
And in the middle of those magnets, there are cast iron discs, some of which are different weights, different densities, different connectivity. If all of the magnets keep the same polarity and the same strength, I can predict what will happen to the cast iron discs. If I change one magnet, I can predict it. If I change two magnets, I can predict it. Three magnets, I can no longer predict it. Which actually takes us back to the origins of complexity theory and the three-body problem.
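The three-magnet threshold Dave describes is the same phenomenon as sensitive dependence on initial conditions in chaotic dynamics. A minimal sketch, my illustration rather than anything from the field guide, uses the logistic map (a standard toy chaotic system) to show how an imperceptible difference in starting conditions destroys prediction within a few dozen steps:

```python
# Sensitive dependence on initial conditions: two trajectories of the
# chaotic logistic map (r = 4) that start only 1e-10 apart.
# Illustrative only; it is the same loss of predictability Dave
# describes once a third magnet changes.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # nearly identical initial conditions
diffs = []
for _ in range(50):
    a, b = logistic(a), logistic(b)
    diffs.append(abs(a - b))

print(f"initial gap: 1e-10, largest gap over 50 steps: {max(diffs):.3f}")
```

After roughly 30 iterations the gap is of the same order as the whole state space, so the two "almost identical" systems are no longer predictive of each other at all.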
Jim: Exactly. The three-body problem is not solvable, right? Something as simple-
Dave: We call those modulators. This is a key aspect of the field guide: if I can control enough of those magnets and I've got real-time feedback, I can influence what happens to the system. If I have delays in feedback or I don't control enough modulators, then I'm going to fail. And constraints and constructors are the two main types of modulators in play. So mapping those becomes the critical first thing to do. Don't talk about where we'd like to be or what the universe should look like. "Where are we now? What are the constraints? How many do we control? How do we get real-time feedback loops?" Those are the critical practical questions you have to ask.
Jim: Yup. It raises another question for me, now I think about it. When I was reading the booklet, one thing that I don't recall being in it, though I won't say decisively it wasn't, is the sense from many people who work in complexity that what we think of as a complex system is in general open. In real-world examples, life is a local reversal of the second law of thermodynamics, but it has to maintain the second law at a broader scale, so there are semi-permeable membranes between life and not-life. And of course, that's where energy and material come in and waste products go out. You can argue the same is true of a company or even an economy. So the idea of complex systems as being open, or at least semi-permeable to an outside, how does that fit into your model and your thinking?
Dave: Yeah, it does, and we talk a lot about that. To be honest, I can't remember whether it's in the book or not. I regard it as one of those self-evident things: a complex system is open, so at the system level it's not subject to the second law of thermodynamics. The overall system of systems might be. And I think that's key, particularly in human systems, and I think there's some evidence of short-term teleological causation in human systems: we're able to generate energy in ways we haven't thought of yet in terms of the way the system develops. And that's my point about systems thinking. Systems thinking deals with systems with boundaries, by definition closed. Complexity deals with systems which are multiple connections, by definition open. And that's an important distinction.
Jim: Well, another interesting thought just came from your response, which is short-term teleology, right?
Dave: Yeah.
Jim: How would you compare, contrast or discuss the similarity between short-term teleology and the complex system concept of top-down causality?
Dave: Well, I think, yeah, that's where it gets curious, and I'm still playing with these ideas, all right? This comes out of new materialism, which is one of the most interesting developments in philosophy at the moment. And it links in with our work on narrative. We've been working with mass generation of narrative now for 15, 20 years, and that's bringing in Deleuzian concepts of assemblage. So I've often said an assemblage is a strange attractor as a trope. And that links in with DeLanda's work and the new materialists, who say things like: narrative actually has material reality in human systems. It's not some abstraction. And the way an assemblage forms is, once it forms, it creates an attractor well, a strange attractor, and you can't escape it.
Jim: Well, you can escape it, but it requires a shock that breaks the parameters of the system.
Dave: Yeah, that's one way. The other way is what Deleuze and Guattari call lines of flight. You have to find a weakness in the system which allows you to escape the attractor mechanism. And that's some of our practical work. If we're mapping culture in a company, we're effectively mapping the assemblage structures which determine how people see. And if you want to change the culture, you can't just say, "We want a new culture." You've got to actually shift the current one in the right direction. And if you want a dramatic change, you need to find a line of flight. You need to find a leverage point which will produce dramatic change within it. So I think top-down causality in human complex systems, if it's going to be effective, has to be focused on changing the three things you can actually manage in a complex system, which are the boundary conditions, the catalysts for attractors and the allocation of energy, using energy as a catchall.
Those are the only three things you can actually manage, which is why complexity is a lot easier as a way of understanding the world than business process reengineering or the whole of scientific management, because you manage what you can manage, right?
Jim: Yeah. It's interesting. In my business career, I realized looking backward that I was an intuitive complexitarian before I had any idea of the concepts, probably because I was just too lazy and too bored by the idea of business process engineering. Remember, some guy named Peters was peddling all this shit back in the '80s and '90s and kept saying, "We ought to be applying this stuff." I remember reading one of his books and going, "What a crock of horse shit this is," and literally threw it.
Dave: It was like Nonaka's book in knowledge management. The first time I got it, I read the opening chapter, which basically said, "The East is about non-categorization and the West is about categorization." And the first chapter has a rigid two-by-two categorization matrix. So I threw it in the bin, and then it launched a whole movement, so I had to buy another copy a year later, which taught me a lesson. You're right. And a lot of people keep blaming Frederick Taylor and scientific management for stuff, right? When actually what they're really doing is blaming the popular forms of systems thinking, which manifested themselves in BPR, the learning organization and everything which came afterwards.
Well, I got into a lot of trouble with Peter Drucker for criticizing Taylor, and I still remember it. I got the "friends of mine knew Frederick Taylor" speech, like that famous vice presidential debate, and having been picked up from a pile of humiliation on the floor, he decided I was redeemable, took me out for lunch, and then we talked together after that. But one of the things we both said is that scientific management, if you go back to Taylor, respects human judgment, and applied complexity is bringing human judgment back into the domain big time. But what we've had from the '80s until the current day, with BPR and everything which followed, particularly what Gary Klein calls Six Sigma-
Jim: Yup.
Dave: Yeah, it’s a lovely name, right?
Jim: No, that’s great. Yeah.
Dave: Is an attempt to remove humans from the equation completely, into automated processes. And the trouble with all of that, sorry to get into statistics, is it assumes everything operates in the center of a Gaussian distribution.
Jim: Exactly.
Dave: [inaudible 00:16:29] tail of a Pareto distribution. And if you're in the tail of a Pareto distribution, you can't determine outcomes in advance. You've got to respond to the present.
Jim: Exactly. You hit it right on the nose. I still recall, during the 2008 financial crisis, some idiot CEO of a major financial corporation came on and said, "We couldn't be expected to plan for this. It was a 16 sigma event." And I said to myself, "Now there is a guy who should not be allowed to be the CEO of a major financial corporation." Because if you apply a fat-tail distribution and the few data points we have, the 2008 financial crisis was approximately a once-in-a-hundred-year event.
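Jim's point can be made concrete with a back-of-the-envelope comparison. This is my own sketch using textbook distributions, not anything from the field guide: an event sixteen standard deviations out is effectively impossible under a Gaussian, while a comparably extreme observation under a fat-tailed Pareto distribution is routine.

```python
# A "16 sigma" event under a Gaussian vs. an equally extreme value under
# a fat-tailed Pareto distribution. Illustrative parameters only.
import math

def gaussian_tail(z):
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto distribution with shape alpha and scale x_min."""
    return (x_min / x) ** alpha

g = gaussian_tail(16.0)   # astronomically small: the event "cannot" happen
p = pareto_tail(16.0)     # under one percent: expect it every few hundred draws
print(f"Gaussian P(Z > 16) = {g:.1e}")
print(f"Pareto   P(X > 16) = {p:.1e}")
```

The Gaussian tail probability here is smaller than 1e-50, which is why a CEO quoting "16 sigma" is implicitly claiming the event should never happen in the life of the universe; the Pareto tail with shape 2 puts a comparably extreme observation at roughly 0.4%, entirely consistent with a once-in-a-hundred-year crisis.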
Dave: Yeah, but that's always the wrong way of saying things statistically. You should never call something a one-in-a-hundred-year event. But it's interesting, because one of our big current projects is looking at distributed decision making. So not delegated, but distributed. You distribute decision making into combinations of three roles, not to people. And one of the roles can actually be an avatar, so nobody knows who it is. Now that actually allows us to take out about 40 or 50% of the admin cost of traditional bureaucracy, but it increases transparency and it makes for smaller decisions at the front. Now that's a fat-tail method. It says you can't work out in advance what people should do, so you have to create a process with sufficient diversity which can work it out at the time. And that's one of these big switches in thinking which complexity brings us to.
Jim: Yup. One last thing on Taylorism before we move on because I’m with you, I actually respect the work of Taylor and find it interesting.
Dave: He humanized the workforce, for God's sake. It was pretty crap before he came along.
Jim: Yeah. And it ended up with Ford paying five times the rate that they were typically paying for work. I like to say that Taylorism is to organizational thinking what physics is to complexity thinking. It's really useful to have as a … You need to understand Taylorism before you can start thinking about how you want to build dynamic organizations from a complexity lens. Because you can't ignore physics and you can't ignore Taylorism. They're both useful reductions, or as I sometimes say in my own analogy-
Dave: Physics is a bit broader than that, but I agree with the point, right? Neoclassical physics, yes, I agree.
Jim: Yeah. The point is that when we're studying the dance, we want to know both about the capacity of the dancer and the design of the overall dance, so the two are both important. Anyway, this has been a fascinating little intro. Let's get into the book a little bit. The title of the book includes In Times of Crisis. So when you were putting this together, what did you intend to mean by the word crisis?
Dave: I was working with the EU anyway. They'd adopted Cynefin, effectively, as a standard model for understanding complexity in government. And we do a lot of work in government, particularly distributed decision making, as I said, and engagement. So we were going through a design process and they were producing material, and then COVID hit, and COVID changed everything. COVID switched our market from early adopter to early majority. And so we repurposed the funding and we said, "Let's write a book about this based on complexity principles": what should you do in a crisis? And then when we finished it, we realized that if you dropped off the first stage, it was actually a book about how you manage complexity anyway.
So it isn't just about a crisis. It's actually, more generically, about how you manage a complex adaptive system from an organizational perspective, and it's a mixture of theory and practice. So it originated in the context of COVID, but it's still highly relevant, because, being 69 as of a week ago, I'm going to see at least one more plague in my lifetime, if not two. And we're not getting ready for those things. The next one's probably going to be bacterial, which is even more scary.
Jim: Or fungal.
Dave: Yeah. Well, you’ve been watching the wrong film.
Jim: I know, I know. I have talked to some people. They say, “Probably not that scenario,” but there are some fungal-
Dave: We're actually getting early signals of some strange bacterial deaths coming out of the [inaudible 00:20:46] area at the moment. But the point is, and this is the point we made about fat tails earlier, if you've got a system where you can't anticipate the future, and after COVID nobody disputed that anymore, they finally realized what you and I and other people had been saying, you have to build a system which can handle what I once called in the Pentagon, and shouldn't have, "unknowable unknowns." The things you can't know until they actually happen.
Jim: So you think Rumsfeld borrowed that from you?
Dave: I did a presentation which did known, unknown, unknowable on two dimensions with Poindexter in the Pentagon. And three weeks later, he came out with a speech.
Jim: Oh, good-
Dave: It might be me, or it might just be the Johari Window which generated it, but if he did get it from me, he forgot the unknowable element.
Jim: Interesting.
Dave: So actually our extremes were unimaginable unknowables.
Jim: Unimaginable unknowables.
Dave: Unknowables. I've written a blog post on that, because that's actually what we're increasingly facing.
Jim: Yup. And it is interesting that we will certainly be facing more pandemics, and unfortunately an awful lot of the civilization-level learning from COVID will turn out to be horrendously and dangerously wrong.
Dave: We could have the first major heat deaths at scale this year as well, and we're not ready for that either.
Jim: It will happen. Within the next 10 years, it's pretty much guaranteed to happen. Whether it will happen this year or not is hard to say. When the system flips from La Niña to El Niño is probably when there's a good chance we'll see the first massive heat deaths. It could be this year, could be next year.
Dave: So one of the things we've been looking at, and I argued this eight years before lockdown, because one of the things we've actually done now, and we've done it in Scandinavia, Colombia, Wales, India and Australia, is use children as ethnographers based on local school activity. So children going out every couple of weeks, gathering stories from their community, which by the way ticks lots of boxes in the school curriculum, unless you happen to be in Florida, at which point there isn't a school curriculum anymore, right? But in any civilized country, you've got that sort of balance. And what that allows us to do is to measure attitudes at a school level.
Now what I said is that if we have a major pandemic, we need to have that data because we can’t afford to lockdown at a country level. So you have to have a way in which you can measure in near real time the attitudes of people to what’s going on. And this is one of the key things about anthro-complexity as a discipline. We think attitudes are very malleable, they change very quickly, they’re highly volatile, but they’re a key indicator because they’re a lead indicator, not a lag indicator. And in uncertainty, you need lead indicators as fast as you can get them.
Jim: Yeah, indeed. Actually, I'm going to push back on that one, that you can't afford a lockdown at the country level. I think this is one of the false learnings from COVID-19, and it has to do with its particular attributes: it wasn't deadly enough, with a death rate per exposure on the order of 0.5%, maybe, something like that. And we probably made the wrong call in trying to do nationwide lockdowns for something at that level of danger, particularly with most of the statistical risk being at the older end of the tail, not prime-age workers. Now suppose it had a 10 or 15% fatality rate centered, like the flu epidemic of 1918, on 40-year-olds, in which case a nationwide lockdown considerably more Draconian than COVID-19's might be the right intervention.
Dave: I don’t disagree with you, Jim, but we also … COVID, we discovered things as we went along. For example, it particularly likes Neandertal genes.
Jim: Which genes?
Dave: Neandertal genes.
Jim: Yup, yup, yup, so that’s why it hit Sub-Saharan Africa much less severely.
Dave: Yeah, and my genetic heritage, because I've had it measured, is 50% pure Celt and 50% pure Viking. So I've got two revenge cultures in my blood, right? But there's no Neandertal in any of them. Whereas my wife, and therefore my daughter, is actually [inaudible 00:25:04] in origin, so they actually have Neandertal genes. They've had it several times. I've never had it, or if I had it, I didn't [inaudible 00:25:10].
Jim: Yup, and I'm at about the 90th percentile for Neandertal genes, because I'm a highly variegated American mutt.
Dave: I was disappointed. I thought I'd be more exotic than that, but I wasn't, right? My mother was born [inaudible 00:25:21], so I was expecting something fun. But I think what you're starting to say is you need to have data about how humans feel about things. I think this is one of the differences we've got with Santa Fe as well. We don't see humans as acting like agents in the way that you see with termites. Human attitudes are fascinating. So we talk about assemblage, affordance and agency, for example, rather than mindset.
Yeah, and if we can measure the assemblage structures, then we can measure how different parts of the population feel about different things at different times. And that’s key data in terms of managing something like a pandemic.
Jim: Well again, just going to push back a little bit here, it’s perfectly possible to build agent-based systems that do adapt and do have something like attitudes that are dynamic and emergent from the mean space.
Dave: I think the key phrase there is "something like." We actually use the human agents themselves. And also, by the way, we use abstraction. Abstraction comes before language in human evolution. If you want to understand humans, you've got to work through semiotics more than text.
Jim: Yeah, I would put a question mark on that one. I think the order of origin of language and semiotics, and possibly mentalese, is still unsettled. So I'd be a little careful about saying-
Dave: Fairly settled. We do know it from the fossil record. Look at material engagement theory: human beings respond to artifacts in the world in quite fast ways, so there's a body of stuff around that.
Jim: Nonetheless, there's still a lot of controversy, particularly about concepts and language, but we can move on; that's not our topic here. So let's get down to the next step. Let's say a crisis appears, and let's use the next pandemic, one which is statistically quite distinct from COVID-19. Well, maybe: is another pandemic a good example? Because we actually have enough of a template here, but unfortunately, it's possibly the wrong topic. What else would be a good example to start working through?
Dave: A bacterial plague will be different from a viral plague. Yeah.
Jim: Yeah, right. Let’s do that. Yeah, let’s take a nasty bacterial plague that is statistically quite distinct in terms of its age distribution and its fatality rate and everything.
Dave: Just to depress you on this: before COVID, I was working with one of the big Ebola management teams in the States, and I was told I couldn't talk about evolution because it was a controversial theory. And Ebola is going through the parasite-to-symbiont evolutionary phase, which means it's learning to survive, so it's becoming more infectious, all right? And if we look at that, what we talk about in the field guide is that there are three fundamental things you just need to do, and do now, all right? So one is you need to build your informal networks. An informal network is what human beings use to make decisions in a crisis. Yeah, not formal systems, because there's …
And the example I normally give is the difference between Singapore and the UK. In Singapore, everybody does national service. They all spend time in the armed services, so they have dense networks across social classes and government departments. In the UK, everybody goes to the same three private schools and the same two universities, so the networks are quite perverted, right? So one of the big things we talk about in the field guide is very rapidly building informal networks across silos; there's a method there called entangled trios, because you're focused there on how knowledge will flow when you get a crisis. You don't know what knowledge you'll need, but you clear the channels for rapid deployment.
And in the 100+ interviews I did around the field guide, every senior executive I met basically said they fell back on people they trusted. So expanding that network is key. The second thing is mapping what you know at the right level of granularity. This is a key principle of anthro-complexity. Complexity scales by decomposition and recombination, not by aggregation or imitation. And if you can store your knowledge at the right level of granularity, you can repurpose it very quickly for something novel. You don't get a … It's called exaptation in evolutionary biology.
Jim: Yeah, we talk about this a lot on the show, exaptation as a major …
Dave: [inaudible 00:29:51] principle.
Jim: … driver of evolution, in at least the biological sense, and also-
Dave: Finally, [inaudible 00:29:57] technology evolution as well.
Jim: And as I was about to say, Brian Arthur and his work on technology, who we have had on the show by the way, is a great example of how exaptation accounts for most, though not quite all, of technological innovation as well.
Dave: And [inaudible 00:30:16] is one of the best people in the world on this, and he knew Brian as well. There have been loads of debates about this. So the principle is you need to map what you know at the right level of granularity. And the classic case is the radar machine, where somebody notices a chocolate bar melted in their pocket and we get microwave ovens. The right level of granularity there is not the radar machine. It's the magnetron. So we've done a lot of work on that and that's another-
Jim: Really? Let's pause here, because this actually touches another very interesting question: the right level of granularity. What does that mean exactly? And I'm going to challenge you, because you mentioned his name in our pregame conversation: John Vervaeke, who's been on the show a few times, has a concept called relevance realization. You don't have to use his word, but what does the correct level of granularity actually mean in practice? This is one of these things that's hugely important, but hard to understand.
Dave: Exactly, right? Granularity matters, right?
Jim: Hugely, but how do you know what’s the right level of granularity?
Dave: Well, it's interesting. Microbiology now, for example, is challenging the concept of taxonomy, because it's going down to a finer-grained level and we can see more commonality. The way we do it, for example, when we're doing constraint mapping, is we map constraints onto a grid of energy cost of change against time to change. And the simple principle is, if you can't agree on it, break it down until you can agree. That actually works really well as a heuristic. So keep breaking it down until you agree where it's placed; then you're at the right level of granularity. And human beings are actually quite good at this. We did a big project with one of the lighting companies where we got engineers to effectively treat all of their technologies like we treated customer stories.
We indexed them at a level of abstraction, using semiotics. We smashed the databases together. We got five clusters out of that. We got two new businesses, including the reuse of a technology originally designed for urine-saturated staircases in a football stadium, which got repurposed into a rather jazzy plastic rock which changes colors in your swimming pool, right? But the key thing is, human beings don’t have a problem if you ask them the right questions. It’s getting the granularity right. The trouble is, when we do analysis, and this is [inaudible 00:32:38] problem, we like things to fit into categories. We talk about typology, not taxonomy, a lot and most people don’t know the distinction, but basically, a taxonomy has major problems with porous boundary conditions. A typology looks at things from different perspectives.
So granularity isn’t too difficult to get right in practice. When we are mapping attitudes, it’s the stories people tell to a kid on the street. That’s the right level of granularity. What we don’t want is somebody else interpreting that, or somebody aggregating it, until we need to put it together in a novel combination.
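[Editor’s note: Dave’s “break it down until you can agree” heuristic can be sketched as a recursive procedure. This is a toy illustration only, not the Cynefin Company’s actual constraint-mapping tooling; the `agree` and `decompose` callbacks and the radar example data are invented for the sketch.]

```python
def map_constraints(item, agree, decompose, placements=None):
    """Recursively break a constraint down until the group can agree
    where it sits on the energy-cost-of-change vs time-to-change grid."""
    if placements is None:
        placements = {}
    position = agree(item)  # a (energy, time) cell, or None if no consensus
    if position is not None:
        placements[item] = position          # right level of granularity reached
    else:
        for part in decompose(item):         # no consensus: break it down further
            map_constraints(part, agree, decompose, placements)
    return placements

# Toy example: the group can only agree on the fine-grained components.
known = {"radar set": None,                  # too coarse, no agreement
         "magnetron": ("high", "slow"),
         "waveguide": ("low", "fast")}
parts = {"radar set": ["magnetron", "waveguide"]}

result = map_constraints("radar set",
                         agree=lambda c: known.get(c),
                         decompose=lambda c: parts.get(c, []))
# the magnetron and waveguide get placed, not the radar set as a whole
```

The stopping condition is social (agreement), not structural, which is why the check is a callback rather than a fixed depth.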
Jim: Interesting. A couple of other things you call out in this book for early in a crisis, and one I think is a very good topic for discussion, is to set Draconian constraints.
Dave: Yeah. I learned this. I was C-level for a lot of my life, all right, before I moved into research after IBM acquired the company I worked for. And the more you get promoted, the fewer decisions you actually get to make and the angrier and angrier the customers you meet. That’s the price of promotion. People think it’s good news. It isn’t, all right? Most of the time, you’re coordinating decision making in a C-level position. The only time you don’t do that is when there’s a crisis, when you have to act very decisively, very quickly to stabilize the situation. And the key thing you do there is you don’t try and solve the problem. You try and stabilize sufficiently that there are more options available to other people to solve the problems.
So a good example in COVID is the New Zealand Prime Minister. She broke the law to lock New Zealand down. She just did it, right? That gave New Zealand far more options going forward than we had in the UK or the US or Sweden, where they waited until there was no alternative, by which time the options had been reduced. And we’re now starting to train leaders in how you make option-increasing decisions rather than decisive decisions. That’s almost become a catchphrase now.
Jim: Yeah, that’s actually very interesting, “What are the decisions which preserve your optionality?” And of course, optionality is one of those things that people don’t see naturally, unless of course you’re trained in finance. If you’re trained in finance, you can actually see why optionality has actual value.
Dave: Yeah, but there’s a more prosaic example, all right? It’s called the dishwasher-stacking problem, which causes more divorces than anything else. So some people just put things in the dishwasher as they find them. Other people think about what else is going to go into the dishwasher during the day and position it accordingly, right?
Jim: Yeah.
Dave: I’d say I’m of the latter category. My wife is of the former category and we almost get divorced over it on a regular basis, but that is called anticipatory thinking. You’re doing small things now which give you more options in the future and reduce the amount of things you’ll have to do in the future. And this is an area I’m talking with a lot of cognitive neuroscientists about at the moment: Is this natural talent? Is it innate? Can it be trained? What’s the combination of things? So I’ll give you another illustration. I go walking with three doctors. My wife thinks it makes me safer. It doesn’t. One doctor would be safer. With three doctors, they’d still be arguing about what was wrong with me, and about who was responsible, later. None of them had walked until they reached their 50s, whereas I’ve been walking in the hills since I was five or six. And one of them said to me the other day, “How the hell did you find that track?” because we like wilderness walking. I said, “Well, I hadn’t thought about it before, but I’ve been looking at that slope for the last five hours and I’ve been mentally registering the vegetation patterns and I’ve been creating this model, so that, when we get there, I can make economical decisions.”
Now craftsmen do the same thing if you look at them and I think this is a vastly understudied area, is, “How do you identify and train people in anticipatory thinking and anticipatory action?” And the process reengineering revolution damaged that hugely because it moved us over to a functional model of human decision making rather than the judgment model of human decision making.
Jim: That’s very interesting. So in COVID terms, closing down New Zealand probably trumps closing flights from China, as an example. Those were some of the moves that aided optionality. What were some counterexamples in COVID?
Dave: Well, the UK and Sweden are interesting because both of them tried to pretend COVID didn’t exist.
Jim: Yeah, particularly Sweden. They were quite emphatic all the way through, right?
Dave: Yeah. Well, so was Boris, all right? This was Britain trying to outcompete you guys with Trump, all right? So he went around shaking hands with people with COVID in hospital and got surprised when he ended up with it, all right? So they didn’t want a lockdown. Everything we did in Britain was done two weeks late. And interestingly, the same thing happened in Sweden, but Swedish culture is more naturally restrictive than British culture. And again, that comes back to what I said before: you need to know what your culture is because that will indicate what sort of measures you can actually take in terms of the way it works. And what are we going to do if we actually have to put in what the Chinese did in Shanghai, which was excessive? What happens if we get something with the fatality rate you talked about, which requires us to do a Shanghai solution in Republican states in the US? What’s going to happen?
Jim: Yeah, that’s why I’m so afraid of the wrong lessons we have learned. I live in a very red part of a purple state. Very red. My electoral district voted 75% for Trump in 2020 after seeing that ass clown in action for four years, which I think-
Dave: I think you’d create the Gilead line. I was with Zach and several other people, right? Common friends, on [inaudible 00:38:33] Jordan and those guys, seven years ago. And the best scenario we came up with on the future of the US wasn’t civil war, it was the Gilead scenario. Red states become redder, blue states become bluer because people move, and then you get Gilead.
Jim: Yeah, that’s-
Dave: All we have to do is to create a wall. It’s just the wall is in a different place, all right?
Jim: Interesting. All right, let’s move on here. We could talk about this one all day, but we do want to cover some ground here. Something that is pervasive in this document, and I know it’s related to your work with SenseMaker, is that you advocate starting comprehensive journaling from the beginning. Talk about that. Thinking as a business guy, I don’t think I ever gave the order “comprehensive journaling.” What is it? Why is it so important?
Dave: We wouldn’t give that order. We call it gemba because everybody likes gemba, all right? So it’s called gemba. Yeah, and it picks up off that Japanese manufacturing concept. And where we’ve put it in place … the first time we ever did it was with the US Army in Afghanistan. So that’s when I was teaching just war theory and other things at West Point. And we basically said, “If you keep your field notes up to date by journaling, you don’t have to write a patrol report.” That gave us 100% compliance and we got much better data in real time from multiple agents. We could combine that with machine data. We could improve IED detection and things like that.
Now we’re doing that in big companies. So we’re basically getting rid of reporting, and we may be about to do this in clinical trials, in favor of journaling. So you give people time back by getting them to do something continuously. Now that gives you a better mechanism, but it also gives you a network you can ask questions of in real time. And interestingly, just as a sideline on one thing we just put into it: I was at a conference last year, I won’t name which conference, where one of the male keynotes actually slapped a female keynote on the bum as she walked off the stage and said, “Well done, lass, I’ll see you in the bar later tonight.” And we got him on one side and said, “You can’t do that,” and he said, “Why not? They enjoy it.” And at that point, we gave up, right?
Jim: Are you sure you hadn’t accidentally gone through a time tunnel back to 1977?
Dave: Well, it was like the 1950s, all right? But either way, I talked to the woman. She said, “Look, this happens all the time. We just don’t report it because you suffer secondary abuse.” So one of the things we’re now doing, for example, if you’ve got the gemba system: if something happens which is a microaggression, or a worry about fraud, where you’re not going to report it because of what will happen, you report it as a microaggression in SenseMaker and you index it. It’s then immediately encrypted, so nobody can ever see it again. And we look for a pattern in multiple reports and then we trigger an anticipatory alert to the company. Now again, that’s complexity principles. Then what we want to do is to generate the right training datasets from human interaction, then we can use AI to trigger alerts. So journaling is actually a way of reducing the time burden on people and giving better data to companies. Hence, [inaudible 00:41:42].
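[Editor’s note: the mechanism Dave describes, individual reports made unreadable at capture while indexed metadata accumulates until a pattern triggers an alert, can be sketched roughly as below. This is a toy illustration, not SenseMaker itself; the `THRESHOLD` value, the class and its fields are invented, and the SHA-256 digest is a stand-in for real encryption (hashing here just makes the narrative unrecoverable).]

```python
import hashlib
from collections import Counter

THRESHOLD = 3  # hypothetical: distinct reports needed before alerting

class ReportStore:
    """Toy sketch of anonymous pattern detection: the narrative text is
    never stored readably, only a digest plus the self-indexed metadata,
    and an alert fires when the same pattern recurs."""

    def __init__(self):
        self.counts = Counter()   # (category, unit) -> number of reports
        self.digests = set()      # digests of narratives already seen

    def submit(self, narrative, category, unit):
        digest = hashlib.sha256(narrative.encode()).hexdigest()
        if digest in self.digests:       # ignore duplicate submissions
            return None
        self.digests.add(digest)
        key = (category, unit)
        self.counts[key] += 1
        if self.counts[key] >= THRESHOLD:
            return f"alert: {self.counts[key]} reports of {category} in {unit}"
        return None

store = ReportStore()
store.submit("comment about my appearance in standup", "microaggression", "plant A")
store.submit("same pattern again from the same manager", "microaggression", "plant A")
alert = store.submit("and again after the review meeting", "microaggression", "plant A")
# alert is now a string noting 3 reports of this pattern at plant A
```

The point of the design is that no single report is ever exposed; only the aggregate crossing a threshold becomes visible to the organization.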
Jim: Yeah. Those of us who have done turnarounds and things of that sort well know how much time is wasted in zombie reporting. I remember I came in as turnaround COO one time for a company. It was one of those classic companies that had, I think, 300 … These were the days of fanfold printed reports.
Dave: Yeah, I remember that.
Jim: And so I gave the orders to the IT department, “Just stop making half of them and see how many people complain.” So we went from 300 to 150 and I think I got one complaint, so we reinstated one of the 150.
Dave: We’re now deployed in one very large public transport infrastructure project at a micro project management level. One of the reasons for that is that big projects always go wrong. So you need to get weak signals early and you don’t get that through traditional reporting. So again, all of these things, this is complexity … It’s the granularity principle we talked about earlier. You want lots and lots of small things from lots and lots of different perspectives continuously coming into the system, then you can find out what the hell you should do. The more coarse-grained you make it, the less you can do that.
Jim: And of course, particularly in a nonstationary situation like a crisis. Yeah, if you’re running a nail factory in 1955, maybe static reporting works. But if you are in, let’s say, the music business in 1995, that’s a good way to die, right?
Dave: Yeah, and actually, I got some of these ideas when I was building the decision support systems for EMI back then, which was one of my early projects along with Guinness, all right? But I think the other thing you’re trying to do here is, by creating a network which has utility for ordinary purpose, you can then activate it for extraordinary need.
Jim: Yeah, and I’m thinking out loud here, one of the beauties of journaling is you’ll see near real-time new categories that didn’t exist before.
Dave: So you also have lessons learning, not lessons learned. And we know that lessons learned is a disaster because people remember things differently even an hour after they’ve happened. So the closer you can get to the point of learning, the better.
Jim: Yeah, I like that a lot. And now this also touches something I’ve become obsessed with over the last couple of months, which are these new large language models. I’ve been writing some code, been doing some cool things.
Dave: Yeah, I’ve seen your posts on that.
Jim: And even the most interesting stuff I haven’t even posted yet, but it seems to me that LLMs and embedding vector spaces, etcetera, ought to be really useful tools for doing some automated aggregation, summarization and automatic clustering. Things that would be difficult to do other than via intuition could be automated now, at least as side channels around a journaling infrastructure. Have you guys started playing with that yet?
Dave: Yeah. And one of the things we’re doing at the moment on methods, in the Agile community and elsewhere, is to break all methods down into their lowest coherent unit, so that they can be combined and recombined in different ways. So we’re creating a multivendor, multimethod approach rather than a single-framework approach. Now we’re looking at [inaudible 00:44:50] as actually a way to assemble those components, because the system can know all of the components, which a human can’t remember, and can assemble them. So I think there’s some powerful stuff on this, but I did actually sign the hold-for-six-months document.
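[Editor’s note: the automated clustering of journal entries mentioned above can be sketched minimally as below. A toy bag-of-words vector stands in for a real LLM embedding, and a greedy single-pass grouping stands in for a proper clustering algorithm; the journal entries and the 0.5 threshold are invented for the sketch.]

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an LLM embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(entries, threshold=0.5):
    """Greedy single-pass clustering: each journal entry joins the first
    cluster whose seed it resembles closely enough, else starts a new one."""
    clusters = []  # list of (seed_vector, member_texts)
    for text in entries:
        vec = embed(text)
        for seed, members in clusters:
            if cosine(vec, seed) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]

journal = ["pump seal leaking on rig three",
           "rig three pump seal still leaking",
           "new supplier delayed the cable delivery"]
groups = cluster(journal)
# the two pump-seal entries cluster together; the supplier note stands alone
```

In practice the `embed` function would call an embedding model and the grouping would use something like k-means or HDBSCAN, but the pipeline shape, embed then group then summarize per group, is the same.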
Jim: Yeah, I did too, for a sneaky reason, which is I believe that these LLMs will actually empower the periphery. And I think empowering the periphery is good even though it will cost-
Dave: Yeah, we have a historical slight disagreement on this one, all right?
Jim: Yeah, I know, I know. So it’s-
Dave: The fact that they’ve written the first malware is significant.
Jim: Yeah. But you can learn how to make nerve gas on the internet, right? You don’t need an LLM to do that.
Dave: Yeah, not quite as fast and the-
Jim: Not quite as fast, not quite as easy. So I signed it because I wanted to give the open source alternatives to the big boys’ models six months to catch up, which will empower the periphery. And I am now quite convinced, after talking to some of the open source people, that any attempt to cage LLMs ain’t going to work, because while the open source alternatives might be six months behind the big boys, by the time you get to GPT 4.5 or equivalent, it won’t matter anymore. And so this will be a periphery-empowering tool. We just have to be willing to deal with that. Nothing that we can do about it.
Dave: Yeah, I wrote a paper for the ITIL community the other day in which I said, “No software engineers should be allowed out of college without a basic training in ethics,” and I still hold to that. Because the trouble is a lot of the people developing these things have got no understanding of the ethical implications of what they’re doing. There were some guys working on the equivalent when I was working in New Zealand. So they had this wonderful avatar; you talk with it, it solves mental health issues. And I said, “Well, what are you going to do when a bad actor uses that to create mental breakdowns in people?” They said, “Oh my God, people wouldn’t do that. Would they?” I said, “Well, I’ve just checked, they are using your technology, for God’s sake, all right?”
Jim: Go look at TikTok, right? It may not actually be its intent, but it’s certainly in fact doing it. Yeah.
Dave: So I think … Yeah, we put all our methods into open source, all right? So everything in the field guide is open source, so I’m part of that movement, but I don’t think … We haven’t got the level of ethical awareness in society to handle what the technology is now doing and that’s what scares the hell out of me.
Jim: Yeah, and it’s true, as we often say in the Game B world. I think it was David Sloan Wilson who said, “We have medieval institutions and the power of gods and the wisdom of pygmies,” or something. It’s a problem, it’s a problem, but we are not going to solve that problem today. So journaling, interesting. And I think with today’s-
Dave: Also, by the way, the other thing journaling does is it saves employees a huge amount of time. It allows them to be more engaged. So we’re looking at that in clinical environments. And once we’ve got journaling and we’ve got these entangled trios, which is how we build informal networks of roles, we can allow people to make decisions very quickly in the field without reference up, but with better auditability. So I think this concept of distributing into finely grained networks is key in terms of resilience.
Jim: I would certainly agree. Let’s now move on to what feels more like management. This is management in my view and yours too, but in a more traditional sense, which is what you called creating specialized crews in the face of a crisis. Talk about that a little bit.
Dave: It is getting people highly focused. So we talk, for example, about a continuity crew. If you try and carry on managing business as usual, you are going to get confused. You are the senior exec; you’re going to have to manage this bloody crisis. So you need to have a deputy you can hand over business as usual to. And we talk about a journaling crew, we talk about a devil’s advocate-type crew. So you need specialist functions which are collective, not individual. And all the evidence, if you look at crews in military environments, is that specialized crews, which are based on roles and role interaction, actually have more intelligence than the sum of the individuals. It’s another emergent property.
So we are using that and saying, “You’ve got a crisis, mate. You need to tick the boxes on these crews. You need to have people focused on this. You need to spend as little time as possible making decisions, and more on seeing the results of all of these networks and all of these interactions.”
Jim: Yeah. I will say, in that section, well, maybe it was a little later, there was one thing where I said, “Hmm, I wonder why he did that.” You talked about sizes of these crews and you chose three out of the four of what seem to me the natural scales. You had five, 15 and 150, and you missed, to my mind, the most important scale of all, which is 50.
Dave: Well, yeah. And I’ve talked about that with Robin, all right?
Jim: Yeah.
Dave: I’ve moved on a lot since I actually wrote that handbook. I’ve been looking at biological scaffolding and that really talks about two groups. And one of them is the 50, all right?
Jim: Yeah. Let me give the reason why I think 50 is so important. When I’m building companies, I focus on getting to 50, or business units of 50, because at 15, which is the third Dunbar number, that’s the group where you can still run the company around the lunch table. You don’t need any formal structure at all. And also the mathematics of the combinatorics of consensus still work at 15, barely. At 50, you’ve got to have the first structure. And it also tends to be the size at which you can be at least partially self-sustaining. Think of the platoon in the military, which is about 50. And so that’s why I think I was [inaudible 00:50:33].
Dave: Yeah, I’d say we went with the standard ones, all right? But in the writing I’m currently doing, I’m really talking about two numbers. One is between three and five and the other is less than 50. And what we’re really saying is, if you look at evolutionary history, we evolved to make decisions in extended families and clans.
Jim: Yeah, which are about 50, not 150.
Dave: Not 150, but you always knock the 150 down because that’s Dunbar’s work on acquaintance. And by the way, he’s got some really interesting stuff now on 5 million as the maximum size of society you can get to before it loses cultural coherence. So he’s got some mathematics behind that.
Jim: Yeah, the SNP ought to take that one to heart.
Dave: Well, but the key thing in terms of … I’m a member of the Welsh Nationalist Party, and we’re at 3 million, so we’re coherent, all right?
Jim: Yeah.
Dave: We’re defined by not being English. That’s the most important thing if you’re a Celt. But the interesting thing is we evolved to compromise in groups. There are never more than five active decisionmakers in an extended family. So we actually use groups of three for decision making where you have to bring silos together. If you have five or six people from each of several silos in a room, they fall back on their silo-based mentality. If you take one person from each silo in a group of three or five, they will compromise. And that’s the reason why the smaller number is the most critical one.
Jim: Yeah, it’s interesting. When I designed businesses, I’m a great believer in three as the management team, not two and not four or even five.
Dave: Actually, asymmetry is important. I would agree with you. The best companies I’ve been in have had no more than three people at the top and they generally have three people each reporting to them, all right?
Jim: Yeah.
Dave: Yeah, going back to Terry Pratchett, if you know him, he said, “If three princes are set on a quest, it’s inevitable the third one will succeed,” all right? So three’s a really important number for humans and we use it a lot. It’s why we talk about entangled trios.
Jim: Yeah, and for a very simplistic reason. It actually was almost a natural experiment: I came in as the third member of a dysfunctional management team once when I was doing a turnaround and everything changed immediately for the better, because with a triangle, if any side gets a communications jam in it, you have an alternate route.
Dave: The way I always explain it is, if you go out for dinner with a stranger, it’s very stressful for both of you. If three strangers go out for dinner, it’s not stressful because there’s always somebody to pick up the slack while the other person observes. So for all of our crews, we recommend five or fewer.
Jim: Yeah, if the work can be done that way. Sometimes it has to be … I think about it as fire teams, squads, platoons and companies.
Dave: We take a lot of military models across. The military have evolved sizes, which are very similar to ones we know happen in nature.
Jim: And it’s a competitive dynamic, right?
Dave: Mm-hmm.
Jim: So it’s not a bad place to look. I’m going to drill in a little bit on the journaling, as it relates to some of the design work we’re doing in the Game B world now. One of the things I have found, and others have found, is that doers and writers-about-doing are amazingly not the same people.
Dave: Never the same people.
Jim: And that embedding … we’re now calling it the documenters’ guild, embedded with the doers. They still have a community of practice themselves as documenters, but they’re actually embedded with doers, at least for a period of time. What do you think about that model?
Dave: Yeah, we’ve done work on this. For example, when I was doing knowledge capture from engineers in the North Sea for one of the big oil companies, I actually used final-year engineering students as journalists. When I work with senior consultants in hospitals, we use junior doctors as journal keepers. And there’s two reasons for that. One is you won’t get the really senior people to keep journals. The people who are doing things, who’ve got experience, can’t be bothered with writing it down. People in training are used to writing things down. But also you get knowledge at the right level of abstraction, because if an engineer is talking to an engineering student, they don’t talk in the same way they talk to another engineer, and that’s actually really useful. So-
Jim: Yeah, they have to be able to map essentially.
Dave: Right. So that’s the way we do it. We now, for example, put people in their first six months of service together with people within two years of retirement and people on the senior management fast track, as a trio. Deploy 15 or 16 of those trios against a problem and you get solutions you wouldn’t get otherwise.
Jim: Very interesting, which actually gets us to our next topic. You mentioned the word earlier on, and I must confess I don’t believe I’ve ever heard or seen the word before, which is aporetic. Let’s expand that one out a bit.
Dave: I’m quite proud of this. I’m the first person to ever get Derrida into the boardrooms of American companies. Nobody’s done it before. So it’s Greek in origin and a very common word in Greek; Greek students will say, “I’m having an aporetic moment,” to their teacher. Derrida famously said, “If you know the answer to a question, it’s not a question. It’s a process.” The only valuable questions are ones to which you don’t know the answer until you see things differently. So we’ve developed a whole body of methods, either physical, linguistic or aesthetic, which put people in a position where they can’t go on with what they’ve already done. It’s impossible for them to do that. And that’s called an aporetic moment. And it’s absolutely key in a crisis. You move people into aporia before you move them anywhere else.
Jim: And aporia, if I can play it back to you, is the realization that business as usual ain’t going to cut it.
Dave: Well, mathematical paradox is a good one, like the famous liar’s paradox, “I always lie.”
Jim: “I’m a Cretan and Cretans always lie.”
Dave: Yeah, that’s it. Yeah. I introduced liminality into Cynefin when my daughter and I were looking at Caravaggio’s Seven Works of Mercy, because I saw the liminality of the light in it. So that’s the type of aesthetic aporia. So human beings, if you put them in a position where they can’t see things the way they’ve always seen them, will see things differently.
Jim: Although you and I both know that in business, the resistance to change is gigantic.
Dave: Yes, less than it was, though. I’ll talk about that in a second, but what you can’t do is tell people they should look at things differently, because that doesn’t make a blind bit of difference. That’s a new age fluffy bunny approach to organizational change. It’s all very nice, but it won’t work. One thing COVID did is it triggered … We’re into applied complexity, right? If you look at the maps, that’s where we’ve got dominance in many ways, all right? And we went from having to explain the theory to get sales, to people asking us, “What will you do for us?” which is completely different. In marketing life, that’s where you’ve flipped from early adopters to early majority, and that’s where we are now. And a lot of people have said, I don’t explain the theory anymore in most of the sales calls I have. It’s, “We can do this.”
Jim: It’s interesting. We found the same thing in our Game B work. As we called it, suddenly we had 10x as many ears to hear as we did before.
Dave: COVID was wonderful for those of us working in complexity. It gave people a real-world example of what we’d been telling them was going to happen for years. Almost drove me insane, but-
Jim: Yeah, that’s all right. Yeah, that is interesting. And I would say that I see the connection between aporetic and the idea of liminality, which is very, very important.
Dave: [inaudible 00:58:25] thing you’re doing, and I think this is important in human sensemaking, is you’re using language which is unfamiliar, which forces people to think differently. So for example, when I got the Harvard Business Review article out on Cynefin, which is what made it famous, right?
Jim: Right.
Dave: They kept saying, “We can’t call it Cynefin. We’ve got to call it a decision framework,” and I kept saying, “Well, then it’s not going to be a notable article because everybody will make assumptions.” And then I discovered I had no rights. If you don’t know it: when you sign a contract with Harvard, they have complete rights over content. You have no rights over your own content. So I had to get really stroppy and we ended up with it called Cynefin. So Cynefin allows you to tell the story of what the word means rather than have people assume a meaning, which is what they would otherwise do. That’s a form of aporia. So we talked about exaptation, we talked about the aporetic, we talked about liminality. We’re very careful how many new words we use, but we use them where they’re absolutely critical, because otherwise people will just go into the normal mode of thinking.
Jim: All right. So that then is the natural bridge to the next topic, which, as you again mentioned earlier, is exaptation, something we talk about a fair bit on The Jim Rutt Show.
Dave: Yeah. So I’m part of a group which used to meet before COVID at Lake Garda every year in Mussolini’s former palace. Peter Allen always got his bedroom and I always got the servants’ quarters, so I’m pissed off about that. I feel victimized on that front, all right? And there, we were bringing together lots of different data people. So we had Robin [inaudible 00:59:55]. We had Pierre Carlo, we had Brian [inaudible 00:59:59], looking at exaptation as a key capability. And we’ve really focused on that at scale in terms of our work in knowledge, right?
So the example I mentioned earlier is a good one. We were working with a lighting company and they had the idea that if people bought lights as a garden feature, they’d have a whole new market. Up until that time, people bought lights to light their garden. So what we did there is we basically pulled in 3,000 or 4,000 narratives. And the key thing we’ve always done is get people to interpret at a high level of abstraction where they don’t know what the right answer is, which is a problem with surveys. So nobody knows what the right answer is. That creates a cognitive load. It moves people from what Kahneman called thinking fast to thinking slow. So we generate that.
So we had those stories. We then had the actual technologies indexed. We matched the two databases together at a metadata level. And then we presented the clusters to marketing and said, “Why are these technologies associated with these customer stories?” Now that’s a form of forced exaptation. So the guy who noticed the significance of a chocolate bar melting in his pocket: lots of people had noticed chocolate bars melting in their pockets before, but they just swore [inaudible 01:01:16] their trousers cleaned. He realized it was significant. So the focus in exaptation is associating things through a level of abstraction, not in the concrete.
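[Editor’s note: the “match at the metadata level” step can be sketched as a join on shared abstract tags. This is a toy illustration of the general idea, not SenseMaker’s implementation; the tags and the story/technology examples are invented, loosely echoing the swimming pool and staircase anecdote.]

```python
from collections import defaultdict

def match_on_metadata(stories, technologies):
    """Associate customer stories with technologies through shared abstract
    tags (the metadata level), never through the raw text itself."""
    index = defaultdict(lambda: {"stories": [], "technologies": []})
    for text, tags in stories:
        for tag in tags:
            index[tag]["stories"].append(text)
    for name, tags in technologies:
        for tag in tags:
            index[tag]["technologies"].append(name)
    # keep only tags where both sides meet: candidate exaptations
    return {tag: groups for tag, groups in index.items()
            if groups["stories"] and groups["technologies"]}

stories = [("I want the pool to glow at dusk",
            {"light-as-feature", "colour"})]
technologies = [("UV-reactive coating for stadium staircases",
                 {"colour", "durable-surface"})]

matches = match_on_metadata(stories, technologies)
# "colour" links a pool story to a staircase technology, a pairing
# nobody would have produced by reading either database directly
```

The abstraction lives in the tags: because both databases are indexed against the same abstract descriptors, distant concrete items can land next to each other, which is exactly the shortened distance between ideas Dave describes below.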
Jim: Yeah. And an example: when I was reading this, I said, “Hmm, I don’t think they mentioned it in the document,” but I saw a phase change occur as an exaptation. What had been a very limited-use technology for people with serious immune compromise, telemedicine, suddenly within weeks or days became rolled out at a staggering scale. And to this day, fortunately, I’m able to do some percentage of my old-person medical consultations via telemedicine, which we couldn’t do before for all kinds of bizarre regulatory reasons, no technical reasons. It was basically a bash out of a basin of attraction into a new basin that made a lot more sense for everybody, but it took a shock to get us there.
Dave: And I think, yeah, there’s two things. You mentioned shock before and I agree with you. Shock can change the distance between basins of attraction. Abstraction does the same. So we think the role of abstraction in human systems is that it shortens the gradient, or the distance, between ideas.
Jim: Or it lowers the ridge between two basins, which is another way to think about it.
Dave: Yeah, and we now think that’s one of the main roles of art in human evolution. It allows you to make completely novel and unexpected connections.
Jim: And also ecstasis, psychedelic drugs, religious experiences, etcetera.
Dave: Yeah, I think there’s a mix. We need to be careful about the modern research of psychedelics. People have forgotten what happened with that in the ’70s, all right?
Jim: Yeah, ’60s and ’70s, some good and some bad, right? There was …
Dave: Some very bad, all right?
Jim: … yeah, about 1% of people become psychotic when they engage with psychedelics, which is a high price to pay for the benefits, but there were also some great benefits, the whole opening up of-
Dave: I think your point is you don’t need to do that. You can work at scale across the organization. It’s a matter of how you present the data to people that matters. And that’s where we’ve always focused, is we always start with metadata before you look at the raw data.
Jim: Give me an example of where you’ve been able to use that kind of method to lower the ridge wall between-
Dave: I just gave you an example, the one I just gave. We had 3,000 stories told by customers. We asked questions about light and shade and the way that they interpreted their data. We then matched against the technologies, and we ended up with urine-saturated staircase technology being used for garden ornaments, right? And that general approach, so if we’re looking, for example, at civil change, which we do a lot of work on, getting children to interview adults in their community means that it’s people from the community themselves who ask the questions. We then don’t allow people to look at the stories. They look at the metadata structures, and only when they see a pattern in the metadata do they read the stories, which provide an explanation. And that can get people to make quite radical changes that they wouldn’t otherwise make.
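A minimal sketch of the metadata-first matching Dave describes, where two databases are associated at the level of shared metadata and the clusters, not the raw stories, are shown to people for interpretation. All tags, thresholds, and identifiers here are invented for illustration; they are not from SenseMaker.

```python
# Illustrative sketch: match story metadata against a technology index,
# then surface clusters for humans to interpret.
from collections import defaultdict

stories = [
    {"id": "s1", "tags": {"heat", "packaging", "surprise"}},
    {"id": "s2", "tags": {"heat", "outdoor", "durability"}},
    {"id": "s3", "tags": {"moisture", "coating", "outdoor"}},
]
technologies = [
    {"id": "t1", "tags": {"coating", "moisture", "durability"}},
    {"id": "t2", "tags": {"heat", "packaging"}},
]

def overlap(a, b):
    # Jaccard similarity between two tag sets
    return len(a & b) / len(a | b)

clusters = defaultdict(list)
for tech in technologies:
    for story in stories:
        if overlap(tech["tags"], story["tags"]) >= 0.4:
            clusters[tech["id"]].append(story["id"])

# The clusters are shown to marketing, who supply the interpretation;
# the software only proposes the association.
print(dict(clusters))  # -> {'t1': ['s3'], 't2': ['s1']}
```

The point of the design is that the association happens at a level of abstraction (shared metadata), and the human reader supplies the concrete meaning afterwards.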
Jim: Got it. Got it. All right, let’s move on here. So many interesting topics, so little time to talk about them. Another topic: as we start emerging from a crisis towards something like stability, you emphasized the need to maintain cadence and control. And you make a distinction, though it isn’t entirely clear in the document what you meant by it, between cadence and velocity.
Dave: Yeah, so cadence is rhythm. In a cadence, you can do something, stop, do something, stop, do something, stop. Sorry, apart from mountain walking, my other hobby is I’m a roadie, in cycling terms, and cadence matters. If you can keep a constant cadence on your pedals and use the gears to adjust, you can add velocity more easily with less energy. So that was the principle I was getting across. You need to keep up the pace of this, and the gearing changes when you hit the hill of a crisis, hence the need to continue to use sensor networks, informal networks, distributed knowledge and the whole lessons learning, not lessons learned. So the whole point about cadence is you need to keep up the pace of working at the right level of granularity, not just go back to old ways.
Jim: Got it. And as you know, a key role of leadership in business is building expectations about cadence. I used to say about my companies, as both a promise and a warning to the VCs, that the Rutt approach to business is the Hunter Thompson approach to life: faster and faster until the thrill of speed exceeds the fear of death. And by the way, that is not sustainable for more than about three years. So at the end of three years, you better bring somebody else in to replace me, but they better be prepared to maintain the velocity we achieved, though they probably don’t need to make it go any faster.
Dave: You need to realize I do endurance sports, not sprint sports. So mountain bikers are adrenaline junkies. I can’t stand them. I don’t take the bike out for less than 50 kilometers.
Jim: Yeah, so that’s very different. I am a brawler. I like to play American football. As a kid, I liked to fight.
Dave: American football, you get lots of rest periods. If you come and play rugby, we don’t get the same amount of-
Jim: Oh, yeah. I liked being on the defensive line in football, where it’s maximum output for about 15 seconds, right?
Dave: I can watch American football as an anthropologist, but not as a sports fan. As an anthropologist, it’s fascinating, but as a sport, it’s just a … Yeah, I don’t.
Jim: But don’t get me started on the boredom of soccer or so called F-U-T-B-O-L.
Dave: Just do cricket one of these days, which is really exciting. It takes five days.
Jim: When I worked for Thomson, which at the time was a hybrid British-Canadian-American company, we had a fair number of Brits, and just to annoy us, when the Brits would get together, they would deconstruct the previous week’s five-day matches. Nothing drives Americans away faster than that.
Dave: And baseball is what we call softball and it’s played by girls in schools, all right?
Jim: Well, actually, my daughter was a fast-pitch softball player. And let me tell you, those girls are tough, tough as nails. That’s in some ways a more dangerous game than baseball, because the balls are moving almost as fast and the distances are one third shorter. So literally you are now inside the human reaction window. And so the possibility against-
Dave: [inaudible 01:08:27] same in Ireland, with the added danger that somebody’s likely to beat you over the head with a shinty stick by accident if you do it.
Jim: All right, let’s go … I just have to tell you this, because you just triggered a memory. My wife’s anthropology professor in university loved American football, not as a sport, but as an anthropologist. She particularly said she liked to track the patting of each other’s butts.
Dave: Yeah, and the rituals are fascinating. If you go to West Point, when they pull the flag on at halftime, you suddenly realize the significance of that, all right? So American football is nothing but a collection of rituals interspersed by television adverts and the occasional chucking of a ball.
Jim: And 15-second spurts of violence, which makes it so much … That’s the essence of it. That’s why I liked it. Just being able to go all out for 15 seconds, right? All right, one last thing before we move to the futures side of your thinking: you had a quite nice example in the book about how to design strategic interventions starting with stories. Maybe you could take us through that arc.
Dave: Yeah, so that’s this principle of … It’s the difference of understanding where you are, not where you want to be. So most strategy starts off with a group of executives getting together with some very expensive consultants, sitting in a room and deciding their three or five-year plan. Why American companies adopted the planning cycle of Soviet Russia, I’ve never understood, but they do, all right? In complexity theory, what matters is to map the present and identify where you can go next, right? Now, the futures stuff we’ll talk about in a minute. The way we do that with SenseMaker is we can use the whole of the workforce to gather micro-scenarios. We can present situations. We can get responses. So that allows us to map where the organization is and what it understands is possible next. So that’s a much more energy-efficient approach than defining a future state and closing the gap.
Jim: Well, of course, the other issue is we know from complexity science, the ability to predict the future is pretty minimal.
Dave: Yeah, that’s my point about adjacent possibles and the Frozen 2 strategy. We can actually produce … I remember the first time I showed what we’d done to Stuart Kauffman and said, “We’ve just done fitness landscapes with narrative,” and I thought he was going to kill me. He did actually quite like it, all right? I waited until he’d had a bottle of wine first and I’d said nice things about his latest book. But what we’ve done is we’ve taken the fitness landscape concept into narrative topographies, and we use those for cultural mapping, for safety mapping, for attitude mapping, but they’re also used for strategy. Because what they indicate is where you can go next, not where you would like to go next. And I think that’s the key switch.
Jim: Yeah, that turned out … That was the first application of complexity thinking to business that I ever did. This was before I had any formal exposure to the literature, other than having read Stuart Kauffman’s original book, The Origins of Order, and John Holland’s book on genetic algorithms. I designed a co-evolutionary fitness landscape for doing mergers and acquisitions work at Reuters, Thomson.
Dave: I used GAs to choose between neural networks. [inaudible 01:11:54] more effective.
Jim: Yeah, I did some of the first work on encoding neural networks into GAs back in 2001, after I retired. But anyway, that’s another story. Let’s finish off your story here about how you used narrative. The thing I found most interesting is you got the landscape, and then you actually went one step beyond that, which was the challenges implied by the landscape.
Dave: And I think what you’re looking at … I mean, this is where we know we’ll be taking constructor theory sideways from physics.
Jim: We’ll talk about that next. So don’t dig into that quite yet.
Dave: Okay. So what we’re really doing … this is called the vector theory of change. What you do is you look at a landscape topography of narrative and you say, “What can I do tomorrow to create more stories like these and fewer stories like those?” Now, you can engage anybody in that question, and that allows you to create what we call a fractal representation. So I can show the narrative landscape for the whole of a country, which means a president can look at it and say, “What can I do as president to create more like these and fewer like those?” But from the same source data, I can present the data to a district council, who can say, “What can we do to create more like these or fewer like those?” or to a school principal. So what we’re doing is, from the same source data, we’re basically aggregating at people’s level of competence to act, which means the topographies actually look radically different depending on who’s looking at them.
Jim: Appropriately so based on level, right?
Dave: It is. So what it means is everybody’s moving in a direction which is relevant to where they are. The overall system is aligning, but you get rid of these averaged interventions across the whole company. And again, that’s tails-of-the-distribution work, not center-of-the-distribution work.
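The fractal-representation idea can be sketched as aggregating the same source stories at different organizational levels, so each actor sees a pattern at their own level of competence to act. The field names and story data below are invented for illustration.

```python
# Sketch: one set of source stories, aggregated at different levels.
from collections import Counter

stories = [
    {"district": "north", "school": "A", "tone": "more"},
    {"district": "north", "school": "B", "tone": "fewer"},
    {"district": "south", "school": "C", "tone": "more"},
    {"district": "south", "school": "C", "tone": "more"},
]

def landscape(stories, level):
    """Aggregate story tones at the chosen organizational level."""
    counts = {}
    for s in stories:
        counts.setdefault(s[level], Counter())[s["tone"]] += 1
    return counts

# The district view and the school view come from the same data but
# look different; each actor asks "what can I do at my level to get
# more stories like these and fewer like those?"
print(landscape(stories, "district"))
print(landscape(stories, "school"))
```

The same aggregation function, applied at different keys, is what makes the views "fractal": no new data is gathered per level.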
Jim: Yeah, and in the one scholarly report I ever wrote on combining complexity and organizational theory, I made the point that different parts of the organization are at different places on the exploitation frontier.
Dave: And [inaudible 01:14:07] engage people. We’ve just done a big project on residential care homes for the Netherlands government, where we’ve had continuous narrative capture from relatives and the medical staff. And that’s given us the ability to look at something and say, “Well, how would you get somebody to tell more stories like these and fewer stories like those?” Now, if you go to a medical person and say, “How do you increase patient safety?” they’ll get defensive. If you say, “We need more patient stories like these and fewer patient stories like those,” or, “Have you noticed, when you captured the story from the patient, you interpreted it one way, they interpreted it another, there’s a difference between them?” you’re asking people questions at this very fine level of granularity, which means they can make changes very quickly. And I think that’s key. It’s lots and lots of small things. And this is key: lots and lots of small things happening in parallel. You can afford more failure, which means you get more learning, and basically you can steer the ship, to use the metaphor. You always navigate complexity.
Jim: Interesting. Let me toss a softball to you. You still need abstraction to be created, but it’s very local and it’s very warm, context-rich, as-
Dave: Now, we do that with high-abstraction metadata. That’s the stuff we originally developed on the DARPA program working for Poindexter. So we had to stop people second-guessing what results people wanted when they put data in. I’ll give you an example on 360s, which are extensively gamed in most companies. So we’ll … Actually, every time you interact with a leader, you describe the interaction and you put it onto six triangles. One of the triangles says, “In this experience, the leader was altruistic, assertive, analytical.” So we put three … You can’t say anything negative about the leader, but then the leader sees the pattern: they’re all assertive, analytical, they’re not altruistic. So they can make small changes.
So that’s a key concept that we developed and patented. And we now use symbols as well as language, which is even better. You want to gather data from people at scale, but you don’t want them to be able to trace input to output. The minute you can trace input to output, you haven’t got a measurement system anymore. You’ve got a gameable system.
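The triangle idea can be sketched as a barycentric triple: each response is a point inside a triangle whose three corners are the anchors from Dave's example, and only the aggregate position is shown back. The coordinate scheme and numbers here are an illustrative assumption, not how SenseMaker actually encodes responses.

```python
# Sketch of a signification triangle: each response is a barycentric
# triple that sums to 1.0, in the order
# (altruistic, assertive, analytical).
responses = [
    (0.1, 0.5, 0.4),
    (0.0, 0.6, 0.4),
    (0.2, 0.4, 0.4),
]

def centroid(points):
    """Average position of all responses inside the triangle."""
    n = len(points)
    return tuple(round(sum(p[i] for p in points) / n, 2) for i in range(3))

# The leader sees only the aggregate pattern: weight sits on
# assertive/analytical, almost none on altruistic. No single input
# can be traced to the output, so the system is hard to game.
print(centroid(responses))  # -> (0.1, 0.5, 0.4)
```

Because no corner is negative, respondents never say anything bad about the leader directly; the signal only appears in where the cluster of points sits.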
Jim: Yeah. Whose law is that? I forget the name of the guy probably or-
Dave: [inaudible 01:16:20].
Jim: Yeah, once a measure becomes worshiped-
Dave: That’s Strathern’s variation of Goodhart’s.
Jim: Goodhart’s law. Yeah, Goodhart.
Dave: And it’s Strathern’s variation, but everybody quotes Goodhart’s. But that isn’t Goodhart’s. It’s Strathern’s variation.
Jim: Got you. All right, well, this has been very, very interesting, but let’s now move way out into the fringes. As I saw in one of your blog posts, I went, “What the fuck?” And then I did a little bit more research and said, “Okay, not as crazy as it seems.” You have proposed-
Dave: I like that. That’s Jim, all right?
Jim: Yeah, I consider that the highest compliment, when someone says, “I read something you wrote, Jim, and after thinking about it, I realized it wasn’t as crazy as it initially seemed.” So I consider that a compliment, not an insult. And that is, you have proposed using constructor theory, which is a physics theory I am aware of, developed by David Deutsch and friends. When I’ve been thinking about it in terms of physics, it’s essentially a dual. It’s the same physics, but looked at from a 180-degree-
Dave: Switch perspective.
Jim: Yeah, switch perspective, which is focusing on enumerating what is impossible rather than the rules of what is doable. And it is literally a dual because it’s the same thing, but the different lens is amazingly powerful, so-
Dave: And it’s challenging an atomistic interpretation of everything. I always think we went as far as we could with quarks, all right? So it’s challenging that concept.
Jim: And so this is an interesting, controversial and still early fork in physics, but then you want to take this a step further and apply it to social design. So make that move for us.
Dave: And I think, actually, in social systems it’s a lot less controversial than in physics. So I first came across it about seven, eight years ago. The way we develop stuff within the company is we spend about five to six years in the theory stage. Then we develop a method consistent with the theory. Then we spend four or five years in experimentation. So that’s how Cynefin went through the process, right? This one, which is called Estuarine mapping, that’s the name we gave it, has actually gone in six months where it took Cynefin 10 years to get. So it’s just taken off in ways we didn’t expect.
So we come back to that constraint mapping thing within the field guide. We map the constraints, and then we place them onto a grid of the energy cost of change against the time to change. That can be done in a workshop, and we can now fully automate it, which is even more powerful. And then you draw a line at the top, which is called the counterfactual line. Anything above that line, the energy cost of change or the time to change is too high, therefore it won’t change. And we’ve now added a liminal line behind that which says, “Well, we can’t change this, but somebody else could.”
That’s actually quite interesting when it’s been presented at C-level. They’d say, “Well, I don’t mind if you change that. Why didn’t you ask earlier?” And then bottom left is called vulnerability, or high variability. And then what you do is you cluster the stuff. You know you can’t deal with the stuff top right, so you stop worrying about that. And everything else, you basically say, “What are we going to do with it? Are we going to increase or lower the energy cost? Are we going to increase or lower the time?” So you end up with 50 or 60 micro-actions to change the dispositionality of the system, so that the things you want are more probable than the things you don’t want.
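The grid Dave describes can be sketched as a simple classification: each constraint gets an energy cost and a time to change, and is sorted against a counterfactual line, a liminal band behind it, and a vulnerable region at bottom left. The constraint names, the linear form of the lines, and every threshold below are invented for illustration.

```python
# Illustrative sketch of the Estuarine grid.
constraints = {
    "regulation": (9.0, 9.0),         # (energy cost, time to change)
    "reporting_cadence": (2.0, 1.0),
    "team_norms": (3.0, 6.0),
    "vendor_contract": (7.5, 8.0),
}

COUNTERFACTUAL = 16.0  # above this line: won't change, stop worrying
LIMINAL = 14.0         # between the lines: we can't change it, others might
VULNERABLE = 4.0       # bottom left: high variability, watch closely

def classify(energy, time):
    total = energy + time
    if total > COUNTERFACTUAL:
        return "counterfactual"
    if total > LIMINAL:
        return "liminal"
    if total < VULNERABLE:
        return "vulnerable"
    return "actionable"  # candidate for micro-actions on energy or time

grid = {name: classify(e, t) for name, (e, t) in constraints.items()}
print(grid)
```

Everything tagged "actionable" becomes a candidate for the 50 or 60 micro-actions: raise or lower its energy cost, lengthen or shorten its time, and thereby shift the dispositionality of the system.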
Now, it’s been quite fascinating. I reckoned it would be good for conflict resolution, which it has been. So we just did one with one of the big car groups on electric charging. Before that, they had four warring groups. At the end of it, they ended up with, I think, 20 projects. Nobody would’ve thought of the projects up front, and they couldn’t see now why they wouldn’t do them. And nobody had to take a position on how it would look at the end. So again, you’re changing the topography of the system. The way I’ve summarized it, which has taken off a bit, is to say, if you can make the energy cost of virtue less than the energy cost of sin, life is a lot easier.
Jim: But less fun.
Dave: No, I disagree … Well, yes and no, all right? We won’t get into limbo and purgatory and stuff like that, but I think what’s fascinating to me is this seems to have hit a buzz. We’ve now done, I think, 20 projects in the last three months, at C-level, with people saying things like, “Well, what we’re focused on is, we know if you could change that, it might produce a good result.” So we’re focused on how we change it and see what happens, rather than deciding what end result we want. So I think it’s much more consistent with complexity. It’s getting more sophisticated. We’re freezing it next week and putting it under version control for the first time. It’s been very fluid over the last six months as we’ve developed it.
The language has been changing, the representation has been changing, but it does provide, effectively … Several academics looked at it the other day, and they’ve been pressing me to write the book, which I’ve still got to write, because they said we need an alternative to Porter for strategy. And I remember one of them, who said that to me 10 years ago, looked at this and said, “You’ve done it. That is the complexity alternative to Porter’s five forces.” It will get refined, other people will get involved in it, but it’s basically dispositional management rather than outcome management.
Jim: Okay. Then I understand that, and it sounds like it could well be a very strong sauce, but I don’t yet see how that actually maps to even an analogy or metaphor with constructor theory.
Dave: Oh, well, constructor theory basically says you’ve got constructors … We have bad constructors in as well. I didn’t go into that for the sake of time. So a constructor is something which produces a transformation when things pass through it or come into contact with it. The difference between us and David [inaudible 01:22:30] is we would say, in human systems, a constructor can change in the act of construction. It doesn’t have to stay rigid. So we’ve also said, in human systems, counterfactuals relate to how people feel about the subject as much as physical reality.
Jim: Yeah, that’s a big metaphorical jump. In constructor theory, we’re talking about things that are literally impossible, right?
Dave: Well, yes and no. But also, if you take the new materialist stuff, then actually they are impossible because of the nature of the assemblage structure. So as I say, we didn’t just use constructor theory. We brought in some of the other stuff we’ve been doing, and we make sure we make it very clear we’re not saying this is the instantiation of David’s work in social systems, but we acknowledge the source of a lot of the ideas. So the fundamental principle is you establish what can’t change, and then whatever has the lowest energy gradient will win. That’s the fundamental. And if you look at Deutsch’s paper on evolution from constructor theory, that’s actually brilliant. It’s evolution as an energy-reduction process rather than survival of the fittest.
Jim: Yeah. And that’s actually very interesting, because that again gets us back to our Game B concept from the very beginning. The first week we talked about Game B, we actually applied the chemical concept of catalysis: what actions can be taken to reduce the activation energy for various things X, which we decide are desirable? How similar is catalysis to your idea of minimizing energy?
Dave: It’s one of the metaphors. It’s like we use [inaudible 01:24:14] as a metaphor as well.
Jim: What was that word?
Dave: [inaudible 01:24:17] fungi. We use that as a metaphor, yeah, because it creates a healthy ecosystem. That’s informal networks. So yeah, we have different actions you can take. Once you’ve got a cluster of constraints or constructors on the energy-time gradient, you can shift the energy: you can make it more expensive in energy cost for people to do things if you don’t want them, or you can lower it if you do want them.
Jim: Yeah, Me Too being a good example of raising the energy cost of certain activities.
Dave: I was watching The Morning Show the other day. I actually binge-watched The Morning Show all the way through. And that’s actually quite brilliant, because it shows how the Me Too movement flips misogyny from background to active in terms of the way it works. And again, you can see that as a phase shift, right?
Jim: Yeah.
Dave: So as I say, as we’re developing that, what we’ve actually found is it allows companies to have a completely noncontroversial mapping of the space in which they can operate before they start the political stuff.
Jim: Now let’s talk about the relationship, because as I’m trying to bring my head around to thinking about this as a strategic framework, there are the E and T, energy and time, dimensions. Some things just take time. Some things cannot be rushed. Famously, making a baby takes nine months. Building a decent piece of software also takes nine months at a minimum, and tripling the manpower probably doubles the time, it doesn’t cut it by three. So talk a little bit about the E and T dimensions and how they’re related.
Dave: So on the time one, the other example I often give is London taxi drivers. If you don’t know, there’s a 40% pass rate on the test. They’re given two points in London and they have to describe the route out and the route back, turn by turn, from memory, mentioning every major landmark.
Jim: After two years of study typically.
Dave: Two and a half years, yeah. And the reason is it takes two and a half years for the hippocampus to physically change its structure to accommodate the knowledge. And we know there’s a whole body of human systems where you might take 10 or 15 years to gain the experience, socially or whatever. So some things are time-dependent. That’s what that means. And there are ways to shorten time gradients in some cases. Energy we’re using as shorthand for resource and attention. Attention is as important as resource.
Jim: And so then you have a two-dimensional space, and then presumably, if you’re thinking in strategy terms, there’s some frontier that trades off time and energy.
Dave: You got it.
Jim: And you want to be at the frontier, and then for macro strategic reasons, you choose where you want to be on the frontier, but the first thing you want to make sure is that you’re not behind the frontier. Is that reasonable?
Dave: Yeah, and what we’re also doing is we’re mapping how the energy-time grid is seen from different parts of the organization. And what we’re about to start doing in military terms, because it’s generated a lot of excitement in military strategy, is red/blue team [inaudible 01:27:21]. But what it actually does is it combines grand strategy and tactics in the same framework. So it breaks the linearity between strategy and tactics, because it all goes into the same system.
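The frontier Jim describes can be sketched as Pareto dominance on the energy-time grid: an option is behind the frontier if some other option costs no more on both dimensions and strictly less on one. The option names and numbers below are invented for illustration.

```python
# Sketch of the energy/time frontier: find options not dominated
# by any other option on both dimensions.
options = {
    "retrain_staff": (5.0, 9.0),  # (energy, time)
    "hire_experts": (8.0, 3.0),
    "outsource": (9.0, 4.0),      # dominated by hire_experts
    "automate": (6.0, 6.0),
}

def dominated(a, b):
    """True if cost pair b beats cost pair a on both energy and time."""
    return b[0] <= a[0] and b[1] <= a[1] and b != a

frontier = {
    name for name, cost in options.items()
    if not any(dominated(cost, other) for other in options.values())
}
print(sorted(frontier))  # -> ['automate', 'hire_experts', 'retrain_staff']
```

First make sure you are on the frontier (nothing dominates you); only then does the macro-strategic choice of where along it to sit come into play.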
Jim: Great. Well, very interesting stuff and I really look forward to revisiting this with you again when this is cooked a little further. You said you’re working on a book on it. In fact, originally, we were going to do this podcast based on that book, but I think that book has taken a little longer than you thought.
Dave: Yeah, it’s taken a little longer than we thought. We decided we’d write three books rather than one book, so that’s the consequence. Yeah.
Jim: Well, Dave, I really thank you for engaging in an extraordinarily interesting and deep conversation today at the intersection of complexity science and the real world and would love to have you back in the future.
Dave: Always a pleasure to be on.