The following is a rough transcript which has not been revised by The Jim Rutt Show or by Joe Norman. Please check with us before using any quotations from this transcript. Thank you.
Jim Rutt: Howdy! This is Jim Rutt, and this is The Jim Rutt Show. Today’s guest is Joe Norman, a freelance complex systems and data scientist and aspiring homestead farmer.
Joe Norman: Hey, Jim. How are you doing?
Jim Rutt: Doing real good. Joe’s been associated with the New England Complex Systems Institute, often identified by the acronym NECSI, and has published articles with people there. So he’s another complexity guy, like me. But I must confess, I’ve been a Santa Fe Institute bigot since I first read John Holland’s Adaptation in Natural and Artificial Systems, and soon thereafter Stuart Kauffman’s Origins of Order. Truthfully, I know relatively little about NECSI. Maybe, Joe, you could give us a short version of your at least academic life and tell us what drew you to NECSI and what it’s about.
Joe Norman: Sure, absolutely. Just to make it even easier, we call it NECSI.
Jim Rutt: NECSI, even easier.
Joe Norman: Yep, yep. So it’s very hard to say, and then moderately hard to say, and then easy to say, so NECSI is good. I did my PhD at Florida Atlantic University at the Center for Complex Systems and Brain Sciences. There I studied the self-organization of perception, dynamics, pattern formation, all in the domain of visual perception. After that, I was really looking to expand the scope of what I was looking at and NECSI had caught my interest. I had attended a course there during grad school, a two-week long course taught by Yaneer Bar-Yam, who’s the President and Founder, as well as Hiroki Sayama, who’s now at Binghamton University and a very fine complex systems scientist, I will say.
Joe Norman: What really caught my eye about NECSI was an attention to real-world problems, including engineering and design problems. So how complex systems not only is an interesting way of understanding the world, but what it can actually do to make differences for real problems that people face. I headed up to NECSI at that point, spent a couple of years there as a post-doc. I’m currently an affiliate there, and I’m also an instructor at the Real World Risk Institute. We also have an acronym for that. RWRI, and we say RWRI, a lot of R’s in there. That’s Nassim Taleb’s school. We run it three times a year out of New York City, and I just do a few hours of lecturing there each time. That’s another point of involvement with the complex systems community and its application to real world issues.
Jim Rutt: What drew you to the study of complexity? As we both know, it’s a relatively small field.
Joe Norman: Well, actually, I can say it kind of runs in the family. My father worked for many years at MITRE Corporation, and he developed himself into, really, a pioneer in the field of complex systems engineering. So when I was a kid, there was always… you mentioned Stu Kauffman, for instance, John Holland. These books were around when I was a kid, and I’d pick them up and look at them and read them, kind of casually browse them. I was much more interested in skateboarding, but it kind of started to sink in, I think, at that point. As I got a little older, I started looking during college at philosophy of mind, and that drew me towards some systems thinking. For instance, the work of Francisco Varela and Humberto Maturana around autopoiesis, and obviously the development of AI from symbolic AI through connectionism, et cetera. Really, I started to link up all those ideas when I was naturally kind of pulled towards these ideas of complexity and emergence and self-organization. So really, a lot of connections started to light up for me then, and it’s just been a gradual one step at a time since then.
Jim Rutt: Yeah, it’s interesting. People take different paths. My dad was a cop, so I never saw any of this kind of stuff. But in the later days of my business career, I stumbled on to … It’s an interesting story how little chance happenings change one’s life. I happened to see a one paragraph little blurb in Scientific American about something called genetic algorithms. “What the hell is that?” I said. “Sounds interesting, let me look it up.” I can’t tell you the exact date, but it must have been within a few months of Amazon starting. One of the earliest books in my Amazon order history is Adaptation in Natural and Artificial Systems. I read that, and I go, “Whoa! This is quite interesting.” Then I found Origins of Order, read that, and then from there I kind of used that parallax to discover the Santa Fe Institute and started drilling into that. I started using complexity thinking in my business career, particularly things like coevolutionary fitness landscapes, which actually turn out to be very useful in thinking about corporate growth, and particularly [NMA 00:04:26].
Jim Rutt: Then when I retired, I started doing some work that came to the attention of Santa Fe Institute, and they invited me out there. I was going to go out for a year as a researcher in residence, and ended up staying ten years, and ended up as the chairman of the place. Weird things, all from reading one paragraph in Scientific American. And as I hoped, complexity science turned out to be a deep enough domain to keep me from going back to work. That was my real fear, right? It’s rich enough to last a lifetime.
Jim Rutt: Of course, as we both know, there’s a lot of things that call themselves complexity science. How would you describe the domain of complexity science that you’re interested in?
Joe Norman: Well as I said, I’m really interested in this idea of applied complexity science. So not just the understanding of some of the themes and features of complexity, emergence, self-organization, etc., but how do those actually impact our decision-making? You mentioned an interesting one, coevolution and coevolutionary fitness landscapes. So how does coevolution, for example, fit into how we structure organizations? What does that mean for how we expect them to work? My current professional interest is addressing those questions of, “Okay, how do we now go apply?” But with respect to what complexity science is per se, it was really Stu Kauffman who crystallized the idea in my mind that what’s crucial, what’s essential about a set of phenomena we’re interested in, or a system that we’re interested in, or a set of objects, whatever it may be, isn’t actually in the stuff that they’re made of as we usually think about it. It’s actually in the organization of the stuff.
Joe Norman: This started to really click for me. It started to make a lot of sense, because especially in the West I’ll say, we’re very much exposed to reductionism almost as an underlying … We treat it as an obvious assumption underlying all of our investigations, the way we understand the world. For me, like I said, it was really Kauffman and his focus on organization. I remember, for a time at least, he used the term “propagation of organization” that really stuck with me. It lit up a light bulb in my mind, where I said, “A-ha. This is really the key. We’re keying in on something essential here.” What systems are, and how they produce behaviors, how they interact with other systems, etc., are all really about patterns, and these patterns exist at different scales. So its organization is really important because it’s about constraints and structure on interactions. Complexity is really about the way interactions give rise to phenomena.
Joe Norman: These are some of the themes that really capture me, and I find important and essential to understand the world. Some of the more, even currently, edge thinking that I’m really attracted to is put forward by folks like Chris Alexander, another SFI guy, where all of a sudden we have … Not only are we moving beyond reductionism in terms of emergence of novel properties, but also the way that whole systems, say a single organism, give rise to what we can part out and analyze as the parts of the system. So very much unlike the way we construct artificial systems often, an organic and growing complex system develops and will actually synthesize its own components, functional components, etc. This is very much beyond what reductionism as a philosophy can really speak to.
Joe Norman: I feel like that’s really where the edge of the thinking is, is how do we now not only have emergent properties, but how do we have things like functional properties, say, in the organ of an organism arising out of not only that piece of the system, but that piece and how it relates to the context that gave rise to it? I just feel that we’re only at the beginning, really, of grappling with that kind of a dynamic.
Jim Rutt: Great. I love it. I think we must have come in through a very similar door. The simple metaphor I used for people back home to explain complexity versus reductionism is I would say, “The study of the dancer is reductionism. The study of the dance is complexity.”
Joe Norman: Oh that’s nice, that’s nice. I really like that because the dance is something that is kind of playful, it’s organic, it’s not a recipe. It has structure, but it’s not sort of preformulated. I think so much of what we experience in the world is really that spontaneous kind of, you could call it, game playing or dancing. We don’t really have a great handle on that. In fact, even the structure of formal systems proper doesn’t play well, in a crucial way, with that kind of spontaneous playfulness that we actually observe in the world.
Jim Rutt: Even built systems that involve humans have that attribute, even if the designers didn’t intend them to, because humans are willful little sons of bitches, right? I worked in corporate America much of my life, half start-ups and half big corporations, and would always tell every new CEO, “All those buttons and levers that are theoretically on your desk? Most of them aren’t connected to anything!” Right? You can press all those buttons, pull all those levers, not a goddamn thing will happen because the corporate equivalent of deep state will keep on doing what it’s doing in its own self-organizing, self-interested agency risk fashion.
Joe Norman: And if they do manage to do anything at all, as you mentioned, most likely whatever those consequences are will be unintended ones, as opposed to whatever that executive imagined the effect would be. We just have so many interdependencies, and subtle effects in variables, and unidentified relevant variables, etc. So when you pull on that lever, you think A is going to happen, and maybe it happens, maybe it doesn’t happen, but you often get X, Y, and Z instead.
Jim Rutt: Indeed. The other topic you mentioned is one that’s also of considerable interest to me. I will say I’m not an expert in it. I probably do need to read more on it. This is, as you say, the cutting-edge thinking about the distinctions between wholes and their components. Some people use a phrase which to my mind is still a little hazy, downward causality. That somehow the whole creates an ecosystem, an environment that supports the existence of the parts. So I think actually the reality is something even more strange than that. Have you heard of the term downward causality? Do you have any thoughts on that?
Joe Norman: Oh yeah, absolutely. I mean, downward causation [inaudible 00:10:21], this is an attempt to grapple with this issue of, “Well, if emergence is not merely epistemic but, in some sense, ontologic, then what does it mean when we say something new emerges? Is it somehow contradicting the lower level properties if not everything’s arising from them?” So downward causation is an attempt to start to deal with that problem. Now, I also have had … I don’t have any strong feelings on downward causation except I wonder if it is the right frame. And I actually really liked the way you framed it, with respect to sort of a novel domain emerges that supports new dynamics that wouldn’t be present otherwise without that domain. Now is that top causing bottom? Maybe. Maybe that’s not the right way to think about it, though. Maybe it’s emerging constraints, and then from those constraints, you can get novel structures and patterning. But I don’t have a firm commitment one way or the other, but I do think it’s at that edge that I was talking about, and downward causation is one key word, call it, that signifies that issue that’s at stake.
Joe Norman: I don’t know if anyone has done work that’s very convincing on that. You know, there’s a fellow, Erik Hoel, who has done some work using the idea of error-correcting codes and coarse-graining to account for some of that. But none of it that I’ve seen has been totally satisfactory in terms of, “Okay, this really captured the essence.” And I do like, again, what you said in terms of a developing and emerging domain that then enables novel dynamics to happen.
Joe Norman: I mean, you think of something like Turing morphogenesis patterns, Turing patterns, and Turing was really on to something with that, no doubt about it. But what it leaves out is the fact that the medium those patterns are occurring on in biological systems is actually provided by the organism itself. So when the skin, say, develops into stripes on a fish, those stripes and the development of those stripes can be at least somewhat accounted for by reaction-diffusion systems, things like that. There’s this kind of interesting causal loop where, yes, you get the patterns, but it’s the developing organism that’s generating the domain for those Turing patterns. So there’s something we’re not quite capturing when we just look at Turing patterns.
Joe Norman: In fact, I have some – unfortunately unpublished, and that’s just sort of a priority stack issue – but unpublished work I’m doing to actually deal with some of that issue, and how you can think of nested layers of pattern formation where, say, a bounded organism is forming and creating conditions for other sub-patterns to be generated within that domain.
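[Editor’s note: as a rough illustration of the reaction-diffusion mechanism Joe refers to, here is a minimal Gray-Scott sketch, one member of the Turing-pattern family. All parameter values are illustrative assumptions, not taken from the conversation, and the fixed grid stands in for exactly the “given medium” Joe points out a real organism would itself be growing.]

```python
import numpy as np

def laplacian(z):
    # Discrete Laplacian with periodic boundaries.
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

n = 128
u = np.ones((n, n))                       # "substrate" chemical
v = np.zeros((n, n))                      # "activator" chemical
u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50    # perturb the centre so patterns can nucleate
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, F, k = 0.16, 0.08, 0.060, 0.062   # diffusion rates, feed rate, kill rate

for _ in range(10_000):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# v now holds a spotted/striped field that emerged purely from local interactions
# on a fixed, externally supplied 128 x 128 medium.
print(float(v.min()), float(v.max()))
```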
Jim Rutt: I also think it’s hard to get one’s head around until one thinks about temporal depth, right? Particularly with respect to natural systems. One of the truest statements about biology is nothing makes sense in biology without respect to evolution. So if we think about the whole providing an ecosystem that has some degrees of freedom for evolution to work in, we can start thinking about this whole and parts coevolution, essentially. You can look at a series of generations. Don’t look at a static developmental framework in a short period of time, but rather multiple generations. The whole provides some degrees of freedom for the internal parts.
Jim Rutt: A person’s liver can be bigger or smaller without killing him, though it probably has some impact on their phenotypic fitness, and vice versa. Say there’s some change at the top: the organism and all of its components, including its brain and its social systems, discover a new way of eating, or find a new food that’s richer in things that are good for the liver. So the liver can get smaller, and less of our energy is spent on liver function and more can be spent on brain function. Over a time depth, you have a very interesting coevolution going on between the phenotypic full-organism level and its structural components.
Joe Norman: That’s absolutely true. One of the things that Kauffman really hammers home is that these functional roles that different structures play over that evolutionary time are pretty fluid, in the sense that just because a structure plays function A today doesn’t mean it couldn’t play some other function tomorrow. A striking example of that is the jawbone in our long-distant lizard-like ancestors. At some point in evolution, it began to become, perhaps, we’re not sure, but perhaps vestigial, in the sense that it became small and not responsible for articulating the jaw anymore, and there was another bone that evolved that did serve that function. But the structures didn’t disappear. What they did is migrate towards sort of where our middle ear now exists and become our middle ear bones that allow us to transduce pressure waves into nerve impulses, otherwise known as hearing. Something that was once for opening and closing the mouth later became for hearing, which seem worlds apart, yet evolution doesn’t mind. It will grab what’s available, and through that process of coevolution, discover really interesting, novel functional purposes. This is something of practical import.
Joe Norman: You mentioned that you found coevolution really useful for thinking about, say, organizations. This is because things like this can emerge, and they’re not something that anyone would necessarily imagine. They’re not necessarily even pre-statable in terms of state space, which is a really interesting possibility.
Joe Norman: So exactly right. Unless we look at the fullness of time, we don’t really have a good handle on these things. I guess another piece that I sort of hinted at, but that’s really essential for me in thinking about complex systems, is that we’re always really facing patterns that exist at multiple scales. That’s multiple spatial scales. It’s also certainly multiple time scales. That functional fluidity, I think, is huge. It also speaks to the way we think about engineering, say: imagine someone engineering the organism that became us. At some point they said, “I don’t really understand what these old jaw bones are doing, so let’s actually optimize them right out and make this system much more efficient, because it’s wasting resources.” If that had happened, then we would have missed the opportunity to hear.
Jim Rutt: Yep, absolutely. It turns out my actual deep scientific expertise is in evolutionary computation. We see this all the time in, let’s say, genetic programming, which is essentially a way to evolve programs: you start with random programs, you do crossover and mutation, and you get better programs to solve problems. If you don’t manage your genetic program, you can quickly get bloat, where you get lots and lots of what seems like dead code. And as you say, there are optimization algorithms one can do. In fact, you can actually just put a very heavy tax on length, and that will cause evolution to select for shorter programs and hence squeeze out most of the bloat. But it’s now well known that if you squeeze out too much of the bloat, you don’t have enough pieces, so you don’t have enough diversity for future evolution. There’s that subtle balance between no diversity on one side, and bloat, which takes too many of your resources, on the other.
Jim Rutt: Here it is in a pure software evolutionary environment, and as you say, it fits in a very analogous fashion with real-world evolution of animals.
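[Editor’s note: a toy sketch of the “heavy tax on length” Jim describes, using variable-length token genomes rather than real genetic programming. The target, operators, and parsimony weight are all illustrative assumptions; setting PARSIMONY to zero lets bloat accumulate, raising it squeezes genomes down.]

```python
import random

TARGET = 42
PARSIMONY = 0.1      # the "tax" per token of genome length

def fitness(genome):
    # Lower is better: distance from the target plus a length penalty.
    return abs(sum(genome) - TARGET) + PARSIMONY * len(genome)

def random_genome():
    return [random.choice([-3, -1, 1, 3]) for _ in range(random.randint(1, 30))]

def crossover(a, b):
    # One-point crossover that tolerates parents of different lengths.
    return a[:random.randint(0, len(a))] + b[random.randint(0, len(b)):]

def mutate(g):
    g = list(g)
    if len(g) > 1 and random.random() < 0.3:
        del g[random.randrange(len(g))]                                        # deletion
    if random.random() < 0.3:
        g.insert(random.randrange(len(g) + 1), random.choice([-3, -1, 1, 3]))  # insertion
    if g and random.random() < 0.5:
        g[random.randrange(len(g))] = random.choice([-3, -1, 1, 3])            # point change
    return g

pop = [random_genome() for _ in range(200)]
for generation in range(200):
    pop.sort(key=fitness)
    survivors = pop[:50]                  # truncation selection
    pop = survivors + [mutate(crossover(random.choice(survivors),
                                        random.choice(survivors)))
                       for _ in range(150)]

best = min(pop, key=fitness)
print("best sum:", sum(best), "genome length:", len(best))
```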
Joe Norman: That’s fascinating, Jim. I didn’t know you were working on that. That’s really cool stuff. You know, as you talk about it, the thing that comes to my mind is, okay, the temptation is to say, “Maybe there’s kind of a sweet spot for that variety.” You know, like you said, if it’s too much bloat, it’s just hogging resources. It’s not enough, it’s not developing those opportunities and those adjacent possibles. Then perhaps, maybe there isn’t even an optimal there, but in the bigger, bigger picture, perhaps that’s also a variable that requires diversity. Some environments are more supportive of bloat, and some environments are less supportive of bloat. So you beget variety and then meta-variety, you might say. All of these ideas, I think, are so essential for how we’re going to build systems into the 21st century, because we’re pretty much beyond the point of being able to structure many systems in a top-down intentional way.
Joe Norman: So that’s really cool. I’d love to talk to you more about the computational evolution stuff. That’s very nice.
Jim Rutt: Yeah, it’s very cool stuff. That’s really what I did in this field, mostly. As you say that, I’m just gonna think out loud here, which is allowed on this show. Frowned upon in real life, but on the Jim Rutt Show, we allow thinking. All right? When you say diversity of bloat, what that actually means is probably the lack of tightness in the ecosystem. Here’s what typically happens in a natural system, and then I’ll give you the computational equivalent. When a major new innovation happens, let’s say fish come out onto the land, suddenly they have this unbelievably cool ecosystem that they can pillage, and grow and reproduce extremely rapidly with no limits for a while. But we go back to Malthus, and sooner or later, you reach the carrying capacity of the ecosystem, and we have no extra resources really. Inevitably, in stasis, most species are living on the edge of starvation all the time.
Jim Rutt: So during those pioneering periods, bloat would conceptually be okay at a couple different levels. One, because you don’t have to worry about starving, which is always the number one risk. You can be a little bit less efficient. You can carry around more extra components. When the times get tight, genes that are bloaty get selected against because they’re consuming more calories per unit of reproduction, and get squeezed out. I would say the same is true when trying to solve a problem in genetic programming, to the degree that the ratio of the hardness of the problem to the amount of computation you have seems weighted on the side of having lots of computation. I would hypothesize, thinking out loud here – I may actually do the experiment – that being more bloaty is good if the job is to solve the problem in the shortest period of clock time.
Jim Rutt: On the other hand, if it’s a really hard problem and you don’t have an unlimited computation budget, then one must think more about optimizing bloat for the search space that you’re in. Does that make any sense?
Joe Norman: Yeah, that absolutely makes sense, and if you end up running the experiments, I’d love to have a look and talk about them. I guess the challenge is – and once again now, I’m just sort of going out on a limb here and I would need to think about it more to have confidence – but I almost wonder if the question itself is undecidable in terms of a priori, do I know how much bloat this problem deserves, so to speak? I wonder if we can answer that a priori in any reasonable fashion. I think we can intuit in many cases kind of where that balance might be struck, but whether it could be solved once and for all, I’m not sure.
Jim Rutt: I’m actually quite strong on that. You cannot solve it for sure, forever. And why? The no free lunch theorem. David Wolpert, one of our SFI guys, earlier in his career, before he came to SFI, formulated the no free lunch theorem, which is one of the most important theorems in the universe! In fact, I divide people into people who know and practice the no free lunch theorem and those that don’t. Essentially, what Wolpert proved definitively is that in an arbitrary space, there is no optimal search strategy. Every search strategy has to be optimized for the search space that it’s being applied to. I would turn your words around and say that you may not have explicitly brought to mind no free lunch, but you intuited it when you said, “You apply your intuition to tune a search algorithm for a search space.” So that’s what you do, really. There is no right answer for searching any given arbitrary search space.
Joe Norman: That’s a great way of thinking about it. I love that. Okay, I’m gonna have to bring more attention to the no free lunch theorem. Love it.
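[Editor’s note: a tiny numerical illustration of the flavor of Wolpert and Macready’s result, not its proof. Summed over every possible objective function on a five-point domain, an “adaptive” neighborhood searcher does exactly as well as a fixed scan; the domain size, value set, and the particular adaptive rule are arbitrary choices for the demo.]

```python
from itertools import product

POINTS = 5                      # domain: positions 0..4 arranged on a ring
VALUES = range(3)               # each point takes a value in {0, 1, 2}; we maximize

def fixed_scan(f, budget):
    best, trace = -1, []
    for i in range(budget):
        best = max(best, f[i])
        trace.append(best)
    return trace

def greedy_neighbors(f, budget):
    # Adaptive rule: next query is an unvisited ring-neighbour of the best point
    # seen so far; if both neighbours are used up, take the lowest unvisited index.
    visited, best_val, best_pt, trace = {0}, f[0], 0, [f[0]]
    while len(visited) < budget:
        nbrs = [(best_pt - 1) % POINTS, (best_pt + 1) % POINTS]
        candidates = [p for p in nbrs if p not in visited] or \
                     [p for p in range(POINTS) if p not in visited]
        p = candidates[0]
        visited.add(p)
        if f[p] > best_val:
            best_val, best_pt = f[p], p
        trace.append(best_val)
    return trace

totals = [[0] * POINTS, [0] * POINTS]
for f in product(VALUES, repeat=POINTS):          # all 3**5 = 243 objective functions
    for j, algo in enumerate((fixed_scan, greedy_neighbors)):
        for k, b in enumerate(algo(f, POINTS)):
            totals[j][k] += b

print("fixed scan       :", totals[0])
print("greedy neighbours:", totals[1])            # identical, summed over all functions
```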
Jim Rutt: It’s an amazingly short, readable paper, but it, to my mind, is actually fundamental. If you don’t know the no free lunch theorem, you don’t actually understand the universe that we happen to be living in. So I’m glad I can offer that up to folks to read. David Wolpert, W-O-L-P-E-R-T. Another thing I saw in some of your writings, as I was reading them getting prepared for this chat, is that you tend to use the word “irreducibility” quite a bit. You know, some schools of complexity do, some don’t. I’d love to have you define that for our audience. Keep in mind, they’re smart folks, but mostly not with backgrounds in complexity science.
Joe Norman: I will say that when I use the term irreducibility in general, I’m using it somewhat colloquially, in the sense that I’m not referring to a specific kind of irreducibility necessarily, but rather the general concept of irreducibility. What I mean by that is you could have, for instance, a very specific kind of irreducibility such as the computational irreducibility that Stephen Wolfram talks about. In that case, all of his work is founded on elementary cellular automata and the fact that they present a universal computer, so you can think about, essentially, any process that is computable in that framework. In that frame, irreducibility can be summarized by saying, “Well, there’s no shortcut to the future to see how the pattern unfolds in time or what it will be at some arbitrary time in the future.” You have to go through every step. Everything in his world is discrete time, discrete space, cellular automata, deterministic. Nevertheless, even with all of that determinism in the system, there’s no concise, comprehensible formula to say, “Well, what will happen at time one billion, or ten billion?” Or whatever it is.
Joe Norman: So it’s irreducible in the sense that the whole pattern determines what happens in the pattern in the future, and there’s no shortcut. That’s sort of a summary of computational irreducibility. But it really gets into this idea of irreducibility more generally, which is the idea that there’s no further you can deconstruct the system into smaller pieces or fundamental laws, or something like that, in a way that lets you compress the system, compress it to some redundant or repetitive pattern that allows one to project in time or space what the system will be, what it will do, how it will behave. It comes up in other places. Like in cybernetics, irreducibility has to do with, essentially, matrices and whether they can be broken down, or a machine can be broken down, into smaller subcomponents that behave independently. That’s the general idea.
Joe Norman: I just like the term irreducibility because it really puts emphasis on that issue of, you’re looking at some phenomenon, and it’s as far down as you can get to address that phenomenon. And if you go any further, you actually destroy that phenomenon via the analysis.
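[Editor’s note: a minimal elementary cellular automaton stepper, the setting Joe cites for Wolfram-style computational irreducibility. Rule 30 is used here as the illustrative example; the point is simply that to know the configuration at step N you iterate all N steps, with no known closed-form shortcut for rules like this.]

```python
def step(cells, rule=30):
    # Apply one synchronous update of an elementary CA with periodic boundaries.
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit number
        out.append((rule >> idx) & 1)               # look up that bit of the rule number
    return out

cells = [0] * 41
cells[20] = 1                                       # single seed cell in the middle
for t in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule=30)
```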
Jim Rutt: That’s very similar to the computational measures of complexity, right? Is there any shorter way to state a problem than to just run the program, right?
Joe Norman: Right, right. The halting problem, of course. Absolutely.
Jim Rutt: Back to Wolfram, I played for a while with Rule 120 to see if I could find anything there, and the answer was no. Right?
Joe Norman: Fair enough.
Jim Rutt: Interesting stuff. Another key fundamental building block of conversations around complexity: emergence. I’m sure you’ve thought about that a little bit. Some of the things I found most useful there are Harold Morowitz’s book, The Emergence of Everything. He lays out 28 levels that start at quarks and work their way up to the universe. And again, John Holland: a very difficult book, but one worth reading, Emergence: From Chaos to Order. Who do you follow? Who have you read? What do you think about emergence?
Joe Norman: Holland, I’ve certainly read Holland. We’ve already mentioned Kauffman. I really enjoy Chris Alexander, who has some technical [inaudible 00:25:23] but often takes a sort of informal approach to these issues because he’s really thinking about the practical space of architecture, in his case. But his insights really apply to so many kinds of practices. On the more formal end, Robert Rosen is somebody who I feel doesn’t really get enough credit, and he deals with some of these issues with biology and the limits of formalism. I look across all these folks. I try not to commit myself to one treatment or another, but instead try to get a gestalt across the space: what is everyone saying in common, whatever the differences are?
Joe Norman: I think what’s so essential, and does kind of cut across all the different treatments, is the relationship between formal models and what happens in the world. So much of what we experience is emergence. Once again, this cuts across even folks who believe emergence is epistemic, meaning that it’s not really happening out there, but is something we need to invoke to help us understand what’s going on, and those who are more like a Kauffman, who believe it’s ontological, and that new things, new properties, arise in the universe that are properly novel.
Joe Norman: I’m very much interested in a practice-oriented approach to emergence, where we can start to think about how we react locally to local perturbations, local variables, and respond incrementally, essentially, to small pieces of information we gather from the system as we interact with a [perturbant 00:26:55] and it kind of pushes back and gives us feedback. Out of these very small incremental movements, how do things that we don’t imagine, or organizations that we never explicitly designed, come out of the system that we’re embedded in and interacting with?
Joe Norman: You did mention upfront that one of my current pursuits, and hopefully a lifelong pursuit, is homesteading. This has been a really embodied experience for me since my wife and I moved up to New Hampshire, and we got a little property where we’re doing this stuff. We’re growing our own food. We’re not able to provide all of our own food, but day after day, we’re making more and more progress. And really, where we are now: we’ve pursued the process for multiple years, but we’ve been living on this property for a year now, and a year ago I wouldn’t have known what to do. Even simple things, like, “Where do I put this fence? Where do I put this gate? Where do I put this garden bed? Where do I fell this tree? Why this tree, why not that tree?”
Joe Norman: All these little tiny decisions over time are actually evolving into systems of behavior that we’re embodying, where semantic relationships are emerging out of the different pieces and parts of the property. “Oh, this is the part where the compost collects, and it’s uphill, and that’s useful because, until we collect the compost to spread somewhere intentionally, it’s running down the hill into the garden. So it’s even passively helping to fertilize the garden beds.” Things like that. It’s just small things here and there, things I didn’t imagine upfront and couldn’t have designed, but they’re emerging out of our small, incremental interactions over time.
Jim Rutt: That’s really interesting. Again, just as an interesting “isn’t the world strange?” aside: I originally ran across Joe when a friend of mine said, “Hey, you ought to talk to Joe Norman. He’s a smart guy interested in local agriculture.” I had no idea he was a complexity dude. How about that?
Joe Norman: That’s a beautiful intersection. There’s not enough of it. There’s some, and especially when you do start to look into the local ag, even without it being explicitly called out, so many of the insights of complexity are embedded in there. For instance, in permaculture and some of these other philosophies.
Jim Rutt: We had Joel Salatin out at the Santa Fe Institute one time. That was quite interesting.
Joe Norman: I love Joel. I’ve met him in person, but I love his writings. That’s really cool. That must have been fun.
Jim Rutt: Joel’s a neighbor. Well, we live close to Joel. We know him reasonably well. I run into him fairly frequently. He is an amazing character. A true American innovator, and his books are well worth reading. My favorite is Everything I Want to Do is Illegal.
Joe Norman: Great title.
Jim Rutt: That’s a really interesting book. He is right at the cutting edge of helping to rethink what local agriculture might actually mean. Though I will say we did not really successfully inoculate him with the complexity virus, but I think we helped his perceptions a little bit. He certainly helped our perceptions a little bit by bringing in more practice-oriented things, more of the applied space. I want to jump here a little bit. When I was preparing for this, I like to prepare five to ten hours for each guest, and one of the things I ran across was your dissertation. I read that with considerable interest. It was on the perception of objects in motion. I particularly liked the fact that you referenced one of my favorite cognitive scientists who actually sort of became an anti-cognitive scientist later, JJ Gibson. If you’re up to it, maybe talk a little bit about what you learned in doing the work for your dissertation.
Joe Norman: Yeah, absolutely. JJ Gibson is wonderful. His most famous book is The Ecological Approach to Visual Perception. What he means by ecological is not necessarily in the way you think of ecology in terms of we’re looking at interactions of organisms or something like that, but the fact that the living human, or any living organism with vision in this case, is embedded within an ecological system. That has some profound implications for how we think about perception. One of his key concepts is this idea of affordance. An affordance is a … we can call it an atom of perception that is action-oriented or implies some set of possibilities for action. His idea is we don’t perceive abstract physical properties of the environment. We’re not gathering sense data to reconstruct a physical theory of what’s out there. Rather, what we’re doing is picking up opportunities for us to act on the environment or within the environment. You might think of something like a branch is graspable, and so you don’t see that the branch is approximately cylindric or something like that. You see, “No, I can actually grab that with my hand and pull on it, and pull myself up this tree.”
Joe Norman: So it’s very much an action-oriented, heuristic way of thinking about perception. What this really starts to draw out is that what we think of as perception is really a relational property. We often, again from this Western reductionistic canon, think, “Okay, we have a brain, and inside that brain we’re modeling the world out there.” But when you think about what it means for a branch to be graspable, well, now I’m invoking the structure of the organism itself as well, because there’s a relationship between the size of the branch and the size of a hand that can grip it. These are some of the ideas that JJ Gibson introduced, and they are so powerful, and really can affect the way you look at the interaction of organism and environment, and what it means for an organism to be structured, and what that means for the experience and the perception of the organism. This is also related to ideas of umwelt, like an animal or a species has a certain kind of life world that they inhabit, and that life world is with respect to the structure of the organism itself.
Joe Norman: Now, how this worked into my dissertation: I actually, in large part, included JJ Gibson in my dissertation as a nod to where I felt I fell short. I did a lot of psychophysical experimentation looking at perception of motion, with folks sitting and looking at a computer screen, having some frames flash, and asking them if they’re able to discern shapes, what they saw with respect to patterns of motion. I think I uncovered some really important things doing that. At the same time, there was an ever-present awareness that this is not like an ecological setting. There’s something much different about sitting in a testing room looking at a computer screen and answering questions.
Joe Norman: I will say, to do justice to that setting, that it is itself an ecological setting of a kind, but one needs to always maintain the awareness that it is a very specific, a very special setting, and not a general setting, and there are in fact features of more general settings that are important for understanding whether what I was discovering really matters for ecological perception. What I looked at was several things, but for instance, the way that certain visual patterns are multi-stable, so you can show the same pattern to either different people or the same person at different times, and what they perceive is different one time versus another.
Joe Norman: We looked at the dynamics of that. We looked at some of the older psychophysics literature, which frankly did some poor statistics and assumed some things that were not quite right, showed there are different kinds of visual information that people are using for different kinds of processes, and really showed that the response is quite a non-linear one. You can capture a lot of very different-seeming stimuli with a concise model. When I [inaudible 00:34:29] modeled a dynamical neural network, it showed that a very generic pattern of interaction among the neurons in the neural network, exposed to these patterns, looked a lot like, I’ll say very, very close to, what people perceive across these patterns. That connected to the gestalt psychology with respect to perceptual principles, and embodied those principles in something a little more hard-coded, a mathematical and computational model.
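[Editor’s note: this is not Joe’s dissertation model, just a textbook-style sketch of how a small dynamical network can produce multistable perception: two units representing rival percepts inhibit each other, and slow adaptation makes dominance switch back and forth. Every parameter here is an illustrative assumption.]

```python
import math

def squash(x):
    # Steep sigmoid centred at 0.5, a stand-in firing-rate nonlinearity.
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 0.5)))

dt, tau, tau_a = 0.001, 0.02, 1.5       # fast rate dynamics, slow adaptation
w_inh, g_adapt, drive = 1.2, 0.8, 1.0
r1, r2, a1, a2 = 0.6, 0.4, 0.0, 0.0     # slight initial bias toward percept 1

dominant = []
for _ in range(200_000):                # 200 time units of simulated dynamics
    dr1 = (-r1 + squash(drive - w_inh * r2 - a1)) / tau
    dr2 = (-r2 + squash(drive - w_inh * r1 - a2)) / tau
    da1 = (-a1 + g_adapt * r1) / tau_a
    da2 = (-a2 + g_adapt * r2) / tau_a
    r1, r2, a1, a2 = r1 + dt * dr1, r2 + dt * dr2, a1 + dt * da1, a2 + dt * da2
    dominant.append(1 if r1 > r2 else 2)

# The reported percept alternates even though the stimulus never changes.
switches = sum(1 for i in range(1, len(dominant)) if dominant[i] != dominant[i - 1])
print("perceptual switches:", switches)
```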
Jim Rutt: Got that. Very interesting, because my biggest single personal project is building a rudimentary conscious cognition embedded in an ecosystem, modeled on a white-tailed deer. I used Gibson as a significant part of my inspiration. I thought of what you were doing – if it’s true, we’ll someday maybe find out – as one of the building blocks on which these phenomena are built. I didn’t specify down at that level. I’m working at a higher level, and I assumed something like that is going on which delivers into the global workspace objects and a set of affordances which are actually discovered over time, relationships, motion, etc. So it was a right next-door neighbor of the work that I do. I said, “Wow, I wish I had read this when I first started working on it.” I might have built a little bit more down in that direction.
Joe Norman: That’s interesting. There are two directions that, if I had continued on that particular path, I was planning to pursue, and things just changed and I didn’t put any more time into them. One was indeed that: “Okay, let’s start embedding this in [inaudible 00:35:56] that are in an environment,” and exactly that, let’s start to see if they can discover affordances. The other came from the fact that I looked at these things so often that I became very good at controlling what I was perceiving. Most of the subjects experienced it as a passive reception, they either saw this or they saw that, whereas I could decide in real time what I was seeing.
Joe Norman: I started to run some experiments that showed that there were definite time scales over which this did happen, and therefore also time scales over which it didn’t, and they were different from the time scales of the perception itself. Just to say it quickly, if the motion in the patterns was too fast, you could still perceive the motion, but you could no longer control it, whereas if you slowed it down to a certain critical speed, you could start to actually control which pattern you were perceiving. I think that there are some deep connections between these two as well, with that question of perceptual learning, and how we discover affordances over time.
Jim Rutt: I can give you a great perceptual learning example. Ever since it happened to me about seven years ago, I’ve been extremely interested in perceptual learning. As a farmer myself, every spring I spend about seven to ten days eradicating invasive weeds that are trying to attack my fields, spring-specific invasives: coralberry, barberry, and autumn olive. That means seven to ten days of riding around four or five hours a day, making a real-time decision on whether something is one of those three species before you zap it. When I was near done with it, as I’d ride up the road to the nearby town, any invasive species in people’s fields just blinked at me. It was amazing. They just popped. It was an astounding, really strong example of perceptual learning. Since then, I’ve taken the perceptual learning literature much more seriously than I ever did before.
Joe Norman: That’s right. Once again, just to draw on my recent experience of beginning the homestead lifestyle: when I first got here, Jim, I didn’t even want to clip a branch. I was like, “What am I going to screw up if I clip this branch?” Over time, I’m developing the perception of seeing out in time a little further, where I can see, “If I let this thing go, where will it tend to go? If I clip it now, what will tend to happen around it? It’s gonna let more light in.” Things like that. I think that perceptual learning is really essential. It’s fascinating the way we have to step into the unknown, because you can’t know what it’s like to perceive something you haven’t perceived yet. You have to continue to put yourself into the situation over and over, where these serendipitous moments can happen, where something clicks inside of you. It’s certainly some kind of self-organizational process that can’t simply be top-down directed. But something snaps, and all of a sudden you can see something you couldn’t see before.
Jim Rutt: It even gives the higher-level systems a tool to use: once that perceptual learning has clicked in, you can operate much more rapidly in targeting your invasives.
Jim Rutt: Anyway, let’s flip back now to talking about complex systems. One of the things that did trigger some thoughts in my head as I was reading your dissertation is the relationship between complex systems science, and modeling and simulations. Some people say, “Hey, you really can’t do these probes on complex systems.” And at some level, they’re correct. But at other levels, they’re not. We do probes on complex systems with both models, and thinking through the implications of models, and sort of more quick and dirty with simulations. Have you given any thought to where model building and simulations fit into the explorations of complexity science?
Joe Norman: Yeah, absolutely. From an applied perspective, we’re really trying to solve a problem, to better understand a system that we need to interact with, that we’re trying to get something from. I think the number one utility of modeling is to make assumptions very explicit. If we want a computer simulation to run, we have to write down the assumptions that are being made so that it can unfold its behavior in time. Making those assumptions very explicit, and then, when we go back to observe the real natural system, determining whether we’ve made the right assumptions.
Joe Norman: So modeling and simulation is often considered in the paradigm of prediction. Build the model, run it forward in time, predict what’s going to happen. There are a lot of problems with that. It can work under certain conditions, but I think that’s a secondary kind of utility of modeling. Really, the primary utility is asking, “Did we get the assumptions right?” With agent-based modeling, I think one of the really powerful aspects is that because we’re able to make a large set of micro-assumptions, we can actually see the difference in the aggregate behavior between what unfolds from that agent-based model and what we might have thought would unfold if we did some more macro-level formal modeling.
Joe Norman: For example, say we’re modeling a hospital, and looking at how infections might propagate through a hospital. There’s this issue where you go to the hospital and you’re trying to get better, but you might pick up an infection while you’re there, because there’s a lot of other sick people around, and whatnot. Let’s say we’re modeling that. If we want to do an agent-based model, we could say, “Okay, we can model a surface. We can make some assumptions about if someone touches the surface and they have this bacteria on their hand, maybe there’s some probability that it transfers to that surface. Now someone else comes and touches that surface.” So we can model really at that very micro-level, where our assumptions are actually probably pretty good, relative to our macro-assumptions. Whereas if we wanted to do some high-level mathematical modeling of that system, we might assume something, for instance, that some set of events are independent. Like my touching a surface and your touching a surface are independent events, when they might actually have interdependencies, and therefore follow different kinds of distributions.
Joe Norman: That’s just to give a little bit of crystallization to this general idea that, really, we are the worst at understanding our own assumptions and identifying where they’re faulty, and modeling and simulation can allow us to draw them out, force them to become explicit, and give us an opportunity to adjust them as we observe the real system, the natural system.
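[Editor’s note: a minimal sketch of the surface-contact micro-assumptions Joe describes. Agents repeatedly touch shared surfaces, depositing and picking up contamination with some probability; the shared surface state is what makes “my touch” and “your touch” non-independent. All counts and rates are illustrative assumptions.]

```python
import random

random.seed(1)
N_AGENTS, N_SURFACES, STEPS = 50, 20, 500
P_DEPOSIT, P_PICKUP, P_RECOVER = 0.3, 0.2, 0.01

agent_infected = [False] * N_AGENTS
agent_infected[0] = True                      # one colonized patient to start
surface_contaminated = [False] * N_SURFACES

for t in range(STEPS):
    for a in range(N_AGENTS):
        s = random.randrange(N_SURFACES)      # each agent touches one random surface
        if agent_infected[a] and random.random() < P_DEPOSIT:
            surface_contaminated[s] = True    # deposit bacteria on the surface
        if surface_contaminated[s] and random.random() < P_PICKUP:
            agent_infected[a] = True          # pick bacteria up from the surface
        if agent_infected[a] and random.random() < P_RECOVER:
            agent_infected[a] = False         # occasional recovery / decolonization

print("infected agents:", sum(agent_infected),
      "| contaminated surfaces:", sum(surface_contaminated))
```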
Jim Rutt: I like that way of thinking. Let me add one additional thing – which I often do when people are talking about models, agent-based models, or physical simulations – which is, on one side, it helps you clarify your assumptions, and on the second, as you’ve pointed out, one should not believe any given trajectory. Right? However, I do believe that once a model has been well developed, and the assumptions vetted by experts and by the best available evidence, one can often learn some interesting things from ensemble statistics of large numbers of runs. For instance, we can find out whether we’re in Mediocristan or Extremistan, in the NNT-type language. We can see how much variance there is between the runs. The beauty of agent-based models is, if they’re not too big, you can run them a million times. Right? You can really get a sense of what statistical distribution of outcomes we’re likely to see with this set of assumptions. I’d offer that as something that people often miss when they think about what you can do with simulations or models.
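[Editor’s note: a generic sketch of the ensemble workflow Jim describes: run a stochastic model many times and study the distribution of outcomes rather than a single trajectory. The two toy “models” are assumptions chosen only to contrast clustered, Mediocristan-like outcomes with heavier-tailed, Extremistan-like ones.]

```python
import math
import random

def additive_run(steps=100):
    # Sum of small independent shocks: outcomes cluster tightly.
    return sum(random.gauss(0, 1) for _ in range(steps))

def multiplicative_run(steps=100):
    # Product of proportional shocks: occasional very large outcomes.
    x = 1.0
    for _ in range(steps):
        x *= math.exp(random.gauss(0, 0.2))
    return x

def ensemble_report(model, n_runs=20_000):
    outcomes = sorted(model() for _ in range(n_runs))
    median = outcomes[n_runs // 2]
    p99 = outcomes[int(n_runs * 0.99)]
    return {"median": round(median, 2), "p99": round(p99, 2),
            "max": round(outcomes[-1], 2),
            "max/p99": round(outcomes[-1] / p99, 2)}

for name, model in [("additive", additive_run), ("multiplicative", multiplicative_run)]:
    print(name, ensemble_report(model))
```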
Joe Norman: That’s right. I would never take a model too, too seriously that didn’t have some stochasticity involved, so that you run it many times over and look at the ensemble. That said, there is always the danger that even across those who build models independently, in principle, there are subtle ways for non-independence of assumptions to arise. You might think of something like climate science, where you have many different teams, many different individuals building models, and some of the ways they’re structured, some of the factors they incorporate, are different. There’s some independence in how they’re instantiated. Nevertheless, there’s a culture, I almost said a cultural climate, but maybe that’s a little confusing, oh well, that can inject more subtle non-independence into the assumption set. Even when we do have a nice model, even if it’s very robust, I’m always, always, always hypercautious of committing too much to any action that’s coming solely out of a modeling prediction, even on an ensemble.
Joe Norman: But I do agree with your main point, that yeah, we’re not talking about running the deterministic model forward in time and thinking that’s how the system goes. No, the real world is always messier than that, and ensembles are the right way to think. Interestingly, and you mentioned NNT, Nassim, we often think of running ensembles so that we can think about the distribution of possibilities into the future. But we actually have a lot of uncertainty about the past, as well. Thinking about the past in terms of ensembles actually makes a lot of sense too.
Jim Rutt: We only have one past.
Joe Norman: We only have one, but this is about knowledge of the past. It’s actually very difficult to transmit with perfect fidelity what actually happened in any given situation. In fact, if you take seriously the idea of, just think of, a simple fixed-point attractor: if more than one path converges, any kind of many-to-one mapping, then in principle we can’t say which past led to it. Now we might have some nice evidence where we can reasonably say, but as a general point, it’s a real problem with understanding where we are and where we came from.
Jim Rutt: Interesting. Yeah, you could run the simulation backwards, and it won’t give you the answer, but it’ll give you a set of things, some of which could be close to the answer.
Joe Norman: Right, exactly.
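[Editor’s note: a small illustration of the many-to-one point, using the logistic map purely as a stand-in dynamical system; nothing here comes from a real model in the conversation. Because the map is two-to-one, “running it backwards” from an observed present yields a whole set of candidate pasts, exactly as Jim describes.]

```python
import math

def forward(x):
    # Logistic map at r = 4: two different pasts can map to the same present.
    return 4.0 * x * (1.0 - x)

def preimages(y):
    # Solve y = 4x(1 - x): x = (1 ± sqrt(1 - y)) / 2, giving two candidate pasts.
    if not 0.0 <= y <= 1.0:
        return []
    r = math.sqrt(1.0 - y)
    return [(1.0 - r) / 2.0, (1.0 + r) / 2.0]

x0 = 0.123                                   # the "true" past, known only to us
observed = forward(forward(forward(x0)))     # the present we actually get to see

candidates = [observed]
for _ in range(3):                           # three backward steps: up to 2**3 pasts
    candidates = [p for y in candidates for p in preimages(y)]

print("true past      :", x0)
print("candidate pasts:", sorted(round(c, 6) for c in candidates))
```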
Jim Rutt: That is actually pretty interesting. What do you think of climate science? I know you’ve written a little bit. I saw your one-pager that you wrote with some other folks. What are your thoughts about climate science, how we should think about it, and how we should think about the fact that, as we know, a fair amount of the argument is based on modeling? Where is your head, as an applied complexity guy, around climate science?
Joe Norman: You know, the letter we wrote was really a companion piece to our larger piece on what we call a non-naïve precautionary principle. With respect to climate, we’re making a point that I think is essential for someone to be making, because we have this polarized discussion with the believers and the deniers. The believers say, “Here’s what the models are saying. Here’s the scientific consensus. So here’s why we should be worried about climate.” The deniers are saying, “Either this is a funded agenda, or the models are bunk, so we shouldn’t be worried about it.” The interesting thing is we all can agree that the climate is important. If it goes outside a range or becomes too volatile, then that’s very bad for living systems, including us and all of the life support systems we depend on.
Joe Norman: So the implication of having poor models that do a bad job of predicting is not at all, “Don’t worry about it then.” It’s actually the opposite. It says, “If we really have trouble predicting this system, and we don’t know what’s going to happen, then we should probably exercise an abundance of precaution around this system, and we should do what we can to mitigate our contribution to the unpredictability.” In this case, that contribution could be a large, coherent expulsion of CO2. We don’t need to understand the exact trajectory and dynamics that will unfold to know that if you have a complex system, if it’s large-scale in this case, if it’s essential for life, and you perturb it with a large-scale force, say a lot of CO2 coming out at once, then you’re easily contributing to the destabilization of what is likely a meta-stable system.
Joe Norman: That’s our precautionary principle approach to the climate problem. The more uncertainty we have around the models, the more we need to move forward with an abundance of caution and not assume that, “Oh well, if the models aren’t great at predicting, then we’re fine. Don’t worry about it.”
Jim Rutt: I would also add, this [inaudible 00:47:33] I always look for, is get away from just data and look at causality. There is a base causality around climate which everybody needs to keep in mind, which is it’s an absolute, easily provable physical chemical finding that if you build a system and add greenhouse gases into it, and radiate light into that system and have it reflect back as infrared, more energy will be stored in the system that has more greenhouse gases. That’s a very important base fact which some of the deniers don’t even seem to know. Right?
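[Editor’s note: a back-of-the-envelope version of the “base causality” Jim describes, using the widely cited simplified CO2 forcing approximation ΔF ≈ 5.35 ln(C/C0) W/m². The sensitivity parameter below is an illustrative assumption; its value is the genuinely uncertain quantity, not the sign of the effect.]

```python
import math

C0 = 280.0         # approximate pre-industrial CO2 concentration, ppm
C = 420.0          # roughly present-day CO2 concentration, ppm
SENSITIVITY = 0.8  # assumed equilibrium response, K per (W/m^2); uncertain in reality

forcing = 5.35 * math.log(C / C0)     # extra retained energy flux, W/m^2
delta_t = SENSITIVITY * forcing       # implied equilibrium warming, K

print(f"radiative forcing ~ {forcing:.2f} W/m^2")
print(f"implied equilibrium warming ~ {delta_t:.2f} K")
```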
Joe Norman: That is true. The question then becomes, from my perspective, what are the other agents in the system that might then start feeding back and alter that particular trajectory? For instance, Freeman Dyson has talked about, “The Earth is greening. You have all this CO2 in the atmosphere, that means that plants grow more easily, so the Earth is greening.” You can have some kind of feedback effects. He’s really arguing that, once again, “It’s no problem. It will kind of stabilize out.” I don’t see any reason to believe that a priori. But certainly, yes, you have some pretty sound science there that says, “Given these assumptions, yes, you will have more energy in the system, and that means expect to enter a different regime than you’re currently in.” Which, for us, is probably not a good thing.
Jim Rutt: Now to your point, there are many, many side loops. For instance, in climate models, probably the least well-constrained element in the simulations is cloud formation with increased water vapor in the atmosphere. Right? Why is that important? Because clouds are far more reflective of light than the Earth or the water is. If cloud formation increases as water vapor cycles operate faster and faster in the atmosphere, as temperature starts to rise a little bit, it is possible that cloud formation could be the damper that essentially constrains the system. But the best models we have show that, while it makes a significant difference in the rate of run-up, it is nowhere near enough to stop the run-up, at least in the models I’m familiar with.
Jim Rutt: The second biggest one is the absorption of CO2 into the oceans. Now that’s only a short-term fix, because the long-term problem is, let’s assume somehow magically – which does not appear to be correct from the physical chemistry – that all the excess CO2 is absorbed in the ocean. The end result is an acidic ocean with all kinds of dire effects. Right?
Joe Norman: I think, for whatever reason, one of the more downplayed aspects of the danger is the acidification of the oceans, and what that means for whole Earth ecosystems.
Jim Rutt: Then of course, in this particular case – we’ll talk about the precautionary principle in general later – but in the area of climate, there’s also side loops, and we can see it in the geological record that it’s happened, where we could move from worst case scenario, five degrees C by 2100, to 15 degrees C by 2100 if we get into a positive feedback loop.
Jim Rutt: For instance, there are methane ices in shallow waters all around the world. Methane is kind of an odd gas in that it solidifies into an ice at surprisingly low pressure. Of course, it’s a phase boundary with pressure and temperature. If the water temperatures rise in those shallow waters enough to let loose those methane ices, and there’s huge quantities of them, much more than the amount of methane known to be available in natural gas wells, that could cause a very rapid run-up. Even if it’s a small rapid run-up at first, if it turns out there’s lots of these methane ices at shallow depths, and they have not been fully mapped, you could get a positive feedback loop where a sharp but relatively small rise in temperature causes more methane ices to dissociate into the water and bubble up into the air. We could rapidly spin up to something like a 15 degree C, roughly 27 degree F, increase by 2100, which would be truly disastrous.
Jim Rutt: The nature of our system with the side loops that could drive positive feedback is a particular reason to consider the precautionary principle.
Joe Norman: Yeah, absolutely. Obviously, there are potential, as you put them, side loops that have negative feedback, but really, the worry is the loops we’re not sure about. You’ve mentioned this one particular one. How many have we not identified yet? How many potential positive feedback loops, points of instability, are lurking that we know nothing about right now? We don’t want to find out the hard way.
Jim Rutt: We do know from the geological record that it has happened. From the Greenland ice cores, we have found periods where temperatures have jumped 15 C in 50 years.
Joe Norman: That would be catastrophic.
Jim Rutt: Interesting. Well let’s talk about the precautionary principle. I read carefully your paper on GMOs.
Joe Norman: I will say it’s a paper on precautionary principle, and we use the application of GMOs as an example case.
Jim Rutt: Yeah, and you use nuclear power as a counter-example case. I will say I walked into it with a farmer’s bias. “It sounds like bullshit.” Right? But on the other hand, by the time I was finished reading it, I said, “Hmm. Maybe they’re right.” But I’d like to probe into that one a little bit, and see if we can clarify my thinking and maybe yours. Why don’t you tell the story, and use it from the GMO perspective, and maybe you want to compare and contrast with nuclear, on the story you were telling in that paper?
Joe Norman: We’ve been modifying organisms for a long time … frankly, a corporate talking point is, “We’ve been modifying organisms forever.” Which is true – we’ve been doing it from an artificial selection standpoint. Now, what is happening with GMO is this kind of top-down design thinking where we’re saying, “Okay, we have this ability to insert transgenes into organisms and elicit desirable properties, and then use those as cultivars in agriculture.” There are actually quite a few problems here, but one of the fundamental issues is that the designs these folks are imagining – this whole-system reimagining of every facet of agriculture – make a lot of simplifying assumptions about what the complexities of agriculture actually are, the way complex systems work, and how they push back on things.
Joe Norman: Just as a very well-known example, herbicide resistance is often engineered into these organisms. The herbicide is then used heavily on those crops so that the crops live and the weeds die – and lo and behold, the weeds evolve herbicide resistance. There are assumptions that weren’t built into the top-down design that they’re now reacting to, things they never imagined happening. There are other, more specific issues, like the claim that pesticide usage levels are way down, which doesn’t account for the fact that all of these organisms are producing Bt pesticides inside of them, inside of each cell, endogenously producing these proteins. The idea that the count of applications is down doesn’t represent what’s actually in there, and that has the specific problem that you can’t wash it off. And other problems as well: what’s happening to the pollinator populations? These proteins are designed to serve as insecticides for these crops.
Joe Norman: Those are some specific issues. The general issue is that we are tinkering, we are playing, we are trying to design systems within ecosystems, where runaway cascade effects can happen – and do happen. We’re now doing what you could call long-range transport of features of biology, of protein synthesis, that can have massive effects on the ecosystems they’re inserted into.
Joe Norman: As we address in the paper, the idea isn’t that the activity of modifying the organism is in and of itself dangerous or risky. The issue is that, without any sense of the potential cascade effects, we are releasing these into the environment – and not on a small scale. We’re releasing novel organisms into the environment in a large-scale, synchronized fashion. When we’re thinking about the potential for cascades – ecological cascades, viral cascades, epidemic cascades – think of the Irish potato famine as an example of where a cascade can really have an effect on crops. If we move into a space with these kinds of effects, there’s no clear upper bound at which they’ll stop. Because you have this potential for contagion, for cascade, you end up with fat-tailed distributions as opposed to thin-tailed ones, meaning very large events can happen, and those events are almost certainly not in our favor. There’s no obvious upper bound at which, if something like that did start to unfold, there would be a circuit breaker, you might call it, or a boundary that causes the process to stop.
Joe Norman: The idea isn’t to predict one feature or another as the major problem, but to recognize that we’re playing in a space with living organisms that have all of the features that contribute to cascades and fat tails. We’re now going above and beyond the statistical search space of artificial selection – selecting this tomato over that tomato and growing from that seed. We’re now saying, “I think tomatoes should be resistant to this pest, and they should generate this pesticide so that this pest can’t get them,” injecting that across a huge number of tomatoes that are all very genetically similar, and projecting it actively worldwide. Replacing other, more time-tested systems with these kinds of systems is an added risk on top of that. The point is, we’re definitely in a systemic regime where fat tails exist. We definitely don’t have the analytic or modeling and simulation tools to exhaustively explore the possible trajectories of this. And while we’re not treating this as a risk of environmental exposure, we are exposing the environment to it in a massive way.
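A minimal sketch of the thin-tail versus fat-tail distinction (the distributions and parameters here are illustrative assumptions, not anything from the paper): drawing the same number of samples from a Gaussian and from a Pareto distribution, the largest Gaussian event stays within a few standard deviations of typical, while the largest Pareto event can dwarf everything else.

import random

random.seed(0)  # for repeatability
N = 100_000

# Thin tail: Gaussian around 1.0. Fat tail: Pareto with tail exponent alpha = 1.5 (hypothetical).
gaussian_max = max(random.gauss(1.0, 1.0) for _ in range(N))
pareto_max = max(random.paretovariate(1.5) for _ in range(N))

print("largest of", N, "Gaussian draws:", round(gaussian_max, 1))  # a few units above the mean
print("largest of", N, "Pareto draws:  ", round(pareto_max, 1))    # can be hundreds of times a typical draw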
Joe Norman: So you start to stack up these different features, and you see that this is a situation where we’re imagining these benefits. I hate the corporate talking points because they’re frankly unethical: “Oh, don’t you want to feed the world? This is going to feed the world and your way won’t, for whatever reason” – even though that’s obviously a straw man. There are a lot of different approaches that don’t pose the same risk and that could indeed feed the world. You’re stacking all of these risks up and you’re pointing to these benefits. The fact of the matter is that when you have these potential downsides, there’s no finite amount of benefit you can articulate and tack on that justifies taking the risk of collapsing the ecosystems we all depend on – on which agriculture depends, on which we depend for so many of our life support systems.
Jim Rutt: This is where I want to push back a little bit. We were quickly able to describe some very serious ecosystem risks around climate: positive feedback loops, historical records, et cetera. We also talked about negative feedback loops that might ameliorate it. When we think about GMOs, I have yet to see a compelling causal story for how a GMO plant could cause widespread devastation in the ecosystem. Let’s take one example.
Joe Norman: Okay, I’ll give you an example.
Jim Rutt: Let’s use cotton. It’s the number one GMO plant, or even if you have another one.
Joe Norman: We could use corn, cotton, soy. These are the ones that are mostly monocropped. Obviously, at the level of human nourishment – the ability to eat – you have the risk of massive amounts of monoculture coming out of this, unprecedented levels of monoculture. Monoculture is a risk regardless of GMO, but GMO really exacerbates the degree of monoculture we’re talking about here – we’re talking globally. If you did have some kind of viral event, or any kind of pathogen event, it can now spread worldwide. The effect on our food system, our food supply, could be massive. Then imagine all the other interconnected, interdependent human systems once the food system starts to be stressed that hard. Okay, so there’s one.
Jim Rutt: Let’s stop there, because what you just described is the danger of monoculture, not the danger of GMO. We’ve had horrendous monoculture going back a long way. As we both probably know, 95% of our bananas are all clones of one banana. It’s astounding. And apples were once far more of a monoculture than they are now.
Joe Norman: I’ve never [inaudible 00:59:53] mostly eaten apples nor mostly eaten bananas. But more and more, we are mostly eating corn, mostly eating soy, and those are the crops we’re talking about. GMO in this case is an enabler of a degree of monoculture that didn’t exist before.
Jim Rutt: At least early on, before the patents expire, et cetera, it may actually drive diversity if people take different GMO approaches. I would say you’re arguing against monoculture, which I strongly agree with – we could have a wonderful conversation about that. It may be that GMO is an accelerant, but it is not … I would say, in this [inaudible 01:00:27] first argument, not the fundamental issue.
Joe Norman: You say it could be an accelerant, but as we know well from complex systems science, more is different. When you speed something up that much, you can quickly end up in a qualitatively different regime. That’s what we’re positing is happening here. So yes, you can get there other ways, but getting there faster implies much more risk.
Jim Rutt: One could throw out a counter way to regulate, right? Rather than being Europe and banning GMOs, we could have an anti-monoculture rule. We could require that no more than 20% of any grain crop be within some Hamming distance x, at the DNA level, of the rest of the crop. Forced diversity, rather than banning the technology.
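A hypothetical sketch of one way such a rule could be checked computationally – the 20% share cap, the distance cutoff, and the short marker sequences are all made-up parameters for illustration, and this is only one reading of the proposed rule, not a real regulatory scheme:

def hamming(a, b):
    """Number of positions at which two equal-length DNA marker sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def diversity_ok(varieties, max_share=0.20, min_distance=3):
    """varieties maps a variety's marker sequence to its share of planted acreage.
    Fail if any cluster of near-identical varieties (pairwise Hamming distance
    below min_distance, counting the variety itself) exceeds max_share of the crop."""
    for seq in varieties:
        cluster_share = sum(share for other, share in varieties.items()
                            if hamming(seq, other) < min_distance)
        if cluster_share > max_share:
            return False
    return True

# Three nearly identical lines dominating acreage would violate the rule:
print(diversity_ok({"ACGTACGT": 0.50, "ACGTACGA": 0.30, "TTGCACCA": 0.20}))  # False
# Five genetically distant lines at 20% each would pass:
print(diversity_ok({"ACGTACGT": 0.20, "TGCATGCA": 0.20, "AAAACCCC": 0.20,
                    "GGGGTTTT": 0.20, "CTCTGAGA": 0.20}))  # True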
Joe Norman: You could. But think about a realistic application of that. Let’s say we wanted that as a policy – what would happen? The value proposition to the global corporations would drop rapidly, because their whole value proposition really depends on this: it’s actually hard to develop a GMO line, so every once in a while they hit on something that makes sense for their profit model, and they deploy it in sync. You would experience a lot of pushback on that policy via regulatory capture by these large corporations, just in terms of real-world implementation of a policy like that. But let’s imagine that somehow we could get such a policy implemented. I agree that it would be a great mitigant. Would it mitigate all of the risks of GMO? No, not necessarily.
Joe Norman: We have other issues, like cross-breeding. Things are promiscuous in nature, from bacteria to plants, and there is very good evidence of these traits not staying confined to the crops themselves, but actually cross-breeding into wild grasses and the like. For instance, there’s a very interesting study in Switzerland, which has had a total ban on the cultivation or import of GMOs from day zero. What you find are grasses along the train tracks that have cross-bred and carry these traits. Why along the train tracks? Because there’s some kind of pollination event happening via the trains themselves as they travel. There are all of these kinds of dynamics we’re not factoring in about how things spread. And when we’re talking about spreading these traits – what traits? There’s some, frankly, propaganda around, “They could be traits for drought tolerance, or traits for vitamin A so that people don’t go blind,” and so on. But in reality, the only traits that have found any utility are herbicide resistance and insecticide production.
Joe Norman: We could imagine a different universe in which some other trait is discovered. Now we can go back to the … this is a little different from the no-free-lunch we were talking about before, but there’s no free lunch here either: you’re going to engineer in drought tolerance, but something’s got to give, so what does that mean? A lot of this stuff is more fantastical than practical. In essence, we’re talking about systems that can do many things. Monocropping is one feature. Cross-breeding, and spreading and amplifying certain proteins that might serve as insecticides throughout an ecosystem, present another source of systemic risk. We need pollinators. I don’t really buy the idea that we can replace all the pollinators with little robots and everything will be fine. We need these ecosystem cyclers and connector nodes, and pollinators are a huge one of those. We now also have wild plants producing insecticide in ways no other plant did before. Plants in general often do have insecticidal qualities, but you’re presenting a very novel exposure, in short order, in a way that doesn’t respect something we talked about before: the coevolutionary dynamics of ecosystems.
Joe Norman: Ecosystems generally play nice precisely because there’s so much coevolution happening. As an example of a non-GMO issue we all recognize that is related to this: invasive species. Why are invasive species invasive? You were talking about those weeds. Well, it’s because they’re now in a new context they weren’t in before, so they lack the coevolutionary history of that ecosystem. There isn’t the proper push and pull that develops over time. How does proper push and pull develop? Through coevolution and, frankly, small local extinction events when coevolution doesn’t suffice – when something ends up in a kind of Malthusian extinction, undercutting its own resource base or the assumptions of its survival. Without this coevolutionary process, we’re really playing with fire here. And it’s not just monoculture; it’s systemic perturbation of ecosystem functioning.
Jim Rutt: That’s possible. But let’s say there’s a hop to a specific grass in a specific region – back to your paper – wouldn’t that fall into the limited category? Say it’s a specific grass that lives at a certain elevation in the highlands of Central Europe, in Switzerland. Right?
Joe Norman: If that is what happens, yes. But the issue is that we can’t exhaustively list all the ifs that might occur, and we are certainly in a space where massive spread beyond such bounds is possible. There is every possibility in the world that some event could happen and then be contained and bounded. Of course, this is inherent in the idea of a power-law distribution: there are many, many, many small events. The problem is that every once in a while, you have a very, very big event. Those big events are what we’re worried about, not the many small ones.
Jim Rutt: I did like that part of your argument, and I will say it will cause me to think about this more. But I would say I’m not convinced, because I’ll put the challenge back to you and the people you work with: come up with a causal mechanism by which plant-based pests could go out of control on a worldwide basis, as opposed to perhaps doing some damage to a local ecosystem around whatever species the trait hopped into. And we know that no plant species is ubiquitous around the world.
Joe Norman: The thing that’s changing that is our agriculture and our international transport. More and more, we are getting species that are very widespread – crops that are very widespread with very little genetic diversity. I appreciate the pushback, frankly; I think it’s always good to sharpen the sword and question one’s assumptions. Another relevant piece that comes to mind with respect to the precautionary principle, as we’re presenting it, is: what is the supposed problem this innovation or intervention is addressing, and are there other ways to address it that present a lot less uncertainty? In the case of agriculture, there is all the promise in the world in other approaches that not only don’t present the risks GMO presents, but also have a lot of positive externalities that mitigate other risks we face – around soil erosion and things of that nature, for instance. That feeds back into carbon sequestration and all these other issues. It’s not only that there are risks; it’s that we have alternatives that don’t present the same risks.
Jim Rutt: Yeah, I would agree with that. That would be a wonderful discussion. At some point in the future, I’m going to do a whole month on forward thinking agriculture. Maybe I’ll have you back and we can talk just about this topic.
Jim Rutt: Unfortunately Joe, we’re out of time. This has been a wonderful conversation. It was everything I was hoping it would be. I’d really like to thank you for being on the Jim Rutt Show.
Joe Norman: I want to thank you for inviting me. This has been a lot of fun. No, I always appreciate talking to you, Jim, and I hope we talk soon, on the show or otherwise.
Jim Rutt: Thanks Joe, it’s been great. We’ll talk soon.
Jim Rutt: Production services and audio editing by Staunton Media Lab. Music by Tom Muller at modernspacemusic.com.