The following is a rough transcript which has not been revised by The Jim Rutt Show or by Dennis Waters. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Dennis Waters. Dennis received his PhD from Binghamton University in 1990. He then became a publishing entrepreneur, founding technical news services like genomeweb.com. After retiring, Dennis continued his PhD research on how one-dimensional patterns of DNA, language, and code guide the three-dimensional world. He is currently a visiting scientist at Rutgers University. Welcome, Dennis.
Dennis: Thank you, Jim. It is a pleasure to be here.
Jim: It’s amazing to reconnect after God knows how many years. Dennis and I had some interactions when he was a publisher in a domain I worked in, which was, I think, satellite communications. Was that what your newsletter was about? I don’t remember.
Dennis: Yeah, we were doing satellite… We were doing, this was back in, well, I like to say this is in the mid-’80s. I don’t want to say it’s like 35 years ago. Let’s say it’s the mid-’80s. Yeah.
Jim: No, we already had fire then and we were working on electricity, right?
Dennis: Yeah, we were pushing the boundaries of figuring out this idea that you could send data over wireless communication, which was kind of a crazy thing at the time.
Jim: Yeah, but I built my second company on that idea, a company called First Call, and we managed to get error-free data over a one-way broadcast. People said, “How’d you do that?” We came up with some very clever tricks and it worked damn well. It was amazing. I mean, I hadn’t talked to Dennis since, and I happened to stumble across him, I guess it was on the SFI Facebook page. Was that where it was?
Dennis: Right, right. I was doing a little promotion of my new book and there it was and there you were.
Jim: Yeah. I said, “Let me take a quick look at this thing. Dennis was always a smart fella. He maybe has something interesting. Who the fuck knows, right?” Then, I took a quick look at the book and I go, “Holy shit! This is exactly the kind of stuff I’m interested in.” I invited Dennis on to the show.
Jim: Today, we’re going to talk mostly about his book, which is titled Behavior and Culture in One Dimension: Sequences, Affordances, and the Evolution of Complexity. My regular listeners know there’s some Jim Rutt bait there for sure. Affordances and the evolution of complexity. The new thing here, and I got to say, I have never actually quite seen this approach before, is this concept that sequences are a fundamentally different thing in the world and have had a gigantic impact on the evolution of our universe. Is that fair to say?
Dennis: I think that’s fair. Sure. The idea is that most of what we see as complex, evolution-derived behavior is the result of coordination and orchestration by one-dimensional patterns. How do you get three-dimensional behavior out of one-dimensional patterns? That’s kind of the big question.
Jim: In fact, I would say, next time you rewrite this thing, it needs to be a little bit more bold and say not three-dimensional patterns, but four-dimensional patterns…
Dennis: Fair enough.
Jim: … because the dynamics are also implied by the sequences.
Dennis: Oh, yes. Oh, yes.
Jim: We obviously live in a four-dimensional spacetime, and the evolution of time and space is really what it’s all about. I’d say give yourself credit and add one more dimension. I should add, for people who are interested in the book, this thing is amazingly broad. It goes from biochemistry to anthropology to the evolution of language, the origin of life and politics, bad actors in the army, and all kinds of stuff. It’s 221 pages long, but it is so full of stuff. I mean, I was just savoring it. I was reading it very, very slowly. It’s extremely enjoyable. It’s also, because of your background as a journalist, really well written.
Dennis: Oh, thank you.
Jim: Very, very nice. I should also add for the listeners out there, as best I can tell, it doesn’t really require any prerequisites in anything. He does a very wonderful job of explaining what’s needed along the way, including some fairly deep stuff. I must note, one of my little touchstones for a truly interesting book is whether it has truly interesting footnotes.
Jim: Don’t skip the footnotes. It’s sometimes tempting to do so, but in this case, I would say each chapter probably has at least three or four footnotes that are well worth reading, are very clear, and really do add a fair amount to the story.
Dennis: Well, writing a book that is as relentlessly interdisciplinary as this one has all kinds of pitfalls attached to it: the question of how much stuff you leave in the main text, how much you push down into the footnotes, and how, shall we say, to make a lot of fairly technical topics accessible to people who are not technical, or at least not technical in a particular field.
Dennis: You might have someone who is a linguist but doesn’t know anything about molecular genetics read the book, and you’ve got to make it accessible to them, but then you’ve got somebody who’s a molecular geneticist reading the book, who maybe doesn’t know that much about linguistics. You’ve got to make the linguistic part accessible to them. It’s a challenge. As I went along, more and more stuff got pushed into the footnotes. There you have it.
Jim: Yeah. Before we jump in, talk a little bit about kind of the difficulty, the institutional difficulty of interdisciplinary research. You quote David Hull, “Officially, we’re all supposed to value interdisciplinary research but in reality, just about every feature of academia frustrates genuinely interdisciplinary work. Those of us who are engaged in it, are the last hired and the first fired.”
Dennis: Well, that’s David Hull, who was one of the great philosophers of biology and one of the great inspirations for this work. The problem with interdisciplinary work in the academy, I think, has been well stated by him in the sense that, as you well know, the academy is pretty well siloed and it’s difficult for people to break out of the silos, despite the good intentions of faculty, administrators, and that kind of thing.
Dennis: I had the luxury of being in a fairly eclectic department when I was pursuing my PhD, and I had a PhD advisor who was wide open to this kind of thing. Also, because I’ve been an independent scholar, as we’re called, as opposed to having an academic affiliation for the past several decades, it’s given me a little more freedom to do this kind of thing. I’m not sure even I could have done it if I were in an academic setting.
Jim: It’d have been damn difficult. In fact, I think, in retrospect, if there had been more room for interdisciplinary research, I might have actually gone forward and done graduate school like I originally intended to. But as I got to learn... I came from a working-class background. I didn’t really know very many college-educated people other than school teachers. When I got into college and kind of saw it, I thought, “What the hell? This is what these people do? Fuck that,” right?
Jim: Then, late in life, I kind of reconnected with my academic interests at the Santa Fe Institute, which is kind of relentlessly cross-disciplinary, even transdisciplinary. If such an approach to research had existed in 1975, I think I definitely would have hopped into it. There’s bits of it out there, but not as much as people wish there was, because I …
Dennis: That’s for sure.
Jim: … was involved in science governance on various boards. They all talk about it, but they don’t do it nearly as much as they talk about it. You were very, very fortunate to have found an advisor who let you get away with this, basically.
Jim: Anyway, let’s get into it. The first item in the book that I highlighted is a really interesting thought. In some senses, it’s the biggest thought in the whole thing: that on Earth before life, and as far as we know in the universe before life, there were no sequences, at least not sequences in the sense that you’re talking about.
Dennis: Well, if you think about the prebiotic Earth, or really any of the thousands upon thousands of exoplanets that have now been discovered, that’s probably the case on many of those. But on the prebiotic Earth, there was a lot going on, and most everything that was going on is stuff that we would recognize as geochemistry, stuff that was obeying physical law. Explaining what was going on was not that difficult. There was a lot of complexity, and if you really wanted to measure it and model it and all that kind of stuff, it would be difficult, but the stuff that was going on was readily understandable in terms of geophysics and that kind of thing.
Dennis: Then, if you turn around and look at what we have today, we’re in this environment, a cultural environment and a social environment and the technological environment of humans, and also a natural environment, and just about everything that we’re looking at there, the trees, the grass, the birds, the squirrels, the lichens, all of it is being orchestrated to some extent by sequences: sequences of DNA and RNA and protein in the case of the living world, and sequences of speech and of text and of computer code, zeros and ones, in our technological civilization.
Dennis: This is the big change, I think, that has taken place since the prebiotic Earth: we have become sequentialized. The surface of the Earth has been taken over and colonized by sequences, and to understand how matter behaves under the influence of sequences is not the same as understanding how matter behaves under the laws of physics.
Jim: Yeah, I think this is very, very useful. It actually helps buttress an argument that I occasionally make at SFI, which is that the line between life and non-life is a fairly bright line, that the two are qualitatively different. I get a lot of pushback on that. I think that your argument about sequences helps buttress it: a world that doesn’t operate from sequences, with the capabilities that we’ll start to talk about, works in a different way. The behaviors are different, the statistics are different, everything is different. This was really a fundamental qualitative change in the evolution of the universe.
Dennis: Yeah, and I think right now, in the current moment, there’s a huge amount of discussion about UFOs and space aliens. All that sort of thing has come back into the public consciousness in a way that it’s been absent for a while, and with all the stuff that I’m hearing and reading and thinking about, what always comes to mind is, well, where are the sequences? What kind of sequences are there that are orchestrating this? Because these kinds of things must obey the laws of physics, but we know that the laws of physics can be bent and constrained by sequences to create complex behavior.
Dennis: If we want to, I don’t know if there are aliens and I’m not even going to speculate on that but if there were, I bet they have sequences and I would bet that they are the result of a process of natural selection, just like we are.
Jim: Yeah, that’s a good framing, because you point out that this idea of sequences in no way suspends the laws of physics. These are constraints that sit on top of the laws of physics, which is a very interesting concept. You also make the point that these constraints are not causal; rather, they tend to be probabilistic. Maybe you could talk about that concept a little bit.
Dennis: Well, this goes back to Leibniz, actually, who talked about things that incline but do not necessitate. When you think of physics, you’re really thinking about things that are either certain or impossible. Everything is either certain, it’s going to happen, or impossible, it’s not going to happen. But our world, the world that we live in, and the living world from which we arose, lives in between those two places.
Dennis: We have things that are almost certain, but not quite, and things that are almost impossible, but not quite. The need for understanding constraints, as a way of harnessing or orchestrating or choreographing the way physical law works, is what we’re after here. The language and sequences of DNA in the cell operate as boundary conditions on the behavior of the underlying matter.
Jim: It’s interesting. It’s actually a very congruent concept with Brian Arthur’s ideas about technology. In fact, I’m going to be interviewing him in about two weeks, on my last episode this month, about his book, The Nature of Technology. He fundamentally talks about technology being a container for phenomena. By phenomena, he means physical law.
Jim: For instance, the fact that we use quantum mechanical effects to read the surface of high-density disk drives: it’s a whole series of constraints and structures built on top of a physical phenomenon. I think the two ideas are pretty similar. I’m going to have to think about that.
Jim: Now, let’s try to draw a picture of what you mean by sequences. You basically say, early on, that there are really two different ways of thinking about them. One is instructive sequences and the other is descriptive sequences. An instructive sequence: a recipe for how to bake a pie is one of the examples you use several times, along with how to assemble IKEA furniture or how to set up a tent. What more do you want to say about instructive sequences?
Dennis: Well, the thing that’s interesting about the distinction between what you describe and what you instruct, and we’ll get into this more a little later, I’m sure, is that when you make a description, that description is relatively static. When you’re creating an instruction, you really want to control the behavior of matter, even if that matter is somebody’s dad setting up a tent.
Dennis: The instructive sequences tell you what to do. They give you a way of organizing the world. It may be just putting together furniture. It may be making a pie. Descriptive sequences are really the result of some kind of a perception or a measurement process in which you see something in the world and then you write down, typically, what it is that you saw.
Dennis: The thing that’s interesting about language and about all of these sequences is that one sequence can do both of these things. Now, in a way that sounds sort of trivial and obvious, but it’s actually quite profound: the fact that you can describe and you can instruct, that is, you can make a measurement and you can also choreograph some behavior in the three-dimensional world, with the same sequence, the same alphabet, the same syntax, and so forth.
Jim: Yeah, that’s quite interesting, the kind of dual attributes of things like language. Another property that you specify for your sequences is the idea that they’re discrete versus continuous: the 3D world is continuous, sequences are discrete.
Dennis: Well, there’s a huge literature on the discrete versus the continuous in engineering and computer science and mathematics, absolutely, but in this case, it’s really about the fact that in a sequence, the only fundamental relationship between two, let’s say, letters of the alphabet, for the sake of simplicity, is adjacency. That is, they’re adjacent to one another. The physical distance between them, the physical attractions between them, are not really what’s important.
Dennis: In other words, you’ve got a couple of letters on a page, and it’s really the fact that it’s those letters and they’re next to each other, that they create a certain pattern through adjacency, that is important. The physical aspects of them, what kind of paper they’re printed on and what kind of ink it is and what the shape of the font is, those sorts of things are not as important.
Dennis: The kinds of things that we would normally talk about when we talk about relationships between objects in the physical world are going to be physical forces. There’s going to be gravity and electromagnetism and that kind of thing. With sequences, it’s really this question of adjacency, what follows what.
Jim: Yup. Then, another attribute, very important and seemingly characteristic of all the ones you talk about, is that there’s a kind of two-level phenomenon going on. There’s a base-level alphabet, if we want to call it that. Then, there are relationships amongst these elements that produce patterns.
Dennis: Well, yeah, this is well known in linguistics and it was, I think, originally described by Charles Hockett, who was a linguist at Cornell. There are a lot of Cornell connections through all of this, for some reason. Anyway, he called it duality of patterning. In other words, you start at some fundamental level with meaningless elements, in this case letters of an alphabet, and then you combine them and build a hierarchy: you have words and morphemes, you have phrases and clauses and sentences, and so forth, and you just build up and build up. You start out with things that have no meaning and yet, as you build this hierarchy, they gradually begin to accrue meaning.
Dennis: The same thing is true in the cell, because the nucleotides that make up DNA, if you just take them for what they are, are just molecules. But if you arrange them, then you start putting together arrangements three at a time; these are the codons that map to amino acids in the genetic code. Then, you can organize them beyond that into things like genes, and then the segments of genes that take part in split genes and alternative splicing and that stuff. We may get into that, too.
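A small sketch may help make that duality of patterning in the cell concrete. The codon assignments below are a real fragment of the standard genetic code, but the translate function and the example sequence are just illustrative scaffolding, not anything from the book:

```python
# Duality of patterning, sketched: meaningless letters (A, C, G, T) become
# meaningful only when grouped into codons. The mapping is a small subset
# of the standard genetic code; the rest is an illustrative toy.
CODON_TABLE = {
    "ATG": "Met",  # also the usual start signal
    "TTT": "Phe",
    "GGC": "Gly",
    "AAA": "Lys",
    "TGG": "Trp",
    "TAA": "STOP",
}

def translate(dna: str) -> list:
    """Read a DNA string three letters at a time and look up each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    amino_acids = []
    for codon in codons:
        residue = CODON_TABLE.get(codon, "?")
        if residue == "STOP":
            break
        amino_acids.append(residue)
    return amino_acids

print(translate("ATGTTTGGCAAATAA"))   # ['Met', 'Phe', 'Gly', 'Lys']
```

Individually the letters mean nothing; read three at a time against the table, they start to accrue meaning, which is the point Dennis is making.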
Jim: Yeah. Then, another fundamental property that you attribute to these sequences, and I’ve never thought about this before, but this seems key actually, is that they are low energy relative to the consequences that they trigger.
Dennis: Well, if you think about a switch, a switch is a good example of this. What’s a good switch? The trigger of a shotgun is a good switch. The amount of energy that it takes to pull the trigger is very different from the amount of energy that is released by that act. Sequences have this property in that their energy level is trivial compared to what they can constrain and what they can guide.
Dennis: I mean, we have a lot of idioms in our language that relate to this: actions speak louder than words, and sticks and stones will break my bones but words will never hurt me. All of these are implying that the sequences themselves are largely inert. It’s really the physical world that’s at issue.
Jim: Yeah, my favorite one of those is “talk is cheap.”
Dennis: Yes.
Jim: Very true. Now, this next point is key and, I think, goes so deep into one of the areas that I do some of my own work in, which is that any sequence is only meaningful to the degree that it has an interpreter, right? In fact, as you say in the book, “The preexistence of a complete set of these mechanisms is a requirement for sequences to express their functions and replicate their patterns. We inherit not only genes made of DNA, but an intricate structure of cellular machinery made up of proteins.” There’s this very curious requirement for the coevolution of interpreters and sequences.
Dennis: If the human species were completely wiped out, all of our books would remain in existence as physical objects and they would gradually disintegrate, but the utility that they have, their ability to act as boundary conditions on anything going on in the real world, would go away when we went away.
Dennis: The same is true in the living world. When you’re taught about the reproduction of a cell and you’re talking about the replication of DNA and mitosis and processes like that, there’s a great emphasis on the sequence. Yeah, you’ve got to have the sequence, that’s for sure, but you also need all of the interpretive molecules, which are a remarkable collection of very large and very strange things that can actually transcribe and translate the sequence in DNA into a functioning protein, for example, or that can replicate a DNA sequence into another DNA sequence.
Dennis: You need both the sequence and the interpretive machinery, and if all you have to pass on is the sequence, then you’re kind of stuck. I mean, this is the way viruses work, as I’m sure you know. A virus is really just a bit of pretty much naked DNA or RNA. It doesn’t have any of that equipment. That’s why it has to attach itself to a cell and use the equipment that’s in the cell, the molecular machinery of the cell itself, in order to express itself and replicate.
Jim: Of course, that’s why epigenetics is interesting, right? If you had some DNA stored in a test tube, it wouldn’t do anything, right? It requires all the machinery, all the chemistry of the cell, which is replicated when each cell divides. The chemical state of the cell itself has some impact on how genes are expressed and how development occurs. The machinery and the instructions, especially in biology, are intimately interconnected.
Jim: You alluded a little bit to the Fermi paradox, which listeners to the show know is one of my pet obsessions. I believe it is the second most important scientific question. The most important scientific question is, why does the universe exist? We’re not even close to the pay grade of being able to answer that one, but the second one: are there others out there?
Jim: We may be getting close to being able to answer that one. When I was a 12-year-old nerd, I was sure the answer was, “Oh, yeah, definitely. They’re out there. There’ve got to be hundreds of thousands of smart ones.” I mean, we’re [inaudible 00:22:36] these ideas, right? It had to be based on something, but the more I thought about it, it’s possible we’re alone, and it might be this issue of bootstrapping the interpreter and the sequence, particularly the DNA one.
Jim: My home academic field is evolutionary computing. One of the things that we talk about in evolutionary computing a lot is something called the error catastrophe. I think you call it the error threshold in your book. The idea is that if the error rate is above some threshold, and at least in a controlled experiment it’s a fairly brisk transition, the ability to build very much up through evolution is very, very weak. As you build things up, the mutations break them down again. If you get past the error catastrophe, what you build up still partially gets destroyed by mutation, but enough of it gets preserved that you have an upward cycle.
Jim: One of the great mysteries of the origins of life is: how in the world did a chemistry which almost certainly was above the error catastrophe in its mutation rate manage to evolve this very elaborate machinery around DNA, which does error checking and repair, et cetera, so that the replication of DNA has an error rate of around a part per billion, rather than the parts per thousand or parts per million that you’ll see in even the best chemical synthesis?
Dennis: Well, that is a great question. Give me a call when you figure it out.
Jim: Yeah. Now, that one’s way past my pay grade, but actually, Stuart Kauffman and I had an amazing conversation of about four hours one time and we came to that point as the big “hmm”: it’s just imponderable how we could have gotten there. It might have been such a low-probability event that it never happened anywhere else in the universe.
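For listeners who want the error threshold made concrete, here is a minimal back-of-the-envelope sketch. It assumes the simplest single-peak quasispecies model, and the genome lengths and the tenfold selective advantage are made-up illustrative values, not figures from the book or the conversation:

```python
import math

def critical_error_rate(genome_length: int, selective_advantage: float) -> float:
    """Per-symbol error rate above which the master sequence is lost.

    In the single-peak quasispecies model the master sequence persists only
    while selective_advantage * (1 - mu) ** genome_length > 1, which gives
    roughly mu_crit ~ ln(selective_advantage) / genome_length for small mu.
    """
    return math.log(selective_advantage) / genome_length

# Hypothetical numbers: a short RNA-world replicator versus a long DNA genome.
for length in (100, 10_000, 1_000_000):
    mu = critical_error_rate(length, selective_advantage=10.0)
    print(f"genome of {length:>9,} symbols tolerates roughly {mu:.1e} errors per symbol")
```

The scaling is the point: the longer the genome, the lower the per-symbol error rate it can tolerate, which is why bootstrapping high-fidelity replication machinery out of sloppy chemistry is such a puzzle.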
Dennis: I might quarrel with that just a little bit, because I think one of the things that you can take away from this book, and part of the joy of an interdisciplinary book, is figuring out how it docks into all these fields. It’s like there are docking stations in a lot of different fields that this approach to sequences will dock into.
Dennis: Astrobiology, I think, is one, and astrobiology is plagued and has been plagued by kind of what you’ve been talking about there, the N equals one problem, which is that we only have one example of life and it’s very difficult to extrapolate what the necessary and sufficient conditions are for life based on one example.
Dennis: However, I think that the point the book makes is that, actually, sequences of language, of speech and later of text and ultimately of code, are a second system that operates on the same principles. They’re different in many ways, but they are two examples of the same kind of thing, a system of sequences. That ratchets us from N equals one up to N equals two.
Dennis: When you’ve got N equals two, it’s much easier to triangulate what the necessary and sufficient conditions might be. I mean, if we’ve hit the jackpot twice on Earth, what does that tell us about the odds of someone maybe hitting the jackpot at least once somewhere else?
Jim: Yeah, I did read that in the book. I have to say, I was not convinced, because the problems are at such different scales, right? The DNA bootstrap problem is at the level of the transition from chemistry to life, while the linguistic transition is way, way up the stack. Life had gone a very, very, very long way before we got to the point of even proto-language, like your vervet monkey example.
Dennis: Well, if you get people studying the origin of life and people studying the origin of language in the room together, they’ll have a nice argument about whose problem is harder.
Jim: That’s true. We have both groups at the Santa Fe Institute. Eric Smith and Harold Morowitz, now rest in peace, one of my mentors, have both done some very interesting work on the origin of life, and Murray Gell-Mann, of all people.
Dennis: Yeah, that’s right.
Jim: His life was-
Dennis: He had a book on that. I think SFI published that, right?
Jim: Yeah. Absolutely, it was his passion. Here’s one of the smartest guys ever, and he worked on it and didn’t solve it. And Eric Smith, one of the smartest people I’ve ever met. Neither of them has solved it yet. Two very, very difficult problems.
Jim: Another interesting attribute of your sequences, and this is one again I never thought about, but it has to be approximately true, is that all sequences of similar length have similar energetic costs, which actually turns out to be quite important.
Dennis: Well, yeah, whether you’re replicating a sequence of DNA or interpreting a sequence of DNA or you’re copying a text or you’re copying a sequence of code, if sequences of the same length had different energy profiles, then an evolutionary process would tend to work against the ones that were more energetically complex. It’s really crucial that the interpretive machinery, as we’ve been discussing, the interpretive and replicative machinery, be able to handle anything that’s thrown at it.
Dennis: There is not even a necessity of being semantically coherent. A photocopier, for example, is one way of replicating a text, and the photocopier will replicate nonsense text as easily as a very sensible text. That’s the key thing: you’ve got to be able to have the machinery of translation operate on any sequence that is thrown at it.
Jim: Yeah. That has some interesting implications at the higher level, particularly at the social level. A distinction that we make in economics is between rivalrous and non-rivalrous. The classic example is a ham sandwich, which is a rivalrous good: either I eat it or you eat it. We can’t both eat it. But when the cost to duplicate something is essentially trivial relative to its value, we can say it’s non-rivalrous. The classic example is the MP3 file. It’s not quite free to duplicate, but effectively free because it’s so low cost relative to its potential value.
Jim: The fact that these sequences have similar and low energetic costs tells us something fundamental about the difference between material goods and intellectual property, for instance, and perhaps we’ve gotten ourselves into a conundrum trying to make one like the other when maybe they shouldn’t be.
Dennis: That’s a very good point. That’s a very good point. I mean, I think the real distinction here boils down to, and this is, again, the work of my advisor, upon whose foundation I built a lot of this, his name is Howard Pattee. He was actually a colleague of Stuart Kauffman’s back in the late ’60s.
Dennis: There was a series of conferences funded by the Rockefeller Foundation on theoretical biology that was organized by C.H. Waddington, and they had four of these things. Four volumes came out of that, and a very young Stuart Kauffman and a very young Howard Pattee were among the junior faculty at the thing way back when. Those volumes are still quite fascinating to read. I would recommend them.
Dennis: Anyway, Howard Pattee came up with this distinction between what he calls rate independence and rate dependence: sequences are rate independent because the amount of time that is necessary to process the sequences is not as material as it is in rate-dependent systems, like the ordinary physical world.
Dennis: My favorite example of this is when you take a driving test and you usually take a written test to test your knowledge of the rules of the road and then you take a driving test that actually puts you behind the wheel with an inspector who makes sure that you can physically operate an automobile.
Dennis: Imagine a situation in which, let’s say, the overall test takes an hour. If you’ve got 30 minutes allotted for the written test, you can finish it in 10 or 15 minutes and still get everything right. The speed at which you take the test doesn’t have any influence on the outcome. You can take all 30 minutes and get it wrong, or you can take 10 minutes and get it right. However, if the driving test has 30 minutes allocated for it and you finish it in 15, you’re in big trouble.
Jim: Yeah, you gave some other good examples of this distinction between rate independence and rate dependence. I apologize to the 50% of my audience which is not in the United States, but the football huddle, I think you described, is one where whether it takes 5 seconds or 25 seconds to call the play doesn’t really matter. It’s not time-dependent how long it takes the quarterback to describe the play, but the details of the motion of the players are all very rate-dependent.
Jim: The famous relationship between the running back and the lineman: you hit the hole at the right time, right? As a former offensive lineman, I remember very well that it was fractions of a second. If you could apply your block at the right time, the guy got through; even if you had a good block, if it was too early or too late, it was irrelevant. The other one, which I thought was very good, and it’s an interest of mine, is the distinction between negotiations prior to a war and war itself.
Dennis: Yes, this concept of rate dependence versus rate independence scales all the way up to international affairs. There are basically two ways that nation states have to resolve their disputes. One is to negotiate a treaty and the other is to go to war. Negotiating a treaty is a rate-independent process, because it doesn’t really matter how long the treaty is or how long it takes to negotiate the treaty; these are not the important things. It’s what the treaty actually says that’s crucial.
Dennis: On the other hand, warfare is a completely rate-dependent process in which small changes of rate at certain times can have tremendous effect on the outcome. It’s really a fundamental concept when you think about anything that is sequence-based, like a treaty, for example.
Jim: Yeah. I think, just to make it tangible for the audience: imagine the Battle of Britain. Do the RAF fighters get there before the bombers or not? You’re talking a five-minute difference, right?
Dennis: Yes.
Jim: London gets clobbered if they don’t. A whole bunch of German bombers get shot down if they do. Rate dependence is of the essence of the actual war itself. That’s a great distinction that I never thought about before. This is what I love about this book. There are so many things that, when you think about them, you go, “Yeah, that has to be right,” but you just never thought about them before. That’s what I love about this book, why I just think it’s so cool.
Jim: Another one of Howard Pattee’s ideas, which is really quite interesting, is his distinction between laws and rules.
Dennis: Well, here, just to make sure that we’re clear on what we mean by a law, we’re really talking about something like a physical law. We have laws that are created by legislatures, the law of the land, but that’s not the kind of law we’re talking about here. We’re really talking about physical law. This ties back to some observations that were made, gosh, it must have been in the late ’40s, by Max Delbruck, the physicist. He had a paper called A Physicist Looks at Biology. He was one of the first to make this explicit, but the idea is that a law, like a physical law, has three characteristics.
Dennis: First of all, it is universal. We expect that if there is a law of physics, like gravity or electromagnetism, that law is going to hold everywhere and at all times. It is inexorable, in the sense that there’s not a way to stop it; it’s going to do what it’s going to do. Finally, it is incorporeal. While you may say that the Earth revolves around the Sun, it does so without having any mechanism to make that possible. It just goes ahead and does it.
Dennis: On the other hand, rules, which I map onto the idea of a sequence, are not universal. They’re local. That is, the rules that operate the living world and that operate our civilization are local to this planet. We would not expect to go to Mars and find Finnish being spoken there, but we would expect the law of gravity to work there, for example.
Dennis: Rules are local. They are contingent, in the sense that they are not inexorable. They can always be evaded or changed. You can amend the Constitution to prohibit alcohol consumption, but then you can immediately turn around and negate that Constitutional Amendment. A rule can always be changed. And a rule is always structure-dependent. This goes back to what we were talking about earlier, Jim, that you need this set of mechanisms in order to execute some kind of rule.
Dennis: A rule takes the form of a sequence. It’s always going to be local. It’s always going to be structure-dependent. It’s always going to be contingent, in some sense, in a way that the laws of nature are universal, inexorable, and incorporeal.
Jim: This idea that the rules are bootstrapped with their interpreters, again, is very key to me, right? Think, at the high level, of nation-state rules, which we call laws, to make it confusing. But let’s distinguish that in Dennis’s sense: those rules are meaningless without something like the state, right?
Jim: The state is this many-leveled stack of emergent complexity which interprets the rules and turns them into actions, essentially. The statutes without the state are meaningless. Frankly, the other way around is more or less true too: the state without something like statutes is sort of meaningless as well. That’s the fundamental distinction with this concept of laws. I love the idea that nobody is pushing the Earth around the Sun. It’s a fundamental of the universe.
Jim: We now think we know, at least, that it’s due to the distortion in spacetime, according to Einstein’s general theory of relativity, but that’s essentially a brute fact about the universe, at least at our current level of understanding. Our understanding isn’t complete yet, because we have not unified general relativity with quantum mechanics. There may still be some surprises there, but we at least have a reasonable first-order view of the nature of the physical law that causes that to happen. I thought that was very interesting.
Jim: Now, let’s move on to the next one. SFI-style complexity science finds this an extraordinarily important attribute of what you call sequences, which is their self-referentialness.
Dennis: Well, that’s one of the things that’s most interesting about sequences: sequences can refer to other sequences, and they do so in a sort of interesting physical way. A footnote would be a good example of that. In other words, a footnote takes one sequence that’s somewhere in the text and then it refers you to another sequence that’s somewhere else in the text. That’s the kind of self-referential property that allows this complexity to build.
Dennis: The thing that’s key to that, I think, is the need for stability. If you’re going to refer to something, that something has got to be stable. Think about Odysseus in Homer, being tied to the mast so that he can hear the songs of the sirens without being tempted to jump overboard and get himself killed. He orders his sailors to tie him to the mast and then he tells them, “No matter what I say, ignore what I say in the future.” He’s basically saying, I’m ordering you to ignore certain orders that I’m going to give in the future. That is a kind of self-reference. He’s essentially using one sequence to negate the effects of another sequence.
Dennis: Now, the thing is that that’s a very simple example. It’s an example that’s based in speech. This is something he says. These are not written instructions; they are verbal instructions. We all know that verbal instructions are quite open to misinterpretation and misunderstanding. We’ve all had conversations with people where we go back and say, “Well, I remember when you told me this.” “Well, I didn’t tell you that. You told me this.” “No, I didn’t tell you that.”
Dennis: The kind of self-reference that’s needed in order to build a complex system requires text. It requires a stable sequence. We know that speech is evanescent. It kind of disappears after you say it. For very simple things, like Odysseus being tied to the mast, he can tell his sailors a couple of things and they’ll be able to remember it and probably get it. But if he said, “On the other hand, if this happens, then you untie me. If this happens, then you don’t untie me. If this happens to the ship…,” if he starts making it too complicated, people aren’t going to be able to remember it. But if it’s written down, and the sequence that is being referenced is stable, which you get both with text and with DNA in the cell, then you’re able to really build unlimited evolutionary complexity.
Jim: Well, let me spring ahead. I have it actually later in my notes, but let’s bring in the second idea of Dennis’s, which is that in the history of civilization, and the history of how humans are different from chimpanzees, et cetera, we talk about language as the bright line that separates humans from chimps, and at some level it probably is, at least something like Chomskyan, fully recursive language. But Dennis makes this very interesting claim; as I said, it’s later in my notes, but let’s talk about it now: language is important, but actually, it’s writing that’s more important for the world we have today. Why don’t you?
Dennis: Yeah, I think there are people who have been working in this field, and I think they have been overlooked to some extent. I mean, there’s a psychologist in Canada called David Olson who’s done a lot of this work, and of course the anthropologist Jack Goody, who died a few years ago, who was also very active in understanding the implications of writing, or the difference between literate cultures and preliterate cultures.
Dennis: I start the story in the book really at the beginning of life. There’s a pretty strong consensus now among molecular biologists who study this sort of thing that there was once something called an RNA world. The RNA world was based not on DNA and protein, the way our current living world is, but on RNA molecules. RNA molecules had the property of being able to fold themselves up into enzymes, which are called ribozymes.
Dennis: The RNA ribozymes could then go out and execute functions in metabolism, and probably get a primitive metabolism going, but then they had to unfold themselves in order to be replicated. You could replicate the RNA, but only when it was not folded. The RNA could do stuff in metabolism as a ribozyme, but it could only do that when it was folded up on itself.
Dennis: You have these kind of two states of things, and it was really difficult, and RNA has a lot of the other problems that you were talking about earlier, Jim, about error catastrophes and that kind of thing. It was very unstable, and it tended to have a high error rate when you replicated it.
Dennis: There’s a lot of evidence that the RNA world was there. If you think of a couple of the large molecular complexes that are an important part of the cellular machinery, you’ve got the ribosome, which is what translates messenger RNA into protein, and you’ve got the spliceosome, which is the thing that actually edits the messenger RNA on the fly. These things are huge molecules, I mean, enormous molecules, and at the very heart of them, they have RNA mechanisms. In other words, the ancestral mechanism that is doing all the work is actually made of RNA, and all the protein that’s in these things is sort of built around that to provide some superstructure and kind of hold things together.
Dennis: In any event, there’s lots of evidence that this RNA world existed, but there’s plenty of evidence too that it could not have gotten very complicated, because it had a high error rate and the catalytic power of these ribozymes was not very strong. You really needed to add DNA, which was far more stable, use DNA for information storage, and then develop protein enzymes, which are far more powerful catalytically. Essentially, you get a division of labor, in which the information storage and the replication are handled by one set of molecules, the DNA, and then the interaction with the world is taken care of by the proteins.
Dennis: Now, how you get from one to the other, nobody knows the answer to that, but I think that model actually works very well when we start to think about human culture, because in a preliterate society, the only thing you had to go on really was fallible human memory. If you wanted to know something, you would have to ask someone who you think might know the answer to that question. If you want to know whether a plant is edible or whether the stream runs to the sea, things like this, you need to find somebody who knows the answer to that question.
Dennis: Preliterate cultures got pretty far. There were states, there were technologies, there were lots of things going on, but the real key to moving into our literate technological civilization was the advent of writing, because what writing did is allow stability to enter, so that instead of having evanescent speech, you could have written instructions. Once you’ve got writing in place, then this kind of self-referential process can scale more or less indefinitely, and you develop some complexity.
Dennis: The other thing is that, in the same way that RNA gave way to DNA as a more stable storage medium, speech gave way to text as a more stable storage medium. The other side of the equation here is that, just as RNA ribozymes gave way to much more powerful and potent protein enzymes, so did we develop these technologies of measurement and machines that could behave in the world in a way that our physical bodies could not. We could extend our perceptual systems, our vision and our hearing and so forth, and we could extend the power of our hand through machinery and that kind of thing.
Dennis: It’s a combination of being able to have much more powerful interaction with the environment and also much more stable information storage in the form of writing. I think that’s really where the threshold was that was crossed, going from a preliterate society to the modern technological civilization that we have. I think this goes back to the work of John von Neumann and his theory of self-reproducing automata, where he talked about some threshold, some tipping point, after which you can get an enormous increase in complexity but before which you don’t get that, and I think that’s what we’re looking at here.
Jim: Yeah. Von Neumann, I believe, argued that you needed to separate the information layer from the production layer, essentially the physical layer, the reproductive layer.
Dennis: Yeah, there is a necessity to separate the replication function from the interpretive function. That is why, when he built his mental model of what a self-reproducing automaton would look like, he had one component where you would feed a sequence in and it would build a machine from that sequence, but he also had a second part where you would feed the sequence in and it would make a copy of the sequence. Decisively separating the copying from the interpretation was crucial. We find that to be the case in the living world as well.
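A rough sketch of that separation may help; the Automaton class and the toy “tape” below are purely hypothetical, standing in for von Neumann’s constructor and copier rather than reproducing his actual formalism:

```python
from dataclasses import dataclass

@dataclass
class Automaton:
    tape: str          # the inherited description (the sequence)
    parts: tuple = ()  # the physical machinery built from it

    def interpret(self, tape: str) -> tuple:
        """Constructor: read the tape and build machinery from it."""
        return tuple(f"part<{symbol}>" for symbol in tape)

    def copy(self, tape: str) -> str:
        """Copier: duplicate the tape verbatim, blind to what it means."""
        return str(tape)

    def reproduce(self) -> "Automaton":
        # The two functions are kept decisively separate, per von Neumann:
        new_parts = self.interpret(self.tape)   # build the offspring's machinery
        new_tape = self.copy(self.tape)         # hand it an uninterpreted copy
        return Automaton(tape=new_tape, parts=new_parts)

parent = Automaton(tape="ABBA", parts=("part<A>", "part<B>", "part<B>", "part<A>"))
child = parent.reproduce()
print(child.tape, child.parts)
```

The copier never looks at what the tape means, and the constructor never alters it; collapsing the two into one function is exactly what the design avoids.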
Jim: Yeah, it’s interesting. You make a very interesting parallel, where you compare and contrast structural and regulatory DNA sequences with white-collar versus blue-collar work in a corporation.
Dennis: Molecular biologists tend to look at genes, broadly speaking, as either structural genes, which code for proteins that actually go out and do things in the cell, these are the proteins that guide metabolism and so on and so forth, they’re the sort of front line, I think of them as blue collar, the blue-collar enzymes or the blue-collar genes that code for those enzymes. But most of the complexity that we get in the world really comes from the regulatory apparatus, which goes back to this question of self-reference that we were talking about a bit ago.
Dennis: There are other genes that have no function except to regulate or control the expression of the blue-collar genes, and I think of them as being the white-collar genes and white-collar enzymes. The thing about them, of course, is that you can start building hierarchies and scaling those things up, so that one gene can regulate another gene, but then something else can regulate the first one, and so on and so forth. You can build fairly complicated modules by doing that, what molecular biologists call gene regulatory networks.
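To make the white-collar and blue-collar picture concrete, here is a toy gene regulatory hierarchy written as a tiny Boolean network; the gene names and wiring are invented for illustration and do not correspond to any real genes:

```python
# A toy gene regulatory hierarchy as a tiny Boolean network (made-up genes).
regulators = {
    # "white collar": regulatory genes whose only job is to control other genes
    "masterReg": lambda s: True,                    # constitutively on
    "midReg":    lambda s: s["masterReg"],          # on whenever masterReg is on
    # "blue collar": structural genes that actually make enzymes
    "enzymeA":   lambda s: s["midReg"],
    "enzymeB":   lambda s: s["midReg"] and not s["masterReg"],
}

def step(state: dict) -> dict:
    """Advance one tick: every gene re-reads the current state of its regulators."""
    return {gene: rule(state) for gene, rule in regulators.items()}

state = {gene: False for gene in regulators}
for _ in range(3):
    state = step(state)
print(state)  # silencing masterReg would quietly flip everything downstream
```

A change to the single gene at the top quietly rewrites the behavior of everything below it, which is the shape of the Fragile X example that comes up shortly.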
Jim: [inaudible 00:50:07] if you said this or it’s a thought I had myself, but anyway, it certainly fits into this idea, and that’s that one could think of them as almost two separate worlds. Particularly, let’s look at the white-collar world. We have Old Joe sitting on the assembly line tightening bolt number 47, and let’s say the corporate culture within the white-collar world, toxic versus friendly versus supportive or not, actually has no implications whatsoever for Old Joe tightening bolt 47, unless a particular affordance, which moves from the white collar to the blue collar, tells Joe to stop tightening bolt 47 or to use some different tools. In some sense, they’re two different domains, and it’s the affordances between the two that produce the results, in effect.
Dennis: Yes, that’s right. Any system of sequences has a part that faces the world and interacts with the world, but then those interactors, the blue-collar interactors, are constrained by other interactors that are purely white collar. You can just add layer upon layer of abstraction until you get up to the point where you completely lose track of what the meaning of some of these things is.
Dennis: I mean, we see that in vocabulary. We have abstract nouns. We have things like freedom. We know that freedom and loyalty and dignity and things like that are very high level. We also have things like rocks and conveyor belts and so forth, things that are fairly low level, that are more physical and which we understand more directly and can perceive more directly.
Dennis: What we’re really looking at with words like freedom is that they are able to constrain and guide enormous amounts of underlying activity by the blue-collar interactors that are out there. Here’s a good example, and this is actually not from language, this is from the cell. I have a son who is developmentally disabled, and he has a condition called Fragile X syndrome. He lacks a single gene that produces a single protein, but the downstream effects of that protein are quite extraordinary. There are literally hundreds of reactions that this protein helps to regulate.
Dennis: There are phenotypical results from this. The typical phenotype for a Fragile X male is to have big ears, a high palate, large testicles, communication difficulties, cognitive processing issues, very loose joints, lots of things. How is it possible that the absence of one gene can have all this amazing amount of effect? It’s because it sits at the very top of a hierarchy. It’s a very high-level white-collar gene that is affecting, downstream, the interpretation of all of these other genes. The case in language is that you have words like freedom, and you know that that word, in the right context, can unleash a cascade of all kinds of real-world activities.
Dennis: These abstract words are there. The thing, to go back to something we were talking about earlier, Jim, the thing that’s interesting here is that, again, the interpretive system, the interpretive machinery, has to be able to handle a word like freedom as easily as it does a word like rock. Something that’s extremely tangible, something that’s extremely abstract, the processing equipment doesn’t care. It just takes care of it.
Jim: Well, this is a very interesting pivotal point in the conversation, because it actually points to two different very important concepts. I’m trying to decide which one to go to first. Let’s do open-endedness first, because hierarchies can keep [hierarchicing 00:54:22], as well as mutate, et cetera. Let’s just assume that with the continual building of hierarchies, new concepts are always available, at least in principle, to be built.
Jim: This is something new and different. New physical laws, as far as we know, are not invented, right? Though there are some fringe theories about the evolution of physical law, let’s take it as taught in our physics textbooks that they’re fixed. These systems, typically the white-collar ones, and then of course there are affordances to the blue-collar world, are open-ended in that more and more levels of abstraction can be added.
Dennis: That’s right. There’s no upper limit on how abstract these hierarchical stacks can become. One of the things that I’ve always found fascinating about language is that you do have these kinds of words like nouns and verbs and adjectives and adverbs, which are pretty much open classes, they’re called open classes, which is to say that you can coin a new noun, you can name new things.
Dennis: This is done all the time in science and engineering and product development: you come up with something and you give it a name, or you come up with some new activity and you give that a name. Coining new words, for nouns, for adjectives, for adverbs, and for verbs, is very straightforward. We do it all the time and nobody thinks twice about it. But there are also these other words, things like prepositions and things like deictics, which are the pointing words like this and that, and here and there.
Dennis: And some of what are called modals, which are things like might and could and should and would. These are closed classes. I mean, you could waste a lot of neuron power trying to coin a new preposition. Good luck with that, because there are about 150 prepositions in the English language. Why do we have these closed classes and why do we have the open classes, and is that a fundamental property of these sequential systems?
Dennis: The argument that I make is yes, yes, yes, this is the case. In fact, even in the cell you find this: you find genes, which are quite malleable, you can create new genes pretty much without limit, but there are also what you might think of as the prepositions and so forth of the genetic system, which are fixed. Some of these are the codons; some of these are things like, when the DNA is being transcribed, where does it stop? There has to be a stopping point. There are things like that which are quite universal throughout the genetic system and really correspond to these open and closed classes.
Dennis: I think it is a fundamental property of sequence systems. I think it has to do with our ability to project some of these affordances and some of these opportunities for interaction in space and time, because if you think about animal communication, you mentioned vervet monkeys before: vervet monkeys have alarm calls that they use to warn of predation by leopards or eagles or snakes, but they have no way of talking about the probability of there being a snake, or maybe there’ll be a snake tomorrow, or maybe there’s a snake on the other side of the hill.
Dennis: These are things that we can do because we can talk about might, and we can talk about over the hill, and we can talk about in a couple of days. We’ve got prepositions like in and over, and we’ve got modals like might, that allow us to take the affordances that we have in the here and now and expand them, extend them into the future, and put conditions on them, and so on and so forth. That really gives us a huge amount of power and flexibility in what we can talk about and what we can do.
Jim: I think that’s very interesting. What I really found interesting is this pattern. There are two cases of it, and I thought about it further and found a third case. It’s this idea of having a coherent core and then a flexible domain around it. That may actually be very important for developing an expressive sequence language, kind of like the idea of the error catastrophe and getting over it with DNA. If the core were not stable enough, as if prepositions came and went, the ability to bootstrap sophisticated artifacts in language would become very difficult, while nouns can come and go. I mean, it’s always a fun thing. You see these articles in the Atlantic or something: here are 100 nouns that still exist in the Oxford Dictionary but I guarantee you’ve never heard, right?
Dennis: Right, right.
Jim: Perhaps there’s something significant about this balance between the core and the application layer, the kernel and the application, as we’d say in operating systems.
Dennis: I think there’s a way of thinking about that in terms of our culture as well, because all of us are born into cultures, and we acquire a language, for example, the language of the culture that we’re born into, and we acquire folkways and all kinds of cultural behaviors and norms that help to guide our behavior. Within that, then, there’s a lot of diversity.
Dennis: While you may have an assembly line worker and you may have a farmer, and you may have an engineer, and you may have a computer scientist, and you may have a lawyer, and these people may all come out of the same culture, and they may all use the same basic core of language and share a set of cultural norms and values, they also have these very specialized languages. There are very specialized things that they’re able to perceive in the world that are unique to them, unique to their profession. Again, you’ve got a core, and you’ve got to maintain that core and then build upon it.
Jim: Yeah, I think that’s a very interesting and important design principle, at least for certain kinds of design. Our work in the GameB world on social operating systems for the future was my third example: we call it coherent pluralism. We have found from history that highly coherent systems generally turn into nightmares, like Stalinism or Nazism, while radical pluralism ends up not being able to build very much, anarchy essentially. There’s some middle ground, I don’t know where it is, of coherent pluralism, where there’s a set of principles which all agree to, and yet any group of people can build lots of things around those, however they see fit. That seems to fit this model very nicely.
Dennis: Well, interestingly, you see this at the microbial level. The field of horizontal gene transfer, which has become quite prominent, I think largely due to the work of Carl Woese and others, is the idea that, now we’re talking about bacteria.
Dennis: We’re talking about the genes that help to run a bacterium like E. coli, which is the standard laboratory bacterium, or something like Prochlorococcus, which is a cyanobacterium that is probably responsible for as much photosynthesis as any other organism on the planet. It lives out in the oceans. These organisms at this level have the ability to exchange bits of DNA among each other. We normally think of inheriting DNA as something that passes from parent to offspring, but in this case, little bits of DNA are shuffled among these organisms in real time and often in response to perturbations in their environment.
Dennis: One good example of this is how drug resistance travels in populations of bacteria. If a gene evolves in such a way as to confer drug resistance on the bacterium, that can actually be transmitted in real time to other members of the population, as opposed to waiting to pass it down to the next generation. However, what we find with these organisms is that they may only have, for the sake of argument, 5,000 genes in their genomes, but of those 5,000, maybe only half are going to be found in all members of the species.
Dennis: They have their own core genome, similar to what we’ve been talking about. In this case, if we say 5,000 genes in all, maybe 2,500 are found everywhere, and the other 2,500 come from other places and are used for other things. If you add them all up across a species like E. coli or Prochlorococcus, even more so, Prochlorococcus, I think, has a core genome of about 2,000 genes, but the total number of genes that have been found across all of them may be as high as 80,000.
Dennis: In other words, there’s this enormous storehouse of genes out in the world that are available to be shared. It’s almost like, you need a lawyer? Call a lawyer. You need a gene? The gene you need is over there, and another cell will share it with you. If there is a gene that helps a Prochlorococcus survive better in a low-light situation, and it gets darker or they move down in the water column, then they can share that gene that helps them.
Dennis: This is done in real time, as opposed to having to wait for reproduction to take place to pass it down to subsequent generations. This phenomenon of having a core that is shared by everybody and then having all of this variation around the core, I think, appears in many guises.
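[Editor’s note: a minimal Python sketch, not from the book or the conversation, illustrating the core/accessory genome picture Dennis describes: every cell carries the same core set of genes, while accessory genes can be picked up from neighbors in real time when the environment shifts. All gene names and numbers here are made up for illustration.]

```python
import random

# Toy model: every cell shares a fixed core genome; accessory genes move horizontally.
CORE_GENOME = {f"core_{i}" for i in range(2500)}   # shared by all members of the species
GENE_POOL = [f"acc_{i}" for i in range(10000)]     # the wider "storehouse" of accessory genes

class Cell:
    def __init__(self):
        # Each cell carries the whole core plus a random handful of accessory genes.
        self.genes = set(CORE_GENOME) | set(random.sample(GENE_POOL, 50))

    def horizontal_transfer(self, donor, needed_gene):
        # Real-time sharing: if a donor already has the needed gene, copy it over,
        # no waiting for the next generation.
        if needed_gene in donor.genes:
            self.genes.add(needed_gene)

population = [Cell() for _ in range(100)]

# Environmental perturbation: suppose low light favors cells carrying "acc_42".
needed = "acc_42"
for cell in population:
    cell.horizontal_transfer(random.choice(population), needed)

have_it = sum(needed in c.genes for c in population)
print(f"Cells with {needed} after one round of transfer: {have_it}/{len(population)}")
```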
Jim: Interesting. A side note: when I was reading that, I said, “Hmm,” what’s kind of interesting is that this prokaryotic style of horizontal gene transmission is really effective at local adaptation, as you point out. Okay, the water’s turned more saline. Well, it turns out there’s a gene floating around in the soup that can come in through my membrane from time to time. Mostly, I ignore it, but at the moment, I need that sucker, so let’s replicate it, right? Or it turns out the chemistry causes it to replicate.
Jim: On the other hand, because there isn’t stability, there’s less self-referential ability within a prokaryotic genetic ecosystem than there is in a eukaryotic ecosystem, where the genes are duplicated as a package. Perhaps that’s why, while the prokaryotes, the archaea and the bacteria, are remarkably adaptive in every niche imaginable, they don’t generate much complexity: the genetic machinery they chose to use, or more precisely got locked into, does not have the long-range self-referentiality of the eukaryotic cell, which has allowed eukaryotes to be open-ended, to build this hierarchical complexity that we talked about. Does that make sense to you?
Dennis: Yeah, the advent of multicellularity and how you build a multicellular organism from a single celled zygote is totally dependent on this self-referential property and the ability to organize into these hierarchies, these gene regulatory networks that are responsible for building out the body plan during development and that kind of thing.
Jim: Of course, that came a little bit later. We got the eukaryotic cell first, which…
Dennis: Yes.
Jim: … used a different method of duplicating the genetic material, generated in toto rather than in bits and pieces at a time, and the machinery for that is what enabled this long-range self-referentiality, which then discovered the great trick of multicellularity. There are several kinds of multicellularity that were invented, but only one has been decisive. That was the one that led to the Cambrian explosion, in which essentially all the body plans, all the phyla on earth today, came into existence, with one minor exception, during something like 5 or 10 million years, about 500 million years ago. It’s, again, one of those interesting scratch-your-head Fermi paradox questions. How hard was that?
Dennis: I know. You were talking to Doug Erwin about that, I think, if I’m not mistaken.
Jim: Correct. We had Doug Erwin on and we dug deep into the Cambrian explosion, one of my favorite topics, right?
Dennis: Yeah. Well, he worked with some of the folks who were very involved in developing these theories of gene regulatory networks and how they cohere and are essentially passed down as a functional unit.
Jim: Yup, yup. Famously the Hox genes, are those the ones that lay out the body plan, as Doug describes?
Dennis: Yes, yes.
Jim: Yeah, you can have, in theory, insects with 15 modules instead of the three that we get, because they happen to be set to three, right? The number of arms that you have and the length of your torso are all set by essentially recursive calling of the same structural program with different parameters.
Dennis: Yeah, how many segments would you like?
Jim: Yeah, essentially genetics invented the subroutine call with a parameter, right?
Dennis: Yup, yup.
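[Editor’s note: Jim’s “subroutine call with a parameter” analogy can be sketched in a few lines of Python. This is purely illustrative and not from the book; the segment names and counts are made up.]

```python
# The same structural "subroutine" is called with different parameters
# to lay out different body plans; no new program is needed, just new arguments.
def build_segment(identity, appendages):
    """One reusable routine: build a body segment of a given identity."""
    return {"identity": identity, "appendages": appendages}

def build_body_plan(segment_specs):
    """Call the same routine repeatedly with different parameters."""
    return [build_segment(identity, appendages) for identity, appendages in segment_specs]

# An insect-like plan: three segments, set purely by parameters.
insect = build_body_plan([("head", 2), ("thorax", 6), ("abdomen", 0)])

# Change the parameters and the same program yields a 15-segment plan.
centipede_like = build_body_plan([(f"segment_{i}", 2) for i in range(15)])

print(len(insect), len(centipede_like))   # 3 15
```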
Jim: God damn clever, old Ma Nature, ain’t she? Let’s now bump up a little bit to one higher level of abstraction. One of the points you make early in the book is that while you’re going to talk about specific instantiations, in this case DNA-based genetics and human language, particularly written language, in theory your model could apply to any sequential system that has these attributes.
Dennis: Well, yes, I mean, what I’m trying to do in the book is to figure out what is necessary versus what is true but not important. That’s one of the things you get at as you’re trying to compare things that people have thought were two separate things. People have thought, “Well, there’s the genetics and there’s the language.” My argument is that these are really two facets of one thing, or two different examples of one thing, which is systems of sequences.
Dennis: The real question, as you say, is are there other possible sequential substrates that could exist? If there were, how would they function? My argument would be that ultimately, because the sequences are energetically inert, you need some mechanism for the sequence to interact with the world; you need some kind of an interactor.
Dennis: This goes back to David Hull, who you quoted at the very beginning of the podcast. David Hull really came up with the idea of an interactor as kind of a reaction to Richard Dawkins, actually. Richard Dawkins, in The Selfish Gene, talked about replicator, replicator, replicator. What we understand now is that replicators are inert. They don’t function in the world without having some kind of a mechanism for getting to the world. It was Hull who came up with the idea of an interactor as being the thing that engages with the world, and the interactor has to have two pieces.
Dennis: One is some kind of a perceptual mechanism, and the other is some sort of a behavioral mechanism. In other words, it has to see something and do something, essentially. An enzyme is probably the simplest example of that, because an enzyme can recognize another molecule, and it’s very specific in the kind of molecule it will recognize, and then it performs some molecular manipulation on that other molecule to push metabolism forward.
Dennis: The idea of perceiving something and then having a very specific behavioral reaction to that is what interactors do. Any system that has sequences has to have these interactors to engage with the world, and of course, the interactors also help to build the mechanisms that interpret and replicate the sequences. That’s how you get this bootstrap effect, and you get the building of these evolutionarily open-ended systems.
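[Editor’s note: a minimal Python sketch, my own framing rather than the book’s, of Hull’s interactor as Dennis describes it: a perceptual mechanism coupled to a behavioral mechanism, with an enzyme as the simplest case. The enzyme, substrate, and reaction here are hypothetical.]

```python
# An interactor couples an inert sequence to the world through two pieces:
# a perceptual mechanism (specific recognition) and a behavioral mechanism
# (a specific manipulation of what was recognized).
class Enzyme:
    def __init__(self, recognizes, reaction):
        self.recognizes = recognizes   # perceptual side: the substrate it binds
        self.reaction = reaction       # behavioral side: what it does to the substrate

    def interact(self, molecule):
        # Perceive: highly specific recognition of one kind of molecule.
        if molecule != self.recognizes:
            return molecule            # not its substrate, so nothing happens
        # Behave: perform the specific manipulation that pushes metabolism forward.
        return self.reaction(molecule)

# A hypothetical enzyme that cleaves its substrate "A-B" into two products.
cleave = Enzyme(recognizes="A-B", reaction=lambda m: m.split("-"))
print(cleave.interact("A-B"))   # ['A', 'B']
print(cleave.interact("C-D"))   # 'C-D' (ignored)
```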
Jim: It’s interesting because you talk about three classes of sequences. You don’t really get much into the third, and the third thing is computer code. Think about the search for artificial general intelligence as an example of where computer code might be able to transcend things like human language, which are cognitively gated by our relative stupidity, and create new mechanisms of communicating.
Jim: Humans are, to the first order, the stupidest possible general intelligence. Ma Nature is seldom profligate with her gifts; we’re the first across the line in our evolutionary tree, and the chances of us being very far over the line are small. We know from cognitive science there are lots of obvious bottlenecks, like Miller’s famous seven-plus-or-minus-two working memory size, which fundamentally distorts our language, for instance, right? It’s why phrases are short.
Jim: The distance between pauses in language is seldom more than seven words. It’s also why there’s seldom more than three levels of recursion, even though Chomsky claims universal grammar is infinitely recursive. He’s actually wrong. It’s not. It’s basically three levels of recursion. Occasionally, somebody in a maniacal academic text might go to four, but if it goes to five, even the PhDs will be scratching their heads.
Dennis: Yeah, no good editor would let that happen.
Jim: Yeah, particularly if you read some humanities journal today: “What the fuck? What does that actually mean?” Anyway, my point is about your land of code, but most importantly, code combined with sensors and effectors, because this gets to what I forgot to ask earlier, the symbol grounding problem, which has bedeviled artificial general intelligence research, in my opinion, but is forced to be confronted in robots.
Dennis: Yeah, and I would predict that artificial general intelligence will continue to be befuddled by this problem more or less indefinitely. The difference comes down to code, because we know that computer code has many of the attributes of human language, written language, not speech so much, but certainly it’s got an alphabet. It’s got the hierarchical structuring. It gains meaning as you put these ones and zeros together; many of those same attributes exist there. It’s rate-independent: if you put a faster processor in the computer, the code will run faster, but it doesn’t change the output. All those same attributes.
Dennis: However, the thing that code does not have is a built-in semantics. I think that gets to Stevan Harnad’s symbol grounding problem, and the psychologist L. S. Vygotsky actually had a phrase that I like a lot; he calls it deliberate semantics. In other words, when you’re dealing with robots, you’re dealing with the imposition of semantics on code.
Dennis: That’s evolutionarily backwards, because the fact is that our language evolved in an animal that already had a fairly advanced perceptual and motor behavioral system. The perceptual things, the effectors, the sensors, all of these things were already in existence when we developed language. Language is very much connected to the world through us in a way that code written de novo is not.
Dennis: This came up, interestingly, because I’m a big fan of enzymes, and some authors wrote a book about the history of the study of enzymes that they called Nature’s Robots. That was a metaphor other people have used, that the enzymes in the cell are running around like little robots. I have to question that metaphor, because when you think of a robot in modern technology, a robot has a set of sensors and a set of effectors. They may cut, well, whatever it may be. They may have some vision. They may have some hearing. They may have some proprioceptive things. Then, they have a computer in between. They sense things. Those percepts are converted into code. They’re processed, and then the output connects to the effectors, and the effector is going to do something in the world. That’s how a robot works.
Dennis: Enzymes aren’t like that. Enzymes don’t have a computer. Enzymes are strictly mechanical devices that are actually able to perceive things and actually able to manipulate other things at the molecular level but there is no computer in between the input and the output. The question that I ask is, when we think about cognition, is the brain more like an enzyme or is it more like a robot? In other words, are we thinking about something that is largely a dynamical system or something that is much more of an information processing system or is it some hybrid of both?
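[Editor’s note: a short Python contrast, my own sketch rather than anything from the book, of the distinction Dennis draws: a robot routes percepts through a computational step before acting, while an enzyme couples recognition directly to action with no code in between. All function names and behaviors are stand-ins.]

```python
# Robot: sense -> encode as code -> process -> act through effectors.
def robot_step(percept):
    encoded = encode(percept)       # percepts converted into code
    decision = process(encoded)     # the "computer in between"
    return act(decision)            # effectors do something in the world

# Enzyme: recognition and manipulation are one mechanical event, no computer.
def enzyme_step(molecule):
    return catalyze(molecule) if binds(molecule) else molecule

# Stub definitions so the sketch runs; each stands in for real machinery.
def encode(p): return {"feature": p}
def process(e): return "grasp" if e["feature"] == "cup" else "ignore"
def act(d): return f"robot does: {d}"
def binds(m): return m == "substrate"
def catalyze(m): return "product"

print(robot_step("cup"))         # robot does: grasp
print(enzyme_step("substrate"))  # product
```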
Jim: I think the current thinking is, of course, it’s some of both. I will say that there are people who, in my view at least, have gone too far into the information processing model. You can sort of see the dead end in AI in so-called good old-fashioned AI, where they thought people could get to AGI by writing down enough logical statements, the so-called expert systems model. That ran out of gas. It turns out the world is of way higher dimensionality than any human’s ability to write down enough if-then statements or Prolog statements to model it.
Jim: Of course, now we’re in this very interesting world that seems more biological, so-called machine learning, where rather than being told what to do, deep neural nets and other closely related technologies are given large amounts of data and essentially program themselves based on that data. There are, of course, similar things in robotics, where robots can learn how to navigate an environment by doing it.
Jim: Unfortunately, there’s still a gap, in that the amount of data necessary to do machine learning is way larger than it is for humans. Humans can learn from a relatively small number of examples. Human cognition, at least this is the Ruttian view for what it’s worth, combines some of both: our perceptual systems and our ability to create things like objects in our mind are very much like machine learning, in that they’re self-organizing in response to stimuli from the external environment.
Jim: However, our cognition is, at least in part, driven by something sort of like the symbolic, in that we’re able to manipulate high-level objects and make inferences between them, with at least implicit probability assumptions about things, without having to have millions and millions of examples. Famously, AlphaZero learned to play chess by itself, but it took 100 million games before it got very good. A seven-year-old, you can teach to play chess, and by their fifth game they’re sort of okay, right? That’s qualitatively different.
Dennis: Yeah, I think the distinction that you have to make with a lot of these machine learning systems too is the fact that the inputs that they’re given are all sequences. In other words, you’re feeding them sequences and they’re figuring out the world from the sequences, rather than how we evolved, which is we figured out the world by moving around in it and most of us dying off, before we got to where we are.
Jim: Yeah, of course, we get to interesting new phenomena, like the self-driving car, right? I mean, this is right at the edge, and part of what self-driving cars do is learn from their environment. Now, it turns out it’s too dangerous and too expensive to let them loose on the roads.
Jim: But Google in particular, through its Waymo subsidiary, has spent vast amounts building a simulated world, apparently at a relatively high level of detail, and their Waymo AIs have gotten much of their learning from interacting with simulated humans, because obviously they have to have lots of simulated humans in there driving cars around to interact with, and they have learned from that “environment.” Then, once they get out on the road, I would hope that they also have learning modules built in, and even better, they can share their learning with each other, unlike humans. Humans share our learning, but very slowly, through things like textbooks. Things like self-driving cars could share their learning in near real time, which gets us closer to a more biological model.
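[Editor’s note: a tiny Python sketch of the “sharing learning in near real time” idea, not anything Waymo actually does: each car keeps a small set of learned model weights, and the fleet periodically averages them, so one car’s experience propagates to the others without waiting for a “textbook.” The numbers and the averaging scheme are purely illustrative.]

```python
# Fleet-level sharing: average each weight position across all cars' models.
def fleet_average(weight_sets):
    n = len(weight_sets)
    size = len(weight_sets[0])
    return [sum(ws[i] for ws in weight_sets) / n for i in range(size)]

# Three cars with slightly different learned weights after local driving.
car_a = [0.10, 0.50, 0.30]
car_b = [0.12, 0.48, 0.35]
car_c = [0.09, 0.52, 0.28]

shared = fleet_average([car_a, car_b, car_c])
print(shared)   # every car can adopt the fleet-averaged weights immediately
```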
Dennis: Well, this is very reminiscent of the work that was being done at SFI back in the ’90s. Was it Chris Langton who was doing the artificial…?
Jim: A-Life.
Dennis: Yeah, A-Life, Artificial Life where we were trying to simulate evolution in a completely silicon-based environment.
Jim: Yeah, it’s very interesting out there. Well, anything else you want to talk about? I’ve got a few other things on my list, but frankly, we hit all the high parts astoundingly in 90 minutes of a book of astounding depth. Again, those of you who found this conversation interesting, I would definitely go and get that book. It’s easy reading. I’m trying to look in my notes for what that goddamn title is.
Dennis: Let me tell you, Jim.
Jim: Yeah. Yeah. Let the author give us the title.
Dennis: It’s Behavior and Culture in One Dimension: Sequences, Affordances and the Evolution of Complexity, published by Routledge.
Jim: Yeah. You don’t need to know nothing to read it, right? That’s the amazing thing about it. That’s where, I think, he did such a fine job. Any final thoughts you want to leave our audience with here?
Dennis: Well, I guess, there’s a lot in here. I would just say that the opportunity to write a book that’s interdisciplinary was a real gift to me. I’m hoping that many of the fields that it touches upon will find it useful, including linguistics and evolutionary theory, philosophy of biology, cognitive science, complexity science, and so forth.
Jim: Well, thank you Dennis Waters for a wonderful and very enjoyable, at least on my part, conversation.
Dennis: It has been terrific talking with you, Jim. Thanks for having me.
Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.