The following is a rough transcript which has not been revised by The Jim Rutt Show or John Krakauer. Please check with us before using any quotations from this transcript. Thank you.
Jim Rutt: Today’s guest is John Krakauer. John is a professor of neurology and neuroscience and director of the Center for the Study of Motor Learning and Brain Repair at Johns Hopkins. He’s also external faculty at the Santa Fe Institute, and he’s director of the Center for Restorative Neurotechnology at the Champalimaud Centre for the Unknown. What a great name—for the unknown. I love it. Welcome, John.
John Krakauer: Thanks for having me. Hi there, Jim.
Jim Rutt: This will be fun. As fairly often on my podcast, I reached out to John after reading an article that he was—I think you were the lead author on. Right?
John Krakauer: Yes.
Jim Rutt: And truthfully, I didn’t even notice that he was the lead author. I know John reasonably well, and I looked up and thought, oh, shit—John Krakauer. So anyway, I reached out and he graciously agreed. The paper is actually quite an old one in this field, 2017. It’s called “Neuroscience Needs Behavior: Correcting a Reductionist Bias.” Before we hop in, why don’t we define for the audience what you mean by behavior and reductionist?
John Krakauer: Well, behavior is a very difficult thing to define. In fact, there was a paper dedicated to asking people to define behavior. But I think that what we mean in the setting of studying neuroscience are, hopefully, ecologically valid actions taken by the animal in order to pursue goals successfully. And then we would like to know what the mechanism inside the nervous system is that explains the successful goal-directed actions of whatever species you’re studying.
Now, behavior can sometimes not be the full manifested action you see in, you know, a documentary about wildlife. So you have to come up with more truncated, more tractable experimental versions of those behaviors in the lab, and then hope that what you discover still pertains to the more naturalistic version of that same behavior in the animal’s natural habitat. So one of the tensions in this paper is when do you come up with a task that activates the nervous system but now no longer bears much relation to what the animal is really doing in its umwelt or habitat. That’s one of the things we touch upon.
Jim Rutt: Well, let’s talk a little bit about reductionism and your quite strong arguments there.
John Krakauer: Well, reductionism has a huge philosophical and scientific literature associated with it. As you know, the Santa Fe Institute is essentially allergic to the idea that everything can be explained by going down to the level of component parts. Otherwise, every discipline that was not physics should ultimately disappear once we can understand the physics well enough. And of course that’s extreme, but you could say that even though no one thinks that everything should reduce to physics, there are some neuroscientists who think that ultimately there’ll be a neural story for all animal behavior. The case we’re making is that just like subatomic particles aren’t the ultimate explanatory object, neither are neurons or neural circuits.
Jim Rutt: Early on, you talk about behavioral decomposition being epistemically prior to neural investigations. Epistemically is pretty heavy artillery. What are you saying here?
John Krakauer: Well, we’re just saying that if you want to ultimately get to the neural circuit implementational basis of behaviors, you better have a good idea of what that behavior is and its component parts, so that when you do start looking for neural correlates, you know you’re correlating with the right things. In other words, if you’re running—your legs are moving, your arms are moving, you’re breathing, your heart is beating—all sorts of things are happening. Now if you want to understand how your spinal cord is related to running, then you really need to make sure that you’re not finding correlates for all those other extraneous things that are going along at the same time. And then on the other side, maybe walking on a treadmill or running on a treadmill misses what it’s like to run over ground. In other words, there are so many components of the behavior, and you need to decide a priori which ones are the most relevant for your question and which ones you can discard.
Jim Rutt: That is always the challenge when you come up with an experiment—what can you leave out, and also what do you need to leave out to make the problem tractable?
John Krakauer: That’s right. So in other words, if you just go in blind and just record from neurons and hope that you can reverse engineer what they’re up to from their own activity without anchoring them on some conceptualization of the behavior they’re responsible for, you’re going to be fishing for a needle in a haystack.
Jim Rutt: So let’s go to your running example. I actually use bipedal running a fair bit in my own exploratory work, because it is something that appears to be unique to humans. It’s a very late evolutionary solution to a problem within mammals, I should say. Obviously, birds have solved that one, as did dinosaurs, but we’re the only mammal that does bipedal running at reasonable velocity. So let’s use that as an example. Just give rough examples of the kind of decomposition of the behavior that you’re pointing at.
John Krakauer: Classic experiments going all the way back to Sherrington were interested in looking at walking and running on a treadmill in a cat where you could actually have the cat be spinalized with a thoracic lesion—or sometimes further up in the brainstem—and still see good locomotor behavior in the hind legs of the cat. So you could say, look, walking and trotting seem to be controllable just with your hind legs controlled by your spinal cord unconnected to the rest of your brain. But then you could ask, well, what about balance? What about running over uneven terrain? What about when you have to switch between running and walking fast? In other words, it is true that there is some subset of the behavior that can be isolated to the spinal cord, and then there are many components that you need to go further up the neural axis to include to explain the behavior.
So there’s a tradition in neuroscience going all the way back to Sherrington that isolates the simple fragments and components and then works its way up. It’s too difficult to attack the whole behavior. That’s a good strategy—decompose, simplify, and then hope that you can reassemble all the pieces later. And also, it’s technologically challenging. I mean, we’re getting better and better at recording from the brain and elsewhere during more ecological behaviors, from the rodent to the monkey to the human. So there’s a difference between the scientific philosophy behind decomposition and the technological challenges of free behavior. That has been difficult, and now we’re getting better and better with markerless tracking, electrodes in the brain that don’t need to be hooked up to a power supply, things like that.
Jim Rutt: Gotcha. That kind of decomposition—how does that fit into your argument about too much reductionism?
John Krakauer: Well, again, what we’re sort of saying is, when you ask the question, what is the neural basis for a behavior, what are you actually asking? What do you want the sentence uttered to be? In other words, if you say, I want to know what the neural basis of the stretch reflex is—in other words, when a neurologist bangs my knee and my knee jerks—why is that? And you can actually draw a diagram as to how that works. There are sensory fibers in specialized muscle cells that respond to stretch of the muscle, and then they send signals into the spinal cord, which then synapse with motor neurons, which then come out and lead to a corrective contraction of the muscle. That’s what it is. So you can draw a diagram on a whiteboard, and you go, ah, that’s how that works when somebody taps my knee with a tendon hammer.
Now, what’s the equivalent diagram to explain why you understand the word justice? What will the diagram look like? And I think the Santa Fe view is that the coarse-grained level where you go, I get it, will not always be a neural circuit diagram. The neuroscience project has been that you should have a comprehensible diagram containing neurons, which gives you a moment of comprehension of the behavior like you do for the stretch reflex. And what we’re arguing is that’s not going to happen. The reductionist view is that there should always be a neural circuit explanation or some kind of neural explanation for every behavior that one can observe.
Jim Rutt: Now, of course, the opposite of reductionism is an emergence view. Are you saying that in neuroscience, the concept of emergence doesn’t really have much traction yet?
John Krakauer: That’s true. I mean, they hate that word in general—not all of them. But they think emergence is sort of a spooky, magical term. In other words, they haven’t really gotten the memo from what physicists have been saying or places like the Santa Fe Institute have been saying. They don’t really want to look at that literature. They just think that people within neuroscience who claim emergence are just not hardcore enough, just not rigorous enough.
And it’s a subtle concept. David, obviously, who has written extensively about it, really considers it more about explanatory frameworks, about autonomous screening off of explanations. He likes to use the example that the best moves in Go are explained by understanding the rules of Go. Fermat’s Last Theorem was solved with the rules of mathematics—they come from within those rules. You don’t need to know anything about Andrew Wiles’ brain to understand how he proved Fermat’s Last Theorem, or the atomic structure of the pen and paper that he used to do it. The truth of Fermat’s Last Theorem is entirely within the realm of mathematics. So David is very interested in these cases where you can screen off protectorates of explanation with the right coarse-grained variable. That’s where emergence comes in on that side.
And then there’s emergence on the other side of the coin, which is when you have phenomena that are best explained through interactions between objects in their aggregation rather than the individual components. Like the famous example, which I’m sure you know, is that a single water molecule is not wet. And all the way up to Wittgenstein’s—or David Marr’s, I think—saying that wings don’t fly, birds do.
Jim Rutt: Yeah. That’s a different example, actually.
John Krakauer: They’re all basically not trying to attribute a property of the whole as already present in some way in the parts.
Jim Rutt: Yeah. The example I use that I find people can get pretty easily is the concept of a traffic jam. Cars have various behavioral aspects based on the driver and the mechanics of the car, and in general they have x degrees of freedom, etcetera. But when a traffic jam forms, those degrees of freedom are massively reduced until the traffic jam dissipates, and yet no laws of physics are being violated. So there really is downward causality from the traffic jam to the degrees of freedom—how fast you can drive, etcetera—and it’s as real as an electron.
John Krakauer: That’s a very nice example too. I think it’s just very difficult for neuroscientists to not think that the same experimental approach, the same theoretical approach, the same explanatory approach should apply as you go up the neural axis. Sherrington himself started in the soleus muscle of the decerebrate cat to study the reflex because he thought that this was much more tractable, and he himself said in various other publications that he wasn’t sure we would ever understand how thought was generated by the cortex.
Now I’m not saying that one isn’t going to get increasing progress on that score. It’s just that the nature of the explanation will not satisfy. You might just have to accept that you know the necessary and sufficient conditions to have this part of the brain intact in order to do this behavior, but it won’t feel explanatory the way the stretch reflex I described is. People don’t really like that because it sounds a little bit like the meaning of the universe being 42. It doesn’t sound like an explanation.
Jim Rutt: And there are a lot of people kind of stuck at that level when thinking about things like consciousness. You probably don’t know, but I am the chairman emeritus of the California Institute for Machine Consciousness. And we have a research program that we claim will someday—and I mean, we may all be dead by then—but someday actually explain consciousness in a way that honors both reductionism and emergence simultaneously. In fact, one of my contributions is the realization that the tree of causality since the Big Bang is precisely congruent with the tree of emergence. The two have to be in very close alignment and in fact never contradict each other, which gives me the sense that that’s how we’ll get to a rigorous explanation of things like thought and aesthetics and qualia.
John Krakauer: I just don’t know what a sentence uttered will sound like for you to feel like you go, that’s how it works. I just don’t have the imagination to even know what that would sound like.
Jim Rutt: Yeah. None of us have the understanding yet to even attempt to craft that sentence, and it’ll be a while before we’ve worked out the theory.
John Krakauer: And it might well be that there might not be one. What if you just have to know the conditions necessary for it to manifest, but then when you talk about it, you talk about it at its own level, just like you talk about Fermat’s Last Theorem at the level of mathematics. You just live at that level, know that it isn’t inconsistent with lower levels, but don’t expect to hear a compressed explanation at lower levels that conveys the same level of understanding as you get from within level.
Jim Rutt: So in other words, what if you just have to live with that? We used to think we had to live with that with respect to life—we had élan vital, you know, the sort of magical explanation.
John Krakauer: I mean, you know, I speak with a lot of people at the Santa Fe Institute. We don’t have a definition of life. People argue about it. Should you have a definition, or should you just have a list of features? And as far as I can tell, there is still no agreement on that score. There’s a rough agreement. I feel, you know, there are many things in science that remain forever fuzzy. We don’t know what—in philosophy, people have long since given up on coming up with a watertight definition of the notion of truth. And in other areas, when we say this work of art is good, we don’t have any definitive way of saying a priori when a work of art is going to be good. So maybe we just have to accept that there are terms that we use in a fuzzy fashion. We kind of know what we’re talking about within the context that we’re all speaking, but we’re never going to nail it down. And maybe that’s okay.
Jim Rutt: So that’s a good pivot. If we take that approach and think about the problem of neural reductionism with respect to behavior, are they trying to do the same thing—trying to make something too precise beyond the scope of the explanatory support?
John Krakauer: No. I mean, I think, again, I’m not trying to curtail any kind of work at any level of granularity. The nervous system has been studied in three ways, really. One way is it’s just been treated like any other organ. You can study the kidney, the liver, the lungs, the heart, and we’ve learned tremendous things about the physiology and anatomy of those organs. We can do exactly the same thing with the nervous system. It’s just another organ that is amenable to the same kind of biological research that all the other organs are, and that’s correct—it’s an organ.
Another way it’s been studied is in neuropsychology, mainly in humans. You basically look at this system working, and then patients get lesions, and then you see why it broke in the particular way it did, and then you hope that you can infer what those structures do in health. That’s another approach.
And then beginning in the twentieth century, as you very well know, computer science and Turing and others came along, and then there was a kind of computational information processing view of the nervous system. So you basically had almost three traditions running along in parallel: the organ view, the neuropsychological view of the brain, and the information processing view of the brain. And the fact of the matter is that that triad of approaches has never really gotten along with each other. There are still tensions and contradictions between them.
One way that they manifest to this day is that there are psychological explanations and there are neuroscientific explanations. And as much as some neuroscientists might hope that one day psychology can just be explained away by the more rigorous neuroscience, if we believe in the notions of emergence as you were discussing, then that collapse will never occur.
Now if you want to do the kind of science that you do on kidneys and livers on the nervous system, that’s great. There’s a tremendous amount of work being done on the nervous system as an organ in all its beautiful, intricate anatomical and physiological detail. The question is how that maps onto psychological behavior and other behaviors—movement, perception. And that’s hard, because some people will not find it sufficient to simply correlate neural activity at one level with behaviorally measured activity at another and go, we’re done. People want more than that. They want explanations like the stretch reflex I gave you—ah, I see the diagram for that behavior in the language of neurons. So I think people want diagrams or sentences in their head in the language of neurons which immediately give them the intuition for the behavior, and that’s where we cast some doubt.
And I don’t know if that would be true of other organs like the liver or the lungs. I mean, an alveolus in the lung is not breathing. A single renal cell is not necessarily excreting. So these are very tough issues. Now some scientists will go, oh, this is all far too philosophical. We’re just going to get along with our work.
Jim Rutt: Often the right answer in practice.
John Krakauer: Although what we’ve written in more recent papers is what you want to try and avoid is using the coarse-grained language of the behavior and then reintroducing that language surreptitiously into the neural data. An example I like to give that infuriates people sometimes is we know that the motor cortex is responsible for movement to some degree, but we do not say that the motor cortex itself is moving. And the more psychological the behavior, the more cognitive the behavior, one tends to start to use the language at the psychological level and use it—sort of double dip with the same language—in the neural data. The most egregious version of that is the word representation. The word representation is used interchangeably at the psychological and the neuroscientific level, to much confusion in my view.
Jim Rutt: That actually is a good next door neighbor to jump over to something that, for whatever reason, I’d never been exposed to. But as soon as I read your words, I thought, wow, this is both true and interesting—which is this concept of filler verbs.
John Krakauer: Yeah. I think that led to some people being infuriated as well. Most simply, you find a relationship between a neuron firing—whether it’s either in response to a stimulus or it correlates with an action, a movement—and you should simply say, we found a correlation between neural activity and behavior. But that doesn’t scratch the itch. It doesn’t sound like an explanation. So what you do is you add a little bit of surplus meaning to the correlation by saying involves, or regulates, or underlies, or produces. In other words, these words have more explanatory oomph than just saying correlates. But we called them in the paper filler verbs because they’re actually doing no extra work.
Jim Rutt: But they appear to be. I love this. I guess this was actually something new—I don’t know if this is a commonly used term of art, but it was the first time I’d ever seen it, and I thought, shit, yeah, I’ve seen that in lots of other analogous ways.
John Krakauer: Right. It’s sort of pseudo-explanatory. You haven’t said anything extra, but it sounds as though you have.
Jim Rutt: And you pointed out that that’s stylistically what a lot of LLM text does—embellish simple concepts with fuzzier versions of the same concept that sound more pretentious but add no actual content.
John Krakauer: Sure. There are many versions of this, and you hear it a lot. There are all sorts of extraordinarily annoying versions of this, like, oh, I’m not very happy at the moment—my dopamine levels must be low. Or, I’m in love with my child because oxytocin levels are high. This absurd kind of correlating a state with a molecule, but not realizing that the word doing all the work is something like love or happy. It’s like the reverse example where you’re still very much relying on the crutch of that filler word in order to convey meaning.
Now in that case, at least it’s kind of true that love does more work than oxytocin. But there are papers that do this all the time, equating the global feeling with its correlate and then making them have identity. In philosophy, it’s called the identity fallacy. The most famous is: pain is just C-fiber firing. But that can’t be right, because if I say C-fiber firing to you, it lacks the surplus meaning of pain, which certainly exists.
Jim Rutt: Put your thumb on the hot charcoal. You’ll know it’s something different than some abstraction.
John Krakauer: Right. These things are all kind of related, and I think they all attest to the thirst we have as humans for compressed explanations at some level of coarse graining. Science is a human endeavor. It has to be comprehensible to humans. And of course, now we’re entering an era where people are saying, no, that era is over. Science can now be done by AI, and it can be done in a way that does not need to be hindered or restrained by the need for constrained, comprehensible explanations. It’s fascinating.
Jim Rutt: Well, that may turn out to be true, and we’re right on the edge. I use AI a fair bit in scientific investigation, and they’re not quite there yet. They’re very, very good helpers, kind of like the equivalent of a third-year PhD student or something. But it feels to me that by the fall of this year, the frontier models will be at the PI level.
John Krakauer: Well, I mean, David and I have spoken a lot about this. I’m writing a paper with a team—we all met at SFI—on LLMs. I was talking with Rafael Millière, who is a brilliant combination of philosopher and computer scientist. He’s kind of a joy to talk to about all this. We were looking at some of our data together, and we were getting Claude to provide some potential interpretations of our results. And we both had to admit—he was much more used to this than I am—it was unbelievably impressive how plausible and logical the suggestions being made about our data were. The irony, of course, is the paper is about the cognitive failings of large language models. So you had this wonderful self-reflexive moment of impressive insight into disappointing failure on the part of the same technology.
Jim Rutt: Of course, that’s one of the beauties of LLMs. They don’t take it personally. I do the same thing. I point out the logical flaws in their arguments, and they go, yes, you’re correct, actually. No defensiveness.
John Krakauer: It gets a bit addictive. I was admitting to David that I get into these protracted arguments with them, and it’s highly enjoyable. It certainly helps hone your thoughts on things.
Jim Rutt: Yeah. It’s like going to the gym and learning to box. It’s a really good sparring partner.
John Krakauer: It is a sparring partner, and it gets back to your point that, to the degree that other scientists, postdocs, and grad students are kind of sparring partners, it may get to the awkward situation where these AIs are more fruitful sparring partners for the advancement of your own intuitions, hypotheses, and ideas than fellow humans.
Jim Rutt: We could very well get there. We’re very close in my estimation.
John Krakauer: Not quite, but probably less than a year away, at least in some fields. Everyone says that something changed in December and January of this year—there was some step change in the competence of these models, especially with respect to their coding ability and other such things. So given the rate of progress, if you’re Luke Skywalker and AI is your R2-D2, and you’re doing science together, it could well be that an AI companion accelerates you further along as a human. I don’t consider that implausible.
Jim Rutt: Nope. I think it’s inevitable. In fact, it’s already there. Anyone who’s not using these LLMs at least as sounding boards and as error checkers and fact checkers is making a big mistake, because they have their place. The question is what is their place in the work that you’re doing?
John Krakauer: I think that’s right. David was telling me that in terms of getting them to give you a critical view of your own work, you basically tell it, be very critical of what I’m about to input here and see where all the weaknesses are. It may well be faster and more thorough than asking a friend to read the paper over—or a co-author.
Jim Rutt: Oh, there’s no doubt about it. Particularly, each model has its strengths and its personalities. ChatGPT 4.5 Pro in deep research mode with a prompt that says, brutally examine this paper, fact check it, look for the holes—it’ll do an amazing job. It’s actually pretty slow; it might take half an hour. But the probability of getting your friend to read your 16-page paper and give you a written response—it’s going to take a hell of a lot longer than thirty minutes, and it’s probably not going to be as good.
John Krakauer: You’re catching me in the middle of all this happening. I don’t know what to think yet. I feel a certain vertigo considering all this. We’re at this very strange moment, aren’t we, Jim? It’s just unclear how things are going to unfold.
Jim Rutt: Yeah. It’s a hinge of history.
John Krakauer: That’s a good way of putting it. It’s a hinge of history. And for me, I don’t have any intention to retire, but I could. I was speaking to Rafael Millière about this—if I were to feed in some of the questions I’m working on right now and some of our data, and it came up with not only better intuitions as to what the data mean, but also suggested better experiments, then I might just go, okay, try to do something else.
Jim Rutt: Or you can think of yourself as a circus lion tamer.
John Krakauer: Yeah, that came up too. It’s a fantastic moment.
Jim Rutt: It really is. I mean, it’s probably comparable at least to the development of heat engines and fossil fuels, and it may even be stronger than that.
John Krakauer: And, you know, adding to this paper about neuroscience needs behavior—one of the fascinating things is we have a new creature in town. These foundation models are amenable to psychological and neuroscientific investigation. Interpretable AI—you can actually go and look at what’s going on inside these neural networks. Melanie Mitchell is doing it, Rafael Millière is doing it, others too. So suddenly the irony just multiplies. We’ve got a new creature to do neuroscience and psychology on.
Jim Rutt: Yeah. And in fact, that was my motivation to accept the offer to get involved with the California Institute for Machine Consciousness. Some amount of my work over the last ten years has been on trying to understand consciousness. I actually even built a rudimentary artificial consciousness—a white-tailed deer. And the reason I’ve been motivated this way is that it’s an extraordinarily difficult topic to investigate in live animals, and particularly in humans. The ethics will not let us do what we might like to do. But if we have an artificial consciousness, we can basically instrument the hell out of it and find out what it’s actually doing to implement its subjective behavior.
John Krakauer: Yeah. I struggle a great deal with this distinction between consciousness and cognition, and I think they’re distinct.
Jim Rutt: Very distinct. Sentience, intelligence, cognition—they overlap, but none of them are the same as the other one. A bacterium is intelligent, but it’s not conscious, at least I would say.
John Krakauer: And there may be particular forms of intelligence that are consciousness-requiring. In other words, you won’t manifest the intelligence without it. Just like feeling pain versus an aversive reflex—feeling pain gives you a more flexible behavior than just an aversive local reflex does. So there are benefits to the behaviors enabled by differing degrees of consciousness. If there are degrees of consciousness—that’s another issue.
Jim Rutt: I’ll just give you another little working tool. I think it’s important to separate the machinery of consciousness from the contents of consciousness. There is some sensorium which integrates multiple modalities of perceiving the world, and those objects, etcetera, are the contents of consciousness and memory. Different animals have different conscious contents. Nagel’s famous paper, “What Is It Like to Be a Bat?”—bats have echolocation as their number one sensory modality, and that is almost certainly represented in their consciousness as a class of object which they cogitate on. And for humans, our special conscious content is words. So you’re talking about different kinds of consciousness. It’s probably a pretty direct road from having the ability to manipulate words and symbols to being able to manipulate axiomatic systems like mathematics.
John Krakauer: Yeah. This is all where the biology of the nervous system as an organ sort of crosses over into thinking about it as an information processing system with syntax and semantics. I think a lot of neuroscientists would say, look, I’m not going to talk about consciousness or content or language—these are all human-centric views. Evolution has given us a nervous system very similar to the one in other animals. So let’s just work out the biology of nervous systems before we have to worry about these high-level concepts, and then maybe it will just fall out in the wash. I think some people feel that way.
Jim Rutt: Cowards, I would say.
John Krakauer: Yeah. But they’ll just say, let’s work on the mouse. It’s a mammal. It has perceptual consciousness. It feels pain. It’s ninety to a hundred million years away from us—that’s nothing compared to the billions of years of life’s evolution before that. And these are really tricky issues. I think a lot of people working on nonhuman biological systems that are more tractable mechanistically will say that we’re not going to be distracted by human centrism or philosophy.
Jim Rutt: And again, let a thousand flowers bloom. In reality, they will all provide interesting bits and pieces for the next synthesis.
John Krakauer: Yeah. I think so. To the degree that philosophy can help with a little bit of logic hygiene and iron out wrinkles, as Quine put it—paradoxes and inconsistencies—so that you do the best work you can without making errors, that’s useful.
Jim Rutt: Peirce. I love his definition of philosophy, which is making our ideas clear.
John Krakauer: A lot of philosophers object to this Quinean, Peircean view that it’s really about intellectual hygiene. They feel like philosophy has its own contributions to make independent of just cleaning up other people’s scientific contributions. But I think that is a part of it, no doubt.
Jim Rutt: Alright. Let’s get back to the paper. The next topic is that you point out multiple realizability between nervous systems and behavior. Why don’t you explain to the audience what multiple realizability means in this context and its significance in your argument?
John Krakauer: Well, multiple realizability has been around—I think it was Putnam and colleagues that came up with it originally—which is just saying that there’s a problem about trying to map behavior onto neural mechanisms because there might be many neural mechanisms that could be equally compatible with the behavior. So you have this many-to-one problem. But in our case, we were being a little bit more local and pragmatic about it, just saying, look, there are a number of ways that you can get confused about this mapping between neural activity and behavior.
You might see patterns of activity in the nervous system that relate to an artificial behavior that you do in the lab. Analogous to: I could use my iPhone to hammer a nail. Now that’s not what my iPhone was for, and it’s not a particularly good use of it, but I can do it. And so analogously, you might evoke neural activity with a very unnatural behavior, but it doesn’t give you much insight into what those neurons are really for, nor into the natural behaviors they actually support.
There’s also the problem that not all neural activity that you see relates to any behavior that is natural. You might see one neuron firing in multiple behaviors. A cortical motor neuron might be involved in me moving my finger like this, and making a fist, and moving my arm. And then you go, well, how can it be responsible for such different behaviors? So really, multiple realizability was more like a blanket term for all the ways that you can get confused about the mapping between neural activity patterns and behaviors. And once you get one-to-many and many-to-one mappings, ambiguity sets in, and you have to find ways to resolve that ambiguity.
Jim Rutt: You said that multiple realizability is pervasive and fatal to naive reductionism.
John Krakauer: Well, yes. Just because you might say, oh, here’s this behavior, and this neural activity is necessary and sufficient for it, not knowing that there’s a whole null space of other ways to get that to happen. And you can’t necessarily, in an experiment, exhaust all the potential ways. You can’t even know all the potential ways. So it gets really tricky about how these kinds of fields proceed at all.
Jim Rutt: It strikes me that this is an example of a need for philosophy—for people to tune up their epistemic humility and really understand what they can claim and what they can’t. They can say this configuration of neural stimulations causes a leg to jerk or something. But they shouldn’t say, based on your work and others’, that this is the only way to get the leg to jerk. So getting people to be more careful about what they claim. What you want is convergence of evidence from multiple types of experiment at multiple levels.
John Krakauer: But these are really tricky things. I’ve been studying motor learning and motor skill in the lab for a long time. You do something very simple in the lab, and you call it motor learning and skill, and you hope that it relates to what it means to be good at tennis or soccer or violin playing. But how do you know that your reduced task and the neural correlates of your reduced task relate to the more naturalistic global behavior?
Neuroscience talks always do this. They’ll show Alcaraz playing tennis these days, and they go, look at skill, what an impressive thing. And then they’ll go to the mouse doing something far simpler, like a rotarod or something like that, and go, here’s a reduced system to explain that skill. And how do we know that that behavior in a mouse that is simpler has anything to say about a complex task like tennis in a human? That is a really, really difficult induction. There are all sorts of these sticky situations once you get into neuroscience.
Jim Rutt: Interesting. The other claim that you make is that neuroscience could benefit from taking Marr’s three levels more seriously. Talk about that a little bit.
John Krakauer: People get really frustrated about this, and I think some people get very annoyed by Marr because it presupposes the computational theory of mind and brain, and some people don’t like that at all. In other words, the heart isn’t computing. The lungs are not computing. The immune system isn’t computing. So why should we say the nervous system is?
But let’s just put that aside and take the computational, algorithmic, and implementation levels as given. All we were really trying to say there is that it’s often easier to get an algorithmic dissection of a behavior first, and then go and look for its neural correlates. We were simply saying that if you can decompose a task—for example, we’ve done studies on learning, where you have a learning rate and a retention factor, so you can sort of predict behavior with those kinds of parameters—you have this compressed algorithmic description that does a good job of providing insight into the behavior. And then you go, okay, now that we have that compressed algorithmic description of the behavior, we have the tools to go looking for its neural correlates.
So for us in this paper, Marr was really about looking at an algorithmic dissection of behavior as a launch pad for subsequent neural correlates. For example, in the paper, we have sound localization. The computation you need to do is: where’s the sound coming from? The algorithmic way of doing it can be either spatial or temporal. So you basically go, here’s what I need to do—I need to localize the sound. Is it coming from the left or the right? There are these two algorithmic approaches to determining that. Once you’ve decided which algorithmic approach is being used, then you can go into the brain of an animal—whether it’s a reptile or a bird or whatever—and break the tie with respect to which algorithm is being used by looking at neural data, and then also maybe get more neural mechanistic insight.
So we were simply saying the sequence of investigation is: what is the goal of the behavior? How is it being solved algorithmically? And then let’s go looking for the neural correlates, the implementational level of Marr. Whereas we feel like there aren’t that many good examples in neuroscience where you reverse engineer the algorithm and the computation by going first and investigating the neural tissue. It was more about the direction of the scientific project.
Jim Rutt: Yeah. So understanding what the behavior consists of before you try to figure out what the neural upstream correlates are.
John Krakauer: Yes. In other words, it just seemed that that was a way to sort of handle the space of possibilities and have a clue—it leaves you a trail of where to go. That was basically the point we were making. Rather than trying to over-egg the pudding with respect to Marr’s three levels, it was more that it was a framework that had been previously introduced to make this point that I’m making now.
Jim Rutt: Alright. Let’s move on to another topic. You make a pretty strong claim that another failure of neuroscience’s practice today is the mereological fallacy. First, explain to the audience what that is, and then give an example. Mirror neurons is a good example. Well, or maybe give a better one if you’ve got it.
John Krakauer: The mereological fallacy is more what we were saying before—a part of the brain doesn’t feel pain. You, as a whole individual, feel pain. So if you have a patient with a spinal cord transection and you stick a pin in their foot and they withdraw their foot—which they do, it’s a reflex—you don’t say that the foot was feeling pain. In fact, in that case, they don’t. A patient will not feel pain if they have a spinal cord transection, and they’ll nevertheless withdraw their toe. So the idea is—and it goes back to what I said before—wings don’t fly, birds do. The mereological fallacy is to attribute a holistic psychological notion to a part.
Another version: alveoli in your lungs are not breathing. You are. So now the mirror neuron thing is a little bit different. Yes, it’s mereologically fallacious to say that the mirror neuron itself is understanding the action that is being observed by the whole organism within which those mirror neurons are lodged. That language is everywhere. Oh, the mirror neuron fired when the monkey saw me smile. You know, the same neurons fire when a monkey picks up an apple as when it watches another monkey pick up an apple, and therefore, supposedly, these neurons infer the intention of the other monkey. And this is all nonsense. It’s much more complex than that.
Jim Rutt: To me, that’s a classic example of overreduction.
John Krakauer: Yeah. It’s guilty of all the crimes. It’s attributing to a single neuron a psychological property that only really applies to the whole. It’s prematurely giving neurons a psychological role before considering whether they could be doing something simpler that doesn’t need it. Mirror neurons could almost be considered ground zero for all these kinds of problems—the mereological fallacy, reductionism, not considering simpler explanations, over-psychologizing, etcetera. But they’ve gone out of favor, really. There was a time when they were very hot. I have a friend at a cognitive science journal who told me that at one point, a third to a half of the papers that were submitted were about mirror neurons. But I think that time has come and gone.
Jim Rutt: I thought a very interesting and somewhat surprising story was the story about what was actually going on with bradykinesia.
John Krakauer: Yeah. That’s a little bit self-serving. It turns out that Parkinson’s patients have this triad of symptoms in their upper limb. They have tremor, they have rigidity, and they have bradykinesia, which is that they move slowly. I was just at a memorial in New York on Saturday for my colleague Pietro Mazzoni. We had a lab that we ran together at Columbia before I moved to Hopkins, and we wrote a series of quite influential papers together.
One of the papers that we wrote was called “Why Don’t We Move Faster?” In other words, we wanted to understand why all of us pick a tempo for our movements. If you were sitting at a dinner table and asked everyone to reach for a glass of water, they would all pick a similar tempo even though they could go a lot more slowly and pick it up, or they could quickly reach it and still succeed in picking it up. And then the idea was that Parkinson’s patients have just picked a slower tempo—not because they have to go slowly, as if going faster would make them inaccurate. In other words, it was not a compensatory slowing.
What Pietro Mazzoni and I said in that paper, along with a fellow at the time, Anna Hristova, was that Parkinson’s patients move more slowly because they want to move more slowly. Now that whole experiment and that alternative conclusion would have been much harder to do in a nonhuman animal. So it was an example of a very carefully designed behavioral experiment that Anna and Pietro designed to get an alternative hypothesis about this cardinal symptom in Parkinson’s disease. And it has kind of borne out that there was a high-level motivational reason for slowed movement. It wasn’t because motor control itself was abnormal. We’re very proud of that paper because it showed how a very carefully designed behavioral experiment with an either-or hypothesis can yield real insight, which can then go on to be examined neurally.
Jim Rutt: That was a quite surprising result. That hypothesis wasn’t even in the air, I don’t believe.
John Krakauer: No, not really. There was a belief that there was some more low-level motor control explanation for why Parkinson’s patients had these problems, rather than it being a kind of implicit, high-level motor cognitive interpretation. Certainly, a lot of people responded positively. You know, in science in general, Jim, there are ideas that are sort of pregnant at a given time. They’re in the air. People are on the cusp of making similar points. We don’t really know why suddenly these things occur across a series of people in labs. So there may have been some of that, but it was more implicit. I think we were the ones who, at least in human Parkinson’s patients, made it very explicit.
Jim Rutt: Really enjoyed that story. I thought that was good science.
John Krakauer: I think so. I can’t be fully objective about that, but I think that Pietro and Anna really showed what a good psychophysical experiment can reveal conceptually.
Jim Rutt: Another thing I did not know—it’s one of the things I love about doing these shows, I always learn something—you made the point that C. elegans’ nervous system is now fully known. We have a complete mapping of it—all 302 neurons, the full connectome—and yet we can’t, in any strong way, predict the behavior of C. elegans.
John Krakauer: Yeah. I mean, I’m sure I’ll get all sorts of hate emails about that. I think there’s a difference between having all that information and doing hypothesis-free inference on it, rather than having that information as a library or a database so that when you now do a more focal, local experiment where you do have either-or hypotheses, you can then go and look in your library, look at your connectome, look at the cell types. And again, it’s very similar to what we were saying before—do the algorithmic dissection of the particular behavior, and then go to the neural implementation. Obviously, if you’re interested in the neural implementation, you’re going to be in a much better position if you’ve already got all this neural information ready for you to peruse in terms of the connectome, in terms of cell types, and the like. It’s a great thing to have.
What we’re objecting to is somehow that the connectome itself—looking at it, like reading tea leaves—is going to give you explanations. Now some people will say that if they look at particular motifs in the connectome and particular areas, they will have hypotheses just by having it to look at. Maybe. I’m not fully convinced that you’re simply going to have all sorts of algorithmic hypotheses generated by looking at all this neural detail.
It’s like looking at a Rube Goldberg machine. You go, what’s this Rube Goldberg machine going to do? It’s very laborious to try and work through the sequence of causal chain movements to work out what it does. And in the case of a Rube Goldberg machine, I think it’s helpful to know what its output is to sort of infer how all the parts combine to get that. So it was more that—detail alone, in the absence of more focal compressed hypotheses, makes it very difficult to reverse engineer those hypotheses.
And imagine trying to do the same thing with fruit flies, which we now have a complete connectome for. And we talked about a famous paper here—the Jonas and Kording “Could a Neuroscientist Understand a Microprocessor?” paper. When it came out in 2017, the same year as ours, it led to a huge uproar from the neuroscience community. Because they basically said, you have a very simple diagram of explanation about how this microprocessor works. And basically, if you were to do all sorts of neuroscience on the microprocessor while it activated during some very simple behaviors—fragments of three video games—would you be able to infer from the neural data the simple diagram that everyone understands of how a microprocessor works? And the answer they came to was no.
They basically said, if you didn’t know the fetch-decode-execute structure of a microprocessor—which any undergraduate can draw—could you get that diagram back just by recording from the microprocessor performing a simplistic behavior? And the answer was probably not. And C. elegans is even more complicated than that, and so is, as you say, a fly. I think it’s much more helpful to say it’s a database, a library, a great store of useful information that you can go back to once you’ve done your hypothesis-based algorithmic dissection, and then go look at that rich information.
Jim Rutt: Very good. Alright. Last question, and then we’ll let you go, young fellow—which is, what advice would you give to an early career investigator who is working in this boundary line between behavior and neuroscience?
John Krakauer: I mean, I think the advice I would give is that science—and I’m very old fashioned about this—is about observations that lead you to be puzzled, and trying to design experiments to explain those observations. Very old fashioned. I think what’s happened is that young scientists now are so enamored by becoming highly skilled at modeling, coding, and technology that they feel like science has become this new beast, which is sort of developing quantitative, predictive, model-free discovery approaches, where this more old-fashioned observe, ponder, hypothesize, experiment approach is going a little bit out of fashion—just like the humanities.
So my advice would be: don’t lose that version of science to the massive-data, predictive version that’s coming along. Now some people might say, John, that era of science is over. Stop being obsessed with observations and hypotheses and either-or explanations, and get on the bandwagon of technology, which will do your science for you. My advice would be: stay interested in observing the world and hypothesizing about it. But you know this, right, Jim? That does sound quite old fashioned.
Jim Rutt: But a contrarian view often works.
John Krakauer: The fact that it’s contrarian depresses me a little bit. But that would be my advice—at the very least, I would tell young people, be aware of different versions of the scientific enterprise, and do not let the technological, methodological tail wag the question-asking dog.
Jim Rutt: Yeah. And the idea of theory—do not abandon theory.
John Krakauer: Yeah. And theory is a broader notion than that. Something can still be called a theory even if it isn’t yet a mathematical formalism.
Jim Rutt: Alright. John Krakauer, an amazingly interesting conversation. Thanks for coming on The Jim Rutt Show.
John Krakauer: Thank you, Jim. It was a real honor, and I hope people enjoy it.
Jim Rutt: Take care. I think they will. The Jim Rutt Show is produced by Andrew Blevins. Audio and video edited by Stefan Lowe. Music by Tom Mueller at modernspacemusic.com.
