Transcript of EP 178 – Anil Seth on A New Science of Consciousness

The following is a rough transcript which has not been revised by The Jim Rutt Show or Anil Seth. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Anil Seth. Anil’s a professor of Cognitive and Computational Neuroscience at the University of Sussex, and he’s co-director of the Canadian Institute for Advanced Research program on Brain, Mind, and Consciousness. And he is the editor-in-chief of the journal Neuroscience of Consciousness.

Welcome, Anil.

Anil: Hi, Jim. Thanks for having me.

Jim: Hey, I’m really looking forward to this. Today, we’ll be talking about Anil’s book Being You: A New Science of Consciousness. And as regular listeners know, the science of consciousness is one of my top interests. In the past, we’ve had on top thinkers in the domain, including Christof Koch, Bernard Baars, Antonio Damasio, Emery Brown, and Joscha Bach. You can find those episodes, if you want to catch up on that stuff, with the search function at

So, we’re going to hop into Anil’s ideas here, and there’s a bunch of good ones. So you start off actually with an interesting little story thinking about being you and anesthesia. Why don’t you go with that?

Anil: It’s an account of my non-experience of having general anesthesia. I found it one of the most interesting episodes in my life. I have to choose the words very carefully here because the whole thing about anesthesia is that, there is a total absence of experience. It’s not the experience of absence, it’s the absence of any kind of experience. It’s oblivion. And it’s the closest we’ll ever get in our normal lives to the oblivion that is almost certainly awaiting us after we die, and it was there before we were born.

And if you’re in any doubt about the fragility and the importance of consciousness, anesthesia’s a very good reminder of that importance because things just go away. You are not there. And it’s very different from being asleep. When you’re asleep, and the anesthesiologist will often say, it’s like, “Well, you’ll just go to sleep for a while. And then you’ll wake up and we’ll have done whatever it is we need to do.” But anesthesia’s not like that at all. You’re in a fundamentally different state. You could have been out.

Well, I certainly remember this most recent episode of general anesthesia. You are there. You start counting backwards. You’re gone, and then you’re back. And no time at all seems to have passed. Now, I could have been under for five minutes, five hours or 500 years, it would’ve been the same. And that is what happens when there’s no consciousness. And consciousness is basically the opposite of that.

And what I find fascinating is that, it’s just a simple chemical prop… It’s a mixture of chemicals usually, but there’s some propofol, midazolam, whatever it is. But a simple chemical intervention in the electrochemical machinery of the brain just turns consciousness off. It turns a person into an object. Fortunately, in a reversible way, and the object becomes a person again.

Jim: Cool. Yep, that’s definitely true. I’ve had general anesthesia a couple of times in the last 10 years, and that’s exactly what it is. You wake up and it seems like no time has passed. Could have been nine hours or seven hours or whatever. It’s quite remarkable. For those who are interested in more about anesthesia and consciousness, I had a whole episode on the topic with Emery Brown from MIT, who’s both a cognitive neuroscience guy and a practicing anesthesiologist, and he’s done a lot of work at the intersection of the two. So now we kind of know, in the negative sense about consciousness, if we’re going to talk about consciousness, of course, we have to go down the rabbit hole of, what is consciousness?

Anil: We have to approach this tricky aspect of defining it, which I always shy away from a little bit, firstly because, and I always have to say this, definitions are not preconditions for making progress, at least not consensus fixed definitions, and the history of science tells us that definitions of a phenomenon always change. They always evolve along with our scientific and philosophical understanding. So a definition is useful, I think, for something that’s incompletely understood like consciousness, mainly in order to make sure that people don’t talk past each other and are focusing on the phenomenon in question to some extent. So for me, I like the definition that came up quite some time ago, decades ago, from the philosopher Thomas Nagel.

And he said, “For a conscious organism, there is something it is like to be that organism.” And what he means by that, at least how I interpret him, is that it feels like something to be me. It feels like something to be you, Jim. And it feels like something to be, probably, a monkey, a dog, certainly another person, but it doesn’t feel like anything to be a table or a chair or a laptop computer. So it’s really the presence of any kind of experience whatsoever. This sounds a bit circular, but I still think it’s useful because it puts the focus on experience, rather than things that might be associated with conscious experience like intelligence or language or an explicit sense of self-identity or a particular kind of behavior. Consciousness is just a raw fact that experiencing is going on.

Jim: And then kind of using fancy talk, it’s about the phenomenology, right?

Anil: It is, yeah.

Jim: And then one of the things I was very glad to see you do, because I see so many people get this confused, is make the distinction between consciousness and intelligence.

Anil: Yeah. I mean, this distinction, I think, is super important. It’s fairly obvious, I think, given a moment or two’s reflection. With intelligence, also, people argue about defining it. One fairly generic definition is doing the right thing at the right time, or being able to take actions in the service of goals, something like that. And when you write it down, it’s clearly different from any conception of consciousness, which is the having of experience.

And I think the reason they often get conflated is because of some residual human exceptionalism. We think we are smart and intelligent, and we know we are conscious, and we think consciousness is really special, and we think our intelligence is really special, so we tend to link the two together, and sometimes even use intelligence as a surrogate for consciousness. If something is sufficiently smart, we might be tempted to say it’s also conscious, and we can see this debate happening, or rather this mistake happening, in the current discourse around artificial intelligence, where…

People slip so easily from talking about whether a system like the next generation of chatbot, or a large language model, has some kind of general intelligence, to saying it’s a conscious machine or it has consciousness. There was the whole furore last year with the engineer from Google who got the boot for basically making the claim that the chatbot LaMDA was sentient, on the basis of nothing at all, but just playing on, I think, this kind of human exceptionalism and anthropomorphism, where if something seems to us similar enough in the context of intelligence, then we tend to feel there’s the presence of a conscious mind there. But conceptually, the two things are completely different. You don’t have to be smart to suffer.

Jim: When we had Antonio Damasio on, he went through a number of clinical examples of people with almost no cortex, still conscious, people with inability to lay down memories, still conscious, people who weren’t manifesting much intelligence, still conscious. And so that just seems so clear. But as you say, we get so confused about it. And in fact, this issue of anthropomorphizing, to my mind, is a plague in psychology in general. Amazing number of times, when I’m reading books, I go, true about humans, what about animals? And often, it’s continuous, but people don’t recognize it. And I think it’s very important to keep that lens in place.

Anil: It is, but it’s a fine balance. In some ways, we need to anthropomorphize a little bit in order to generalize out from the human, because when it comes to consciousness, since that’s the focus now, we only really know about human consciousness, because only human beings have the linguistic apparatus to tell us about their conscious experiences. In fact, I only know that I’m conscious, really. Any inference about even other people is a form of generalization, although I’m very confident that you are conscious, Jim, and everybody else. I think that’s just common sense. To question that is a bit ridiculous. But still, formally, it’s a generalization. And then when you go out beyond the human, we tend to take what we know about the biological underpinnings of human consciousness and extend them to other animals. Of course, the further away you get, the harder it becomes.

Jim: Yeah, we’ll get into that later, kind of talking about the other animals, what we think we know or what we guess about them. So let’s go on to the next distinction, and that’s the distinction between the subjective experience, or phenomenological aspects of consciousness, and what we might call the functional and behavioral properties of consciousness.

Anil: Yeah. There are many different aspects to consciousness. So this definition of it, there being something it is like to be a conscious organism, that’s a starting point. And then you can just enumerate the different aspects. And some of these are what we would call phenomenological, experiential, the what-it’s-like to be conscious: the redness of red, the experience of an emotion, the experience of agency, of free will. These are all different kinds of experience.

But then there’s also the functional aspects. Consciousness, at least in ours and other animals, is a biological phenomenon. So the starting point for understanding a biological phenomenon is that it must have some function selected for by evolution. And a lot of work in psychology goes on in trying to identify the functions of consciousness. What is it that we as human beings do better, in virtue of being conscious creatures?

And there are many things here. There are many things. We can integrate large amounts of information. We can behave flexibly. We can project further back and forth in time. All of these things seem to go along with consciousness in humans, at least to some extent. And I think that a lot of work in consciousness research gets a little bit overly distracted by focusing on the functions, because they’re easier to study in some ways. You can put people in the lab and you can measure how they perform on some task. But function is kind of not really the bullseye. The bullseye, for me anyway, is experience. We want to explain the phenomenological properties, first and foremost.

And the functional properties of consciousness might give us some clues about that. Why can we integrate lots of information? Well, maybe that’s because the phenomenology of consciousness is unified and integrated. We open our eyes, we see a visual scene and we hear things and we have feelings, and they’re all part of a single conscious experience, so that has some functional properties. But explaining those functional properties is different from focusing on the phenomenology. And that’s where a science of consciousness, in my view, should focus.

Jim: Which of course is the perfect setup for the next topic, Chalmers’ hard problem and your response to it. So let’s first tell the audience what Chalmers’ hard problem is, and then your little move there.

Anil: Okay. So David Chalmers is a friend of mine and an amazingly profound influence on the field of consciousness research in philosophy and neuroscience for decades. And he basically took the old distinction that’s always been there, at least from Descartes and probably before, between mind and matter, between mental stuff and physical stuff, and updated it for the 20th century, in the context of trying to explain consciousness.

And the hard problem is the problem of explaining how and why conscious experience should arise from or be identical to physical stuff, physical material interactions in the world. Our brains, our bodies are physical systems. They’re complicated physical systems for sure, but we can describe them in physical terms. Why and how should anything happening in the physical world be identifiable or generate or allow the emergence of a conscious experience? That’s the hard problem.

And my response to it is that treating consciousness this way, as one single big scary mystery, might be the wrong approach, and that we might be better off addressing what, with tongue in cheek, I call the real problem of consciousness, which is to just say, okay, consciousness exists. And it seems really mysterious and perhaps intractable to understand how it relates to physical matter, but it’s nonetheless there. So instead of facing the hard problem head on, let’s just identify and describe the different properties of consciousness, as we’ve been discussing, the functional ones, the phenomenological ones, and let’s try and explain each of those individually, in terms of things happening in the brain and the body. And if we do that, then perhaps, little by little, the sense of mystery about the hard problem might fade away, and perhaps, I can’t guarantee this, but perhaps eventually dissolve altogether.

And this happened before. The analogy I used in the book is about life. So, it wasn’t that long ago, 150 years ago, people thought that life could not be explained in terms of physics and chemistry. So again, there was this hard problem of life and people wondered about whether there’s an élan vital, a spark of life, something like that. And of course, nobody found the spark of life because there isn’t one. But that doesn’t mean that life didn’t exist or doesn’t exist, it just meant that they were looking at the problem the wrong way. The right way to look at the problem of life was as a constellation of properties that certain things had, homeostasis, reproduction, metabolism. You explain those, and then the hard problem of life just fades away and we make progress. So that’s how I think a science of consciousness should proceed.

Jim: Yeah. I will say, my intuitions after reading in this area for 25 years, not actually being an active researcher other than doing a little bit of AI, is the same, is that it feels very much like that 1870s in biology, where it seemed like magic, and then we said, “Oh, it’s very, very interesting. We still don’t know how it got started, but it’s biochemistry, people.” And I suspect that, I suspect, but as you say, we don’t know, that at the end of the day, we’ll eventually make the breakthrough and go, the thing wasn’t that hard after all, once you get there. But it’s hard to get there. It’s the inverse problem problem, right?

Anil: Right. And there’s also this super interesting idea of the meta-problem of consciousness, which David Chalmers recently talked about. And the meta-problem of consciousness is the problem of explaining why people think there’s a hard problem of consciousness, which is actually a more tractable problem. What is it about it? And there are many different reasons why we might think it’s hard. Perhaps it just is. But perhaps also it’s because we’re trying to explain something that is fundamentally us. We’re trying to explain something that we instantiate, and that may lead us to set a higher bar for an explanation in some sort of weirdly subjectively biased way. But yeah, I think this incremental approach is fine.

And you’re right, there’s no guarantee we’ll get there. But what frustrates me is when people put premature limits on what a materialistic explanation could eventually do, saying that we could never account for consciousness. And I’ll just say, hold on a minute, let’s give it time. We’re not quite just at the beginning, but it is early days. And if you look at any example in the history of science, things that seemed mysterious at one point in time, with the tools and the concepts that were available at that time, become less mysterious with new tools, new concepts, new data, new methods. And that may well apply to consciousness too. Certainly, we don’t want to pronounce that it won’t in advance of trying.

Jim: Yeah. I like your focus on more than one thing and that it’s a process. One of the philosophers of consciousness whose work I like very much is John Searle. And well beyond just his Chinese room, we could argue about the validity or non-validity of that particular argument, there’s his concept that consciousness is a biological process. You can’t put your finger on it. It’s not one thing. And he uses the example that it’s like digestion. Our digestion includes our teeth, our tongue, our esophagus, our stomach, our liver, on and on. And he’ll say consciousness is, in some sense, analogous to digestion. And then the Rutt corollary to that is, yeah, the final output’s often very similar, [inaudible 00:17:41]. But yeah, I find that to be a really strong grounding for feeling one’s way towards what consciousness actually is, and it’s not just some one thing you can put your finger on.

Anil: That’s right. But I think all these analogies can be useful, but they’re also a bit dangerous. And we need to be cautious. Consciousness is not the same thing as life. It won’t follow the same intellectual trajectory. It’s also not the same thing as digestion. Philosophers will always point to the difference that, digestion, you can explain functionally and you could build a robot that digests using different kind of substrate or something. But there’s still this apparently distinctive thing about consciousness, which is the existence of phenomenology.

And even some philosophers, one of my great mentors, Daniel Dennett, will challenge that assumption and say, actually, we’re mistaken about the idea that there’s something special about consciousness, that we impute the existence of something, these so-called qualia, these phenomenological things that don’t really exist. It’s a brain’s way of making sense of other things that are going on. So I think it’s worth holding all these possibilities in mind, but still moving forward with the relatively common sense view that, yes, we have conscious experiences and we know they’re intimately dependent on the brain and the body, and let’s try to figure out, in the best way we can, how the one explains the other.

Jim: Great. Well, let’s move on to our next topic, which is measuring consciousness. And before we get into measuring consciousness, maybe talk a little bit about the distinction between wakefulness and levels of consciousness. This is another thing that it’s easy to get confused around.

Anil: It is, and I should say that this may sound like a litany of conceptual distinctions, but there’s more to it than this, I think. But it is important to avoid these confusions. Just as anesthesia is not the same as sleep, consciousness in general is not the same as wakefulness. They often go together, but we know they’re different because you can separate them in both directions. You can do this thing that in psychology we call a double dissociation. So of course, you can be conscious while you’re asleep. We all know this because we all dream. Not all animals dream, but we do.

And we can also be on the other side. And this is less common, and this only happens in pathological conditions, in illnesses and brain damage and so on. But in the so-called vegetative state or wakeful unaware state, people go through sleep wake cycles, but they are not conscious. There is nothing going on. It feels like nothing to be them. And so the mechanisms that underpin wakefulness are not directly translatable into those that underpin consciousness. It’s something else. So if we want to come up with a measure of how conscious somebody or something is, we need to track the presence of phenomenology, not just their level of wakefulness or physiological arousal.

Jim: And you’ve come up with an approach to do that.

Anil: Well, there’s a number of approaches. I mean, I’ve collaborated in my group with some wonderful colleagues in the States and in Italy, Marcello Massimini prominently, and Adam Barrett in my own lab, trying to come up with measures that indeed do this. And there’s a whole bunch, and most of them are based on this idea of complexity, which I know, Jim, you being at the Santa Fe Institute, is something you know a ton about, I’m sure way more than I do. It’s, again, a tricky concept, how you define complexity. But I think for present purposes, it’s definable as just this middle ground between total disorder, total randomness, and total order or total predictability, some balance of order and disorder. And of course, there are many ways that we can develop mathematical tools that measure this balance given some data.

And that’s what we have been trying to do, and others have been trying to do: to come up with some quantitative metric that we can apply, let’s say, to EEG, which is the electrical activity we can record from the surface of the brain, and translate it into a figure which might track conscious level. The one measure that actually seems to be among the most robust mainly focuses on the randomness rather than the order. So it really tracks how diverse brain activity patterns are. And the more diverse they are, the more conscious somebody is, very broadly speaking. We use a measure called Lempel-Ziv complexity for this, which was first developed in the signal processing literature to compress image files and other data files into their smallest possible description. And it turns out, the length of the smallest possible description of some brain data is a reasonable proxy for how conscious somebody is.

Jim: Though of course, and we’ll talk about complexity measures, we have to know that that’s only going to work in some middle ground, because the limit case is static. Total randomness produces the least compressible files, and yet we know that a completely randomly firing brain is not a conscious brain.

Anil: That’s exactly right. So this is why this measure is certainly not ideal. It’s not like a measure of temperature, which, you know, translates directly into a physical quantity of heat. So you’re absolutely right. This measure would become very unreliable if the brain tipped over into this sort of totally random state.

There are other measures that people use, like integrated information, causal density, the perturbation complexity index, which track the middle ground a little bit more directly, and which, I think, even though they’re harder to apply, are more theoretically principled. And the challenge here is that the measures that are the most theoretically appealing tend to be the ones that are empirically the least robust, because they make all sorts of assumptions about the data that real-world data doesn’t fit. So we end up, in practice, using the less theoretically principled measures because they are more empirically applicable. Which points to a challenge, and it’s one place where consciousness research is going: how do we develop measures which are simultaneously theoretically appealing and work in practice?
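[Editor’s note: as a rough illustration of the compressibility idea behind the diversity measures Anil describes, here is a minimal sketch. It is my construction, not the LZ76 algorithm used in the actual EEG research; it uses Python’s zlib, a Lempel-Ziv-family compressor, as a stand-in.]

```python
# Illustrative sketch: signal "diversity" as a compression ratio.
# Assumption: zlib stands in for the LZ76 measure used in the literature.
import random
import zlib


def diversity(signal, threshold=0.0):
    """Binarize a signal around a threshold, then return its compression
    ratio (compressed length / original length). Higher means less
    compressible, i.e. more diverse activity patterns."""
    bits = bytes(1 if x > threshold else 0 for x in signal)
    return len(zlib.compress(bits)) / len(bits)


random.seed(0)
n = 10_000
ordered = [0.0] * n                                    # totally predictable
noisy = [random.uniform(-1.0, 1.0) for _ in range(n)]  # totally random

print(f"ordered signal: {diversity(ordered):.3f}")  # compresses very well
print(f"random signal:  {diversity(noisy):.3f}")    # compresses poorly
```

As Jim pointed out above, pure randomness maximizes a score like this, which is exactly why such measures are only trustworthy away from the fully random limit.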

Jim: Now, you gave an example, I had never heard of this, it’s really very clever, called zap and zip. I want you to tell us how that works as a very pragmatic way to measure levels of consciousness.

Anil: Zap and zip. This was developed by my colleagues, Marcello Massimini and Giulio Tononi in Wisconsin. And it’s very elegant, I think. What it does is, it zaps the brain with a very short, sharp pulse of energy, using a technique called transcranial magnetic stimulation. This basically involves putting a very powerful magnet by the brain, turning it on very briefly, and this focuses some energy into the brain, which causes neurons to fire. We have one in our lab too. It sounds a bit scary, but it’s actually totally fine.

And what you can do is you can zap the brain this way, and then you listen to the response. With EEG, you listen to the electrical echo of this stimulation, not just where the stimulation happened, but all over the brain. And then what you do is you zip that echo. So the zip is compressing the echo. So instead of applying these measures like Lempel-Ziv complexity directly to the brain data, instead of compressing the brain data directly, you are compressing the response to the perturbation. So it does capture a bit of both randomness and order, because the pulse has to spread throughout the brain, and it has to be compressible to a certain extent. So the measure that you come up with is called the perturbation complexity index. It’s basically an index of the complexity of this perturbation.

And what Marcello and Giulio have shown is that the number that you get from this, at least within individuals and, in some cases, across different individuals, gives you a good measure of how conscious that person is. And what was remarkable for me, this is work now 10 years old, they showed, when you applied this to people who’d suffered severe brain injury and were in a coma or a vegetative state, firstly, the measure tracked what state they were in, but it also had predictive value about their recovery. So people who scored higher were more likely to get better. And that I found-

Jim: Yeah, interesting.

Anil: That shows to me that consciousness research, it’s not just sort of armchair theorizing about big existential questions. There’s real practical importance here. This is a great demonstration of that.

Jim: Yeah, it’s also an interesting probe on the question of whether consciousness is something you switch on and switch off, or whether there’s a continuous gradation or semi-continuous gradation, et cetera. What have you learned from these kinds of experiments? How do you even think about that question?

Anil: Yeah, it’s difficult. There’s a temptation to come down on one side or the other, which I always tend to resist a little bit. I tend to like fence-sitting a bit on these things, because what’s the assumption underlying it? I think one of the assumptions underlying that question is that there is indeed a single dimension to consciousness. And I think this can be questioned. I think the measures like the perturbation complexity index are useful up to a point, but they probably have limits, because unlike something like heat, which really does reduce to a single dimension of measurement for a system (the temperature of a system is a scalar variable that relates directly to the mean kinetic energy of the molecules within it), it’s very unlikely that you can really, satisfactorily, approximate consciousness with a single number, nice as that would be. You can’t measure life with a single number, after all. There are plenty of things in biology which just don’t break down that way.

So consciousness is very likely multidimensional. There are probably things that change, that you can measure in many different ways, that collectively describe consciousness, and some might come and go, and some might fall below threshold while others don’t. So I prefer to think that consciousness is multidimensional and that there might be some thresholds below which a particular aspect of consciousness goes away entirely. And if you get below threshold on all dimensions, it all goes away, like in general anesthesia. But then above those thresholds, you can have gradations, you can have smooth changes in this multidimensional space. Characterizing the nature of that multidimensional space is another of the main challenges.

Jim: As we know, if we have 11 liters of English London Bitter, then our consciousness gets degraded in some sense, right?

Anil: Yeah, I mean, there’s other things. So another colleague of mine, Dan Bor in Cambridge, he’s starting to think about conditions like dementia. We typically think of conditions like dementia as memory-related problems or cognitive diseases, but one can also wonder whether the character of consciousness is changing in these conditions too. I mean, intuitively, we know that’s true. Anybody who’s had a relative with dementia knows that it’s very likely the way they encounter the world and their self through consciousness is altered. So can we use some of these measures that have already proved their worth in the clinic for things like vegetative state and coma? Can they provide any traction on how consciousness is changing in other conditions like neurodegenerative disorders, or indeed in people who’ve drunk 11 pints of Harvey’s best?

Jim: Now, are there any returns on the dementia questions? I will say it’s a personal interest. Both my mother and my wife’s mother went through dementia, two different kinds, one Alzheimer’s and one micro-stroke induced. So unfortunately, I’m well aware of those trajectories. Is there any early data yet on that?

Anil: Not sure actually. I think there might be a little bit, but I think what’s becoming clear is that the measures that we have are probably not finessed enough to pick up fine grain differences that you might see in dementia. I mean, we spent a lot of time looking at other conditions for changes in these measures, and the only other one where we found differences of the size that we see in sleep or anesthesia was a psychedelic state. And when- [inaudible 00:31:33]

Jim: Yeah, I was going to ask you about that. Let’s do that one. Let’s talk about psychedelics.

Anil: So that was fascinating because psychedelics is, people talk about it as a very different global state of consciousness, and it certainly does feel that way. So when we applied these measures of conscious level to psychedelics, the results were surprisingly clear that they went in the opposite direction to what happens during sleep and anesthesia. So the brain got more random, if you like, more diverse. And this, to your point earlier, this does not mean that it’s more conscious in any sort of straightforward linear sense, because we are measuring just the randomness, the diversity of the brain signal, the compressibility of the brain signals. But it does relate, I think, to the phenomenology of psychedelics. There is something about the psychedelic experience where the structure of the experience is less ordered. There’s more mixing between the senses. There’s just something a bit more freewheelingly disorganized about a psychedelic experience compared to normal perception.

And so perhaps there’s no surprise that measures of signal diversity or complexity go up in the psychedelic state. And when we published this result, which was a collaboration with Imperial College in London a few years ago, of course the newspapers took it the wrong way, and they said, “Oh, scientists find evidence that psychedelics is a higher state of consciousness.” And it’s like, “No, that’s really not what we’re saying. That’s confusing some sort of very informal description of psychedelics with what our measures are actually doing.” But that was going to happen. We knew that was going to happen. Fair enough.

Jim: Yeah. Got it. That’s a very interesting result. Just curious, is there any result on, anybody done any work on deep meditation, deep meditators with respect, any of these EEG based measures of consciousness?

Anil: I believe that’s going on too. I think there’s all sorts of altered states that people are now looking at with these kinds of tools. So yeah, deep meditation, trance states of different sorts. And I think there’s not yet a clear picture emerging. It does seem that you need a pretty dramatic change in consciousness in order to see sensitivity with the kinds of measures that we’re using, but I’m sure there will be some traction.

The question is what explanatory power does this give you? If you just show a change in complexity, does that help you understand anything distinctive about, let’s say, the meditative state compared to the psychedelic state compared to the normal waking state, which is again, why it’s probably best to consider consciousness as multidimensional. I mean, there are some aspects of, let’s say, psychedelics that seem perhaps more prominent than normal waking consciousness, perhaps sense of connectedness, vividness of perceptual experience. And then some might seem less like the sense of self and sense of volition. So I think we need a more rounded picture of the different ways consciousness can vary and how to measure each of those dimensions separately.

Jim: Makes a lot of sense. Well, let’s move on to the next section where Tononian friends do think they have one number to rule them all. Integrated information theory, whenever we get into the science of consciousness, we always end up talking about this and people end up having different opinions. Christof Koch, for instance, defended it to the last dot and tittle, right? Other people have different views. So why don’t we remind our audience what integrated information theory says, and at least qualitatively what phi is all about.

Anil: So integrated information theory, or IIT to give it its abbreviation, is the brainchild of Giulio Tononi. And now it’s gone through many iterations. In fact, the fourth version, IIT 4.0, has just been published, which I confess I have yet to read. It’s quite a long paper. It looks not completely different from previous iterations, but perhaps mathematically more elegant. So I can’t comment much on that. The basic idea of integrated information theory I think is very appealing. And it is, again, to write down what conscious experiences are like in terms of their phenomenology, and then to figure out what properties a system must have in order to give rise to those properties. And the two properties that IIT focuses on, there are more, but let’s just focus on two, are integration and information. So Tononi notes that every conscious experience that we have had, could ever have, is integrated. It’s a unified experience. We don’t have multiple conscious experiences going on independently, at least not that we know of. So there’s this integration. Now this can be challenged. Some philosophers challenge this, but let’s just go with it.

And then the second property is information. Every conscious experience is different from every other conscious experience you’ve ever had or ever will have. You will never have this particular conjunction of things happening in your mental life ever again. And that’s not true of everything. That’s not a trivial thing to say. Take a die: throw a die, you get one number out of six. That’s a certain amount of information. You rule out five alternatives, but a particular conscious experience rules out an enormously larger number of alternative possible conscious experiences. So I found this insight really appealing. And in fact, it goes way back. It goes back to the late nineties, and it was reading an early version of this in 1998 that led me to take a postdoc in San Diego to work on ideas like this.
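The die-throw point is just the standard logarithmic measure of information: selecting one outcome rules out the alternatives. A quick sketch (the large repertoire size is a made-up illustrative number, not anything measured):

```python
import math

def bits_ruled_out(n_alternatives):
    """Information in selecting one outcome from n equally likely
    alternatives: log2(n) bits, i.e. the number of yes/no questions
    the outcome settles."""
    return math.log2(n_alternatives)

print(bits_ruled_out(6))        # a die throw: about 2.58 bits
print(bits_ruled_out(2 ** 30))  # a repertoire of a billion states: 30.0 bits
```

The point being that a repertoire of possible experiences vastly larger than six faces makes any one experience correspondingly informative.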

But by the time I got there, Tononi had already left. So that was the end of… Well, it wasn’t actually, it meant that I developed the ideas in a separate way, and that will become important in a second. Because the other thing about IIT is, he then develops a mathematical measure called phi, which captures this balance of integration and differentiation. And it’s related to these measures we were talking about before. This measure of zap and zip actually is a sort of more practical measure that flows from phi, it flows from this idea, but it’s not the same thing. It’s not a direct approximation of the thing. So phi is a measure that supposedly captures integration and information, but there’s something very subtle in the way that Tononi does this in IIT, and it’s very important, which is that information for Tononi in IIT is information for the system itself. It’s not information for us as external observers of the system.

And this makes all the difference when you try and write down the math for the measures. Because to measure phi in the Tononi sense, where it’s information for the system, you can’t just look at what the system does over time and calculate its probability distributions and so on. You have to know everything the system could possibly do, even if it’s never done it. If you think about a system as something with loads of buttons and knobs on, you have to twiddle all the knobs and press all the buttons in every possible combination to see what happens. This is of course just not feasible for any real system. And this is a problem if you want to measure it. The reason it has to be that way for IIT is because IIT makes a very, very strong claim. It says that phi is consciousness. It’s proposing an identity relationship. It’s really trying to solve the hard problem. It’s saying this is consciousness. Consciousness in IIT really is, I think formally, maximally irreducible maxima of integrated information. And that is consciousness.

Now this is a very ambitious claim and it’s very interesting, and there are fascinating ways one might put it to the test. Given that you can’t actually measure phi, you can do other things. So one of the things, and I’m involved in a collaboration with people including Giulio trying to do this: one of the predictions it makes is that you can change the structure of a neural system and leave the firing pattern, the activity pattern, exactly the same. But if you change the underlying structure, your conscious experience should change. Intriguingly, this is true even if the neurons aren’t firing at all. So you could have neurons that don’t fire, and that goes along with a particular conscious experience, and then you could make it so those neurons can’t fire. And that should change your conscious experience. And that’s weird, right?

Jim: That is weird.

Anil: Why should it make a difference whether a neuron could fire if it’s not firing? But on IIT that makes a difference. So I quite like this theory because it makes counterintuitive predictions, but I think there are other ways to express it. And with colleagues of mine in Sussex and London, mainly Adam Barrett, Pedro Mediano, Fernando Rosas, and others, we distinguish between strong IIT, which is the Tononi version of it, and what we call weak IIT, which takes the same intuition, that consciousness is both integrated and informative, but instead of trying to come up with a measure that’s identical to that, we say, well, that’s in the eye of the observer, and these are properties that consciousness seems to have. And if we can identify from an observer’s perspective those same properties in neurodynamics, then good. We’ve got sort of explanatory links between the two. The benefit of doing this is that we can develop measures that are like phi, at least in some respects, but that are practically applicable to real data. The disadvantage is- [inaudible 00:42:25]

Jim: And I recall. Yeah.

Anil: Yeah. I mean, it’s no longer a strong claim to say that this is consciousness.

Jim: As I recall how you calculated phi, it had to do with calculating every possible subdivision of the connection space and calculating phi for every one of those, and then picking the largest one. And I never could quite get my head around why the largest one would be qualitatively different from one that was less by, let’s say, a tenth of a percent. That was never obvious to me, why there was that qualitative difference. But anyway, that’s going down a rabbit hole we probably shouldn’t go down.

Anil: Yeah, we’ve been down that rabbit hole too, so we can go down there if you like. But it is definitely a rabbit hole. It is one of the issues that for our approach, we don’t have that issue. We don’t have to make it depend so much on a single partition of the system.
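One way to see why that rabbit hole is deep: even the simplest version of the partition search, restricted to bipartitions, grows exponentially with system size. A minimal sketch (the full IIT procedure searches a still larger space of partitions and perturbations, so this undercounts the real cost):

```python
def n_bipartitions(n):
    """Ways to cut a system of n elements into two non-empty parts:
    2**(n-1) - 1. Exact phi requires evaluating a measure across
    partitions like these, which is why it is infeasible to compute
    for any realistically sized system."""
    return 2 ** (n - 1) - 1

# The count explodes long before anything brain-sized.
for n in (4, 10, 20, 50):
    print(n, n_bipartitions(n))
```

At fifty elements there are already over five hundred trillion bipartitions; a brain region has many orders of magnitude more units than that.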

Jim: Well, we’ll do that offline sometime. I don’t think the rest of our audience will be interested in that particular rabbit hole, even though I am. Now, of course, strong IIT, I like the distinction, produces some kinds of statements like the famous photodiode connected to the light bulb that turns it on and off being conscious, according to Tononi, at the level of one bit, which isn’t much, but other systems that you wouldn’t normally think of as conscious, if you can calculate a phi greater than zero, are conscious. So this is the so-called panpsychic aspect of IIT. What can you tell us about that?

Anil: Yeah, it is one of the implications of strong IIT, for sure, that wherever there is phi, wherever there are non-zero maxima of integrated information, there is consciousness. And this means that it’s not a necessarily biological phenomenon only; even a single photodiode, as you just said, will have a modicum of consciousness. And other things might not be conscious at all. Like a feed-forward neural network that might implement a very, very complicated function would not be conscious on IIT, because a purely feed-forward system always has zero phi. So it’s a sort of restrained panpsychism; it’s not that consciousness is absolutely everywhere. It’s here and there, but not everywhere. And with the idea that the photodiode has one bit of consciousness, it’s a mistake to think of that as just a reduced version of how conscious you and I are; it’s going to be totally different. But still, it’s an aspect of the theory that I think is, I find a bit… Personally I find it unappealing, and that’s really not a judgment that speaks to whether the theory’s valid; what matters for the theory is whether it makes testable predictions or not.

I personally don’t find panpsychism that appealing, in either its restrained or its more aggressive forms, because to build consciousness into the universe from the start, I think, could solve the problem by fiat rather than by actually solving it. But IIT is not guilty of that particular sin; it’s panpsychism as a consequence rather than the pre [inaudible 00:45:46] position. And so I think that’s a good thing. But yeah, if you’re making a claim like Tononi does for IIT and proposing an identity between a particular thing happening and consciousness, then you have to bite the bullet and say wherever that thing happens, there consciousness is.

Jim: Now, Scott Aaronson did some refutations of earlier forms of IIT. I don’t know if those hold up against the later forms. He developed some mathematical formalisms that would generate high phi but that everybody would agree are not conscious. And I don’t know where that controversy ended up.

Anil: Well, that arms race goes on. And the thing is, it’s not true that everybody would agree, because I think, again, some bullet biting happens. Scott came up with these things called expander graphs, which are just these sort of simple structures which, if you just make them big enough, you can get arbitrarily large values of phi. And if you bite the bullet, well then you have to say, well, yeah, I mean, we can’t imagine what it’s like to be an expander graph the size of the solar system, but according to IIT, there would be quite a lot of consciousness going on. Again, way back in the mid two-thousands when the first IIT came out, with the mathematician Eugene Izhikevich, while I was still in San Diego, we also tried to show some sort of refutations basically on this basis, that on early versions of phi, you could show that even two coupled neurons might have an infinite amount of consciousness. But there have been responses to that from within IIT, to try to either show that these particular systems actually don’t generate large amounts of phi, or you bite the bullet. And I think it’s kind of a useful dialogue. It really is.

Jim: And then of course, go all the way to Max Tegmark and his quantum IIT, where he does go to the extreme and say, “Ontologically, consciousness is built into the universe and is a fundamental state of matter.”

Anil: Yeah, I mean that’s sort of full-bore panpsychism.

Jim: The Full Monty. Yeah.

Anil: Yeah. No, yeah. And I find myself always a bit suspicious, but also very intrigued, when people talk about quantum mechanics and consciousness in the same breath. There’s been, in my mind, a history of mistakes and false leads in relating these two fields. And of course at some level they’re related. I mean, quantum mechanics is still our best understanding of the physical universe at its most fine-grained level. But is there anything specifically about quantum mechanics that we need to appeal to in order to explain consciousness? Now that, for me, is still very much up for grabs and has not yet been demonstrated. Most famously, I think, Roger Penrose and Stuart Hameroff have their theories about consciousness depending on quantum effects in microtubules, which are these tiny structures within cells. I don’t find that remotely compelling, but I do think there’s something potentially interesting at the level of the foundations of quantum mechanics, where we might start to… I mean, of course, here this is now way out of my wheelhouse, but I think it’s true to say that there’s no accepted interpretation of quantum mechanics.

Jim: No, that is an area I do read in a lot, the quantum foundations thing, and I always warn people who try to anchor their theories in one of the quantum foundations: there’s at least 12 that are still live, and some of them are not even stochastic, right? Because people will tell you, “Oh, the universe is fundamentally stochastic.” Well, actually there’s still three unrefuted quantum foundations that are deterministic, people. So now we do seem to think we have probably ruled out any universes that are fully local. But anyway, that’s another rabbit hole.

Anil: Right. That’s just another conversation. But no, just to close that loop very briefly. I mean, there are some interpretations which propose that at the fundamental level, everything is relational, everything is an interaction, sort of relational quantum mechanics and so on. I find that quite interesting because if that’s the interpretation that you go with, then it’s easier to give an ontological status to information. And if you do that, then theories like strong IIT make more sense. But-

Jim: I’m going to get Max on and talk about that.

Anil: There’s a big “but beware” about this, because these are, as you say, one interpretation among many, and there’s no consensus about any of them.

Jim: And the amazing thing is there is no experiment or no data that can distinguish them. Here it’s been a hundred years and we have 12 radically different metaphysics, and we can’t pick one from the other, which that itself tells us something about reality. I’m not sure what.

Anil: With the exception, as you just said, of locality, which was brilliant. I mean, I thought the story of that is just the most beautiful story in the history of 20th century science. I mean, along with DNA, I think, for me, how that was demonstrated through Bell inequalities and the experiments that followed was just remarkable. Love it.

Jim: Yeah. It’s one of the great, well, one of the great, because I would put general relativity up there as well.

Anil: Yeah. Okay. Fair enough.

Jim: Let’s move on to where I really learned some new things, and that is your thinking about how we do perception and what they really are once it’s inside of our consciousness. So why don’t you take us through your views on what perception actually is, and I’ll wait for you to give your label for it. I could give it to you, but you could lay it out.

Anil: Okay. Yeah. This is a bit of switching of the gears, and this is because in the book, I take the strategy of what are the most general properties of consciousness and divide them into the level which we’ve been talking about, content, which we’re going to talk about, and then self, which we’ll probably also talk about in a minute. So moving on to content, the idea here is that when you are conscious, not under general anesthesia, you’re conscious of something, there are conscious contents, the things that populate your experience at any one time. How do we account for conscious contents in terms of brain mechanisms? And here, and this is really what drives the majority of the work that I do in the lab with my colleagues day in day out, is the idea that the brain is a kind of prediction machine, and that what we perceive is not a readout of the world, the objective world that’s out there from the outside in, but it’s the content of the brain’s best predictions about the way the world must be in order for the sensory information it gets to make sense.

And I think this is a simple idea, but it’s also quite a challenging idea to really get one’s head around. And the reason is because of the way our phenomenology appears. Now, you open your eyes in the morning and the world just seems to pour itself directly into your mind through your senses. It seems that there’s this external objective reality with colors and shapes and things moving and sounds and whatever, and we just passively receive it through our senses. And there it is in the mind. And even if you open a sort of an old-ish, I guess, I haven’t looked at a teaching textbook in psychology for a while, but certainly a classic textbook on visual neuroscience would also describe perception largely as a bottom up process or an outside in process where information comes in through the eyes and the ears and the other senses, and as it progresses deeper into the brain, more complex features are picked out, and we experience a sort of composition of all these features.

The alternative view, and it’s a very old view, goes right back to Plato and Kant and so on, and Hermann von Helmholtz in the 19th century in psychology, is that the brain is always making inferences about what’s out there in the world. It has no direct access to objective reality. It only gets indirect, ambiguous, noisy sensory information. So it has to make a best guess, an inference, a probabilistic inference, about the way the world is. And to do that, it needs to combine sensory information with its prior knowledge or expectations about the way the world is, and do something that is basically equivalent to Bayesian inference, where you combine prior expectations with new data to form a best guess about what’s going on.
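For Gaussian beliefs, the "combine prior expectations with new data" step has a closed form: a precision-weighted average. A minimal sketch, with made-up numbers:

```python
def bayes_best_guess(prior_mean, prior_var, obs, obs_var):
    """Posterior over a Gaussian quantity: a precision-weighted average of
    prior expectation and new sensory data. Whichever source is more
    reliable (lower variance, higher precision) pulls the guess harder."""
    prior_prec, obs_prec = 1.0 / prior_var, 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Equally reliable prior and data: the guess lands halfway.
print(bayes_best_guess(0.0, 1.0, 10.0, 1.0))  # (5.0, 0.5)
# Much sharper data: the guess is pulled most of the way to the observation.
print(bayes_best_guess(0.0, 1.0, 10.0, 0.1))
```

Note also that the posterior variance is always smaller than either input variance: combining the two sources makes the guess more confident than either alone.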

And that’s the basic idea of the Bayesian brain, and applied to perception we get to the idea that the content of what we perceive is the joint content of the brain’s top-down predictions, inside-out predictions, about the causes of the sensory signals that it gets, and that the sensory signals, instead of just being directly read out by some sort of inner homunculus, inner self, the sensory signals are just prediction errors, which report the difference between what the brain expects and what it gets at different levels of processing. And the process of perception just involves continually updating predictions or making actions to minimize prediction error everywhere and all the time. So the metaphor that I end up with, and I didn’t make this up, it came from Chris Frith, and the history goes back, is that perception, perceptual content is a kind of control-

Anil: It goes back, is that perception, perceptual content is a kind of controlled hallucination. It’s an active generation that is controlled by, reined in by, sensory signals coming from the world. So the control is just as important as the hallucination here. This is often overlooked. I like this metaphor because it emphasizes that all our experiences, fundamentally, come from within. When we think of hallucination, we think of an internally-generated perceptual experience, but that normal perception is controlled by the causes of the sensory signals in ways that evolution has shaped to make it useful for us as organisms. Color isn’t out there in the world objectively. It’s a construction of the brain that’s useful for us. It’s part of our brain-based best guess about the way the world is that helps us as organisms navigate, do the right thing at the right time. That’s the idea of perception as a controlled hallucination.

It’s so interesting to me because it just seems like the world is out there and we perceive it, but we have to make this flip and always think, “Actually no, there is a real world out there but the way it appears in my experience is always a construction, always a brain-based neuronal fantasy.”

Jim: Occasionally, in life, you feel these experiences. I can give the example, there’s a black spot out far in the distance and it’s coming towards you and you first say, “Well, it’s most likely a dog, right? It’s the most common black thing I know. My neighbor’s got a black lab,” and then it gets a little closer and you go, “Well, I’m not quite sure what the hell it is. It doesn’t seem like exactly a dog.” Then suddenly you realize, “It’s a bear. Holy shit!”

As a hunter, I’m out often in the woods, using your senses at their limit and you’re looking at ambiguous signals, and sometimes they flip in ways that are surprising, and that sudden thud of a flip is quite interesting. I think that’s a reasonable confirmation of your hypothesis that we’re trying our best to make sense of this and predict what we’ll see next, but sometimes we’re wrong. We actually are hallucinating in the other sense. We think, “Probably a dog. Whoops. Oh, actually a bear. Holy shit!” Right?

Anil: Yeah, you’re right. In the context that you’re talking about, about hunting and using your senses to their maximum, what that might mean is that you’re really paying attention. That’s what it feels like, doesn’t it? You’re paying attention to something.

In this idea of the brain as a prediction machine and the underlying theory of predictive coding or predictive processing, what paying attention means is you are turning up the signal-to-noise ratio on the sensors. More formally, you are increasing the expected precision or reliability of some sensory signal, so that, yeah, it then can cause a previously-settled best guess to flip quite suddenly, because you’re giving extra weight to the sensory input. If you didn’t do that, if you weren’t paying attention, you might still think it’s a black lab rather than a bear until it started eating you.
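One crude way to sketch the dog-to-bear flip is Bayesian updating over two hypotheses, with attention modeled as an exponent that boosts the weight of the sensory evidence. The precision-as-exponent trick and all the numbers here are illustrative assumptions, not part of any fitted model:

```python
def posterior_bear(prior_bear, likelihood_ratio, precision=1.0):
    """Posterior probability of 'bear' versus 'dog' after one look.
    Attention is modeled, crudely, as an exponent on the likelihood ratio:
    higher expected precision gives the sensory evidence more weight."""
    prior_odds = prior_bear / (1.0 - prior_bear)
    post_odds = prior_odds * likelihood_ratio ** precision
    return post_odds / (1.0 + post_odds)

# Distant black blob, half-attended: the strong 'dog' prior wins.
print(posterior_bear(0.05, 2.0, precision=0.5))
# Closer look, fully attended, much stronger evidence: the percept flips.
print(posterior_bear(0.05, 50.0, precision=2.0))
```

The same prior survives weak, down-weighted evidence but collapses suddenly once the up-weighted evidence overwhelms it, which is the "thud of a flip" in the story.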

Jim: Yeah, that would not be good. That is the interesting thing about hunting is that you’re using your senses. If you’re an experienced hunter, your senses are near their limit, and you can even see this, for instance, when you’re looking through binoculars, your hearing goes down. You can actually feel it when you’re hunting. I’ve talked to perceptual psychologists and they say, “Yeah, that’s a known fact. Most people wouldn’t notice it, but because you’re a hunter, you’re both listening and looking at the same time and there’s some interference going on there.” Of course, there’s also the famous optical illusions like the chess board with the shadow on it. Maybe you could talk about that and/or the dress example that you give.

Anil: That’s right. There’s so many lovely everyday examples, well, maybe not quite every day, that demonstrate, at the very least, that the brain’s expectations strongly shape what we experience.

The checkerboard thing, that’s an illusion you can easily find online if you just Google Adelson’s checkerboard. It’s this classic illusion where there’s a checkerboard, like a chessboard or something, and there’s a cylinder on it and it’s casting a shadow. If you look at the checkerboard, it seems like two particular squares on the checkerboard are very different shades of gray. But if you extract those two squares and compare them, they’re exactly the same shade of gray. Their luminance is identical. The reason we see them as different shades of gray is because the brain has built into its circuits the knowledge, that you’re not aware of having, that objects under shadow appear darker than they are. There’s also the knowledge that checkerboard patterns have this regular alternation of light and dark, and if you combine those two prior beliefs about the way the world is, then the best perceptual guess about what’s going on is that there’s a checkerboard there with the alternating light and dark patterns, and when you take the context away, you realize that, “Oh no, these things are exactly the same color.”

Now, in a more everyday situation, if you have a piece of white paper and you’re inside on a normal afternoon and you take that white paper outside, it still looks the same color. It still looks white, but the light coming into your eye has totally changed, and the brain is compensating for the ambient light context because the perception of color is useful for the brain, precisely because it’s not a direct light meter. It’s making an inference about the reflective properties of different surfaces. So that’s useful. So that’s what we perceive.

Then there’s the dress, which I’m sure the listeners probably remember. It’s quite a few years ago now, this photo of a dress that half the world saw as white and gold and the other half saw as blue and black, and people couldn’t agree. They just couldn’t. They found it very difficult to accept there was a different way of experiencing this photo. What this revealed is that there are individual differences in how the brain applies its prior expectations to sensory data. There are two things I think are fascinating about this. One is how difficult it was for people to understand another way of seeing was possible, because we take what we see to be the real world. That’s part of the challenge in all of this story about perception. The other aspect is that, hold on a minute, this difference in perception probably isn’t just for the dress, it’s probably everywhere, all the time. We might not notice it most of the time because we use the same words.

If you and I are hunting together, which would probably never happen because I don’t go hunting, you are going to have a different experience to me. As you just described, you’re focusing, you’re not hearing stuff, and maybe the color of the sky looks slightly different. There’s a huge amount of hidden inner diversity among all of us, and this is something that I’m very interested in measuring right now, because we’re very aware these days of externally visible diversity, differences in skin color and height and body shape and so on, and how diversity in these kinds of things is enriching for society, but we know very little about inner diversity, even though it might be equally important.

So we have a study called the Perception Census, and it’s been going for a few months and it’s a set of easy, engaging, interactive little tasks and illusions that measure individual differences in many different dimensions of perception, color, time, vision, music, sound, emotion. We’re trying to map out, for the first time, this hidden landscape of perceptual differences in many aspects at the same time. An experiment like this is successful only when lots of people take part. So I’d encourage people to take part. Apparently, it’s fun. We’ve had lots of reports that people enjoy doing it and they learn a lot about their own perception too. You can do bits and come back. You don’t have to do it all at once.

So yeah, it’s called the Perception Census. We’ve had about 23,000 people take part already, which is a lot from a hundred countries, but we want to make this a real landmark study of perception, so if you can help us, that would be amazing.

Jim: Yep. It’s at As always, the online things we reference on this show will be on the episode page at So check it out. I’m going to do it, see what it’s all about, and you can be part of the advancement of understanding of our minds. What a cool thing. So before we move on from this section, the organizing principle that you propose is prediction error minimization. Why don’t you tell us what that is?

Anil: It’s the overall idea of how the brain makes these best guesses. Just imagine being a brain again. You are there, inside your skull. You’re just getting bombarded by electrical signals that don’t come with labels on. They’re just electrical signals which are related to the world in some way, and you’ve got to make sense of them. This is really setting up the problem of inference. So the brain has to make its best guess about what caused those sensory signals.

This is the challenge of inference, of Bayesian inference. This turns out to be pretty much impossible to solve directly, so there has to be some approximate way of doing it. A good approximate way of doing Bayesian inference is prediction error minimization. So the brain has some starting best guess about what’s going on, and then it uses sensory signals to update these predictions. It takes them as prediction error signals, and by minimizing prediction errors everywhere and all the time, the brain can approximate Bayesian inference on the causes of the sensory signal. Then the claim I put on top of that is that the content of what we perceive is the Bayesian best guess. It’s the joint content of all the top-down predictions that are continually changing throughout this process of prediction error minimization.
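The scheme described here can be sketched as gradient descent on precision-weighted prediction error. This is a deliberately minimal single-level sketch, not the hierarchical predictive-coding machinery; its fixed point is the Bayesian posterior mean even though Bayes' rule is never applied directly:

```python
def minimize_prediction_error(obs, prior_mean, obs_precision, prior_precision,
                              lr=0.1, steps=200):
    """Nudge an estimate down the gradient of precision-weighted squared
    prediction error. The loop never computes Bayes' rule, yet it settles
    on the Bayesian posterior mean."""
    mu = prior_mean
    for _ in range(steps):
        sensory_error = obs - mu       # what the senses report vs. what was predicted
        prior_error = prior_mean - mu  # pull back toward prior expectations
        mu += lr * (obs_precision * sensory_error + prior_precision * prior_error)
    return mu

# Equally precise prior and data: settles halfway, exactly the Bayesian answer.
print(minimize_prediction_error(10.0, 0.0, 1.0, 1.0))
# Trustworthy senses, vague prior: settles much closer to the data.
print(minimize_prediction_error(10.0, 0.0, 4.0, 1.0))
```

The appeal of this formulation is exactly what Anil says: exact Bayesian inference is intractable, but a local error-minimizing loop like this approximates it.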

Jim: All right, let’s move on to the next piece, which is really the Bayesian brain more or less. But let’s do this quickly. We could go down for an hour and a half on the Bayesian brain, but if you could lay out the idea as quickly as you can.

Anil: Well, I think we can do it very quickly because I think we’ve already just done it. The Bayesian brain is indeed this idea that the brain is fundamentally in the business, at least when it comes to perception and action, of making an optimal best guess about the causes of sensory signals. The Bayesian brain, in one sense, it’s a framework. People have criticized it because, they say, it can’t be disproved: you can always describe the brain as doing something that’s Bayes-optimal, you just change your priors about what it should be doing. But that’s not the point. The point is not whether it’s refutable itself; is it a useful way of understanding what’s actually happening inside brains?

So the Bayesian brain gives birth to this theory of prediction error minimization, and that is a testable theory. Are the contents of perception more associated with the brain’s top-down predictions or the bottom-up sensory input? Do we see signals of prediction error minimization in the brain? All of these things are testable. From my lab and from other labs, there’s increasing evidence that something like this is in fact going on.

Jim: Very good. We now can take the idea of minimizing prediction error, and then add action to it, and come up with active inference. So let’s not go down the full rabbit hole of FEP, but you have the example you gave about finding your keys as an example of active inference.

Anil: Yeah, so this is an extension of prediction error minimization. It’s been around for a while. Karl Friston and Andy Clark have prominently espoused versions of it. The idea is that, for a given sensory prediction error, I can either minimize it by updating my prediction, or I can change the sensory data. So let’s say I’m looking for some car keys on my table, which is usually cluttered with all kinds of papers and other rubbish, mugs, and things, and I’m looking for my car keys. I don’t see my car keys. So what do I do? I can make actions that fulfill this perceptual prediction that car keys are present. I can pick up this piece of paper and see if car keys in fact appear. And if they do, I’ve changed the sensory data to fit my prediction rather than updating the prediction, which would be to say, “Oh, there are no car keys now.”

So that’s two ways of doing it. We can view pretty much the entirety of perception and action as finding a balance between minimizing prediction errors through updating predictions and minimizing prediction errors through changing the data through active inference. An even simpler example is just eye movements. We might often move our eyes so that we see what we’re looking for when we’re reading a page of text.

I think the more interesting, challenging example is action itself. So if I want to pick up this mug of tea from my desk, that involves a sequence of actions, motor movements of my arm and my hand and my fingers. How does that happen? One way of thinking about it is through active inference. I make a series of perceptual predictions, but now they’re proprioceptive predictions. They’re predictions about body position and kinematics, movement, and these predictions become self-fulfilling. So instead of updating my prediction that I’m not actually grasping the cup, I make a sequence of self-fulfilling proprioceptive predictions, which result in the prediction that I’m holding the cup coming true. So that’s active inference.
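The two routes for minimizing prediction error, updating the prediction versus acting so the data come to match it, can be contrasted in a toy loop. This is a caricature with made-up update rules and rates, not anything from the book:

```python
# Perceptual inference: reduce error by moving the prediction toward the data.
def perceptual_update(prediction, observation, rate=0.5):
    return prediction + rate * (observation - prediction)

# Active inference: reduce error by moving the world toward the prediction,
# e.g. moving the hand until proprioception matches the predicted grasp.
def active_step(world_state, prediction, rate=0.5):
    return world_state + rate * (prediction - world_state)

prediction, world = 1.0, 0.0   # predict "holding the cup"; hand is empty
for _ in range(10):
    world = active_step(world, prediction)   # self-fulfilling prediction
# world has converged toward the prediction, not the other way around
```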

Jim: Yeah, that’s interesting when you start to think about how those things are all interacting as a constant dance as we try to figure out the world.

Anil: It is true, and I think it’s got interesting implications in AI as well because once you start talking about active inference, you start talking about control. You can do prediction error minimization purely as a way of figuring out what’s going on, but as soon as you want to intervene in a system and control it, then you need to make actions.

So you can think about active inference as an implementation, as a way of performing optimal control in certain situations too, whether it’s in robotics, whether it’s in foraging for information in machine learning, or whether it’s control of the interior of the body in biology. Once you bring control into the picture, I think that enriches this active inference prediction error minimization framework a lot. In fact, I think that’s at the basis of the whole thing because it’s control that I think is the reason our brains are prediction machines, and that came first.

Jim: Well, let’s hop ahead to something that I have later in my notes, and that is the internal state of things like mood and emotion and feelings and how those guide the organism in its path.

Anil: That’s a very nice segue from where we were. One of the things I started thinking about more than 10 years ago now, and probably one of the earliest contributions I was making to this area, was the idea that if our perceptual content of the outside world can be understood in terms of a perceptual prediction, a top-down prediction, what about things related to the self? What about things related to the body?

There’s another kind of inversion here, which is, we might sometimes think that the self is the thing that does the perceiving, but I think a more accurate, certainly a more scientifically productive way to think of things, is that the self is another kind of perception. What we experience as self is a whole bundle of different kinds of perceptual content to do with the body, to do with our social network, to do with our actions and all sorts of things.

One key aspect of the experience of self is emotion and mood. These are among the most important conscious experiences that we have. They make life worth living. They give our life flow and structure.

What is an emotion? An old tradition in biology, in psychology, says that emotions are perceptions of changes in the physiological state of the body. William James and Carl Lange said this in the 19th century. This idea underwent many transformations in the 20th century, and the latest version of it is basically an extension of predictive processing, of this idea of prediction error minimization, which says that an emotion is a perceptual prediction about sensory signals that come from the interior of the body. Now, we tend to think that perception has to do with the outside world, but there’s a whole raft of sensory signals that come into the brain from the body itself, reporting things like heart rate and blood pressure and gastric tension and all these sorts of things. And again, the brain doesn’t have direct access to the body. It has to make its best guess about what’s going on. So the idea is that it’s exactly the same thing. The brain has predictions about the internal state of the body and it uses the sensory data to update those predictions.

What we experience when the brain is doing this, the flip-side of these interoceptive predictions, the phenomenological flip-side, that’s emotion, that’s mood, just as the phenomenological flip-side of predictions about external things are experiences of the external world, like visual experiences, auditory experiences and so on.

So there’s just one process. I think that’s one of the, for me, appealing things about this idea. There’s one single, unifying process of prediction error minimization that unfolds in different ways to give us all the different kinds of conscious content that we have, whether they’re related to the world or whether they’re related to the self.

Jim: Yep. Now, I think this is probably my absolute favorite experimental psychology result of all time, and that is the story of the two bridges.

Anil: Oh yeah, yeah.

Jim: At least for me, this nails this idea so strongly that it’s like, “Yeah, that’s got to be true more or less.”

Anil: I’ll explain the experiment, but one thing first: it would be great to replicate this experiment, but we can’t, because it’s ethically a very dubious thing to do. You could get away with it in the seventies when it was done, but not now at all.

So the experiment was conducted by a pair of psychologists called Dutton and Aron. What they did was they took a bunch of male students, I think students, up on the west coast of Canada, and they went to this canyon, and there were two bridges across this canyon. One was very sturdy and not very far off the water. One was very rickety and had a very steep drop into the frothing tumult below. They divided the students into two groups. One group walked across the sturdy bridge and the other across the rickety bridge.

At the end of the bridge was placed an attractive female volunteer, a stooge in the lingo, who would collect some details from each participant as they crossed the bridge. They would ask them some questions about what it was like crossing the bridge and so on. Then at the end of filling in that questionnaire, the female stooge would say, “Okay, and here’s my phone number. Call me if you have any questions about the experiment.” End of the experiment. Not the end of the experiment. Because the real experiment was to see who called the female stooge, and many more people who crossed the rickety bridge called than who crossed the sturdy bridge.

Dutton and Aron’s interpretation of this, there could be other interpretations, but their interpretation of this was that going across the rickety bridge caused the heart rate to rise, adrenaline to pump, cortisol to start steaming around the body, all these things because it’s a high bridge. And when they got to the end of the bridge, the brain’s interpretation of these physiological changes suddenly switched from fear to sexual attraction. The cognitive framing was different, whereas, on the low sturdy bridge, there was no such change to misinterpret.

So this was taken as an experiment that, at least, is consistent with the idea that the emotion one feels is not purely determined by the physiological state of the body, but it’s a context-sensitive inference about the causes of that physiological state. So yeah, I do quite like it. There’s other experiments which do the same thing, but they’re less fun to tell.
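Dutton and Aron’s interpretation can be caricatured as Bayesian cause attribution: the same arousal signal scored against different context priors yields different emotions. All numbers and labels here are hypothetical, purely to illustrate the context-sensitive inference being described:

```python
# Toy context-sensitive attribution: posterior is proportional to
# prior x likelihood; pick the most probable cause of the arousal signal.
def attribute_cause(likelihood, context_priors):
    posterior = {cause: p * likelihood[cause]
                 for cause, p in context_priors.items()}
    return max(posterior, key=posterior.get)

likelihood = {"fear": 0.9, "attraction": 0.9}   # arousal fits both equally
on_bridge = attribute_cause(likelihood, {"fear": 0.8, "attraction": 0.2})
off_bridge = attribute_cause(likelihood, {"fear": 0.2, "attraction": 0.8})
# same signal, different priors: "fear" on the bridge, "attraction" after it
```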

Jim: Yeah, that one I just love because here you are, the guy’s in a state of arousal from a scary traverse of a bridge, sees the girl, and makes the leap, “Oh, I’m aroused because I’m in love,” right? Approximately, right?

Anil: I was going to say, while I was writing this book, I was up staying in a place called the Lake District, which is a mountainous area in the north of England, which I love. It’s a very beautiful part of the world. There’s one mountain ridge that I’ve walked over many times because it’s near the house up there and it’s very scary. It’s always scary. It’s very slippery and there’s sharp drops on either side and even though I’ve crossed that ridge many times, I always feel this sense of physiological arousal going across it.

This one time, when I got to the end of this ridge, there was a sign that somebody had written. I think it was in chalk on a rock. They’d said something like, “Mary, will you marry me?” I can’t remember if it was Mary now or not, but there was a proposal scrawled onto the rock, and I couldn’t help wondering whether this couple might have crossed that ridge and then misinterpreted the scariness of crossing the ridge and realized that they needed to get married, which I think would’ve been great. I don’t know. Obviously, I can’t tell, but that’s what I like to think happened there.

Jim: Yeah, I love it. Well, let’s go on to the next thing. Let’s drill down a little further into the self. So you’re basically taking these lines and weaving them together and saying that the self, too, is another controlled hallucination.

Anil: Yeah, that’s right. We’ve already talked about emotion and mood. The larger claim is that everything that we experience as part of the experience of being me or being you, this is where the book’s title comes from, Being You, is a form of perception. Again, it might not seem like that. It might seem in our everyday life that the self is this little inner mini-me somewhere inside my head, staring out through the eyes that is doing the perceiving, that is making the decisions about what to do next, that is executing actions, that there’s this essence of me, maybe it’s the residue of the rational mind still somewhere in there, and that perception is something the self does, not what the self is.

The proposal in this part of the book, and like pretty much all of these ideas there’s a long historical precedent for it, is that no, the self is a perception; the self does not do the perceiving. The Scottish philosopher David Hume talked about the bundle theory of self, which is an early version of this.

In the book, I unpack this by talking about the different ways we experience being a self. There’s not just one experience of self. There are experiences of the body, as we said, of emotion, of mood, of body ownership. There’s this object in the world that is my body and other things that are not my body. There’s an experience of a first-person perspective on the world. There are experiences of agency, of free will, of intention. And then there are experiences of personal identity, of having memories and plans for the future, what Dennett called the narrative self. And then there’s the social self: the aspect of being anyone that is dependent on the perception of what others perceive about us. Part of being me is refracted through the minds of everybody that I know.

Anil: … through the minds of everybody that I know. And all of these aspects of self, they are bound together, most of the time, for most of us in a way that feels natural, that resists decomposition. But we know from many experiments in the lab and many examples in the clinics, both neurological clinics and psychiatric clinics, that all these different aspects of self can come apart.

And so, the idea that there is a single essence of me that is the locus of self, well, that is an illusion. But the experience of self is there. It’s as real as the experience of red. It points to something; it tracks things that are really happening in the body and in the world. It’s just not what we think it might be.

Jim: I like that. Yeah. I think of the sense of self, let’s say the folk sense of self, as essentially another class of qualia, right? Which is interesting. And you quote some research, I’ve never actually done this, but I know somebody who was a subject for it and he said it was uncanny, which is the rubber hand illusion.

Anil: Yeah, this is great. Actually, since writing the book, or I think while I was finishing it, this rubber hand illusion has taken over my life to an extent which I didn’t quite predict would happen. A failure of prediction on my part there. The rubber hand illusion, it’s a lot of fun. All you need is a rubber hand and a piece of cardboard and maybe a hammer or a knife or something.

Anyway, what happens is you take the participant. They’re sitting at a desk with their real hand on the table, hidden behind a cardboard partition so they can’t see it. Then you put a fake hand where the real hand might plausibly be, in front of them on the table. Their other hand is also on the table.

Anyway, you then take a couple of paintbrushes and you stroke in time both the fake hand, which they can see in front of them, and their real hand, which they can’t see. Now, imagine being the brain of this person. You’re taking in sensory information. There’s a hand. You can see the hand being touched and you can feel touch because your real hand is being touched even though you can’t see it.

And so the idea is, or the usual story about this is, that the brain sees touch and feels touch on this rubber hand and therefore makes the inference that, “Oh, it’s my hand.” And so you have this uncanny experience that this rubber hand is somehow part of the body. And the best part, I have this video, which I often show in talks, of inducing the illusion and then taking a fork or a knife or a hammer or whatever and attacking the fake hand, and the person jerks their real hand away. Well, I mean, I’d probably do that anyway, frankly, if somebody came towards me with a hammer in the lab.

But I’ve tried this too. And it does feel like, you feel a sense that your body is being threatened even though you know it isn’t because, at some level, you know it’s not your hand. What this illusion does do is demonstrate the malleability of our perceptual experience of what is the body. But the reason it’s taken over my life for the last couple of years, to a non-trivial extent, is that the standard story seems to be at best incomplete and probably entirely wrong.

My colleague, Peter Lush, at Sussex always suspected that the rubber hand illusion and experiments like it are riven with what psychologists call demand characteristics. Demand characteristics, which go back to the ’60s at least, are aspects of an experimental design which implicitly encourage people to have a particular kind of experience. And it’s like rule number one of psychology, or should be: control for demand characteristics.

If your experimental setup implicitly encourages experiences in one condition but not another, and then you get those experiences, well, that might be purely due to the different expectations that the person implicitly has and might not be due to whatever intervention you’re doing. And so, what Pete did in order to test this was run the rubber hand illusion on about 400 people. We had a rubber hand illusion factory going on at our university for a couple of weeks, just one person after the other.

And we also measured how hypnotically suggestible each person was. The idea being that the more suggestible they are, and this is a reliable psychological trait, it’s not just Derren Brown’s stage stuff, we are all hypnotizable to some degree. The prediction would be that if you are more hypnotizable, you will experience the rubber hand illusion more strongly because demand characteristics will have more of an effect.

And that’s exactly what we found. Hypnotizability predicted pretty well how people experience the rubber hand illusion. And we also asked people, in a separate experiment, what they would expect to experience in the rubber hand illusion. And indeed, what they expect to experience is what they report when it’s working. We now think that the rubber hand illusion is real in the sense that a lot of people do experience it, but there’s a lot of individual variation.

And that, to me, points to it being more of a top-down imaginative suggestion effect than a low-level bottom-up integration of the senses effect. But I have to say, there’s a lot of argument in this rubber hand illusion micro-field of psychology at the moment. Not everybody agrees with us, but I mean, that’s how you make progress. You have disagreements and see which wins out in the end. And I think we’re on the right track.

Jim: Well, it seems like either interpretation, at least it does show that the concept of self is hackable.

Anil: That’s right. Yeah, absolutely.

Jim: I mean it’s maybe a different mechanism than people thought, but at the level of is this self an actual reified thing or is it an ad hoc collection of what happens to be going on at the moment? I think it supports that second position reasonably well, irrespective of what the mechanism is.

Anil: That’s right. Yeah. Another way to say it is it really does show you there are aspects of our experience of self that you just cannot take for granted. It’s easy to take for granted that I will always experience my body as my body. The illusion shows that that’s not the case. And then there are clinical conditions like somatoparaphrenia, which is a condition where people experience that their hand or arm or some other limb in fact belongs to somebody else.

This is a very weird one, to the extent they might throw themselves out of bed because they feel that their arm is somebody else’s arm and what’s it doing in bed with them. Even when you point out that their arm is connected to their shoulder, so therefore must be theirs, it doesn’t change that experience. There are all these examples which indeed show the malleability, the hackability of things we might otherwise take for granted about our experience of what it is to be a self.

Jim: Yeah. In some of those cases, I think with these foreign limbs, people go so far as to have them amputated because they just can’t rest with them still being attached to their body, which shows how extreme those things can be. Which is interesting.

Anil: Yeah, it is.

Jim: Now, the other thing, of course, I remember thinking about this when I was 10, and I’m sure lots of people have thought about it over the years: are we the same person? How does our self change over time? And what the hell does that say about selfhood?

Anil: Yeah, this is such an interesting topic because it has, I think, so many implications, not just for science, but for how we live our lives and how we think about the end of our lives and the process of aging. There’s a phenomenon in perception called change blindness, which has been studied for a very long time.

When we look at a scene and some aspect of that scene changes very slowly, and we are not paying attention to that aspect because, well, we have no reason to think it’s relevant. Maybe the background color of the room. I’m looking at you on my computer screen at the moment; there’s a pale green color to the wall behind you. If it had started off as orange and changed very slowly to green, I probably would not have noticed. That’s change blindness. It’s change of perception without perception of change.

In brackets, I try to use this distinction to get out of a driving ticket I collected in San Diego many years ago, which was a defense that failed dramatically in traffic court, but I still think it was right. I still think I was in the right. Perception of change is not the same thing as change of perception. Now, if the self is a kind of perception too, or a bundle of perceptions, then probably change blindness applies here too.

And indeed, aspects of ourself do change quite slowly over time. Sometimes we get dramatically reminded of this when we look at an old photo. Or more worryingly, when we do one of those fast-forward your age computer things where we can age ourselves and see what we might look like in 20 years.

We change slowly. Our perception of ourselves might change, but we do not perceive that change happening. That I think raises the possibility that we could have a form of self change blindness. And in the book, I actually go into this in a bit more depth because there are actually very good reasons why this might be the case, why it might be a good thing for our perceptual mechanisms to work this way.

And this gets back a little bit to the idea of perception is control. If our perception of self is mainly about control, controlling ourselves, then it’s probably a good idea for our brain’s perceptual mechanisms to assume that the self doesn’t change very much because that then helps it regulate the self, keeps it stable.

But the upshot is indeed, we are not the same person that we were five years ago or even five minutes ago, though we were quite similar five minutes ago. What it is to be me, what it is to be you, what it is to be anyone is continuously changing. And that, I think, has quite profound implications for how we think about the end of life. Because a lot of our fear of the end of life is a fear of loss, of holding onto this stable, unchanging essence of self. But if that’s not there, then there’s nothing to lose.

Jim: Yep. As somebody approaching the end of the road more closely than most, truthfully, I’ve never had a fear of death. I don’t know why. It’s one of those things I just don’t have. But I can see how that would be a useful consolation to those who do: “Hey, you’ve been changing ever since you were a tiny kid, now you’re going to change and then it’s going to stop. Oh, well, whatever.”

Anil: Yeah, exactly.

Jim: Which, frankly, I think is the healthy way to think about it. Speaking of which, in the book, you suggest that self and personal identity may not be identical. That there could be selfhood, like say in young children or in some animals, without a sense of personal identity. Could you disentangle those two a little bit?

Anil: Yeah. This goes back to a minute ago when we were talking about the different ways in which we experience being a self. And while normally they all feel bound together, they can come apart. Personal identity for most of us is that part of self that we associate with a name, like Anil Seth, a set of memories of what happened in the past and plans for what might happen in the future. It’s that narrative sequence of events that defines me as an individual distinct from other individuals.

That’s different from what you might call more basic levels of self, which are just experiences of mood, of embodiment, of agency, of will. It may be possible to have these aspects of self without an elaborate sense of personal identity. Indeed, this might well be what being an infant is partially like. It’s not until at least 12 to 18 months that a human infant can recognize itself in a mirror, which is one way of suggesting that there is no sense of an individual distinct from others before that point. But clearly, emotions and other things might well be going on.

And certain cases in adult humans illustrate this as well. One case I talk about in the book is a musicologist called Clive Wearing, who had a brain infection, encephalitis, which totally destroyed his ability to lay down new autobiographical memories. Nothing that happened to him was remembered again after this illness in the 1980s.

A big chunk of his self was gone. His sense of a continuous, changing, evolving personal identity is not there. But of course, other aspects of his self remained the same. For his wife, he’s still in many ways the same person. He has emotions, he experiences his body. He can even go back to his choir and conduct it. So yes, there are dissociable aspects of self, and the impression that there aren’t is one of the illusions of self-perception.

Jim: Interesting. Speculation called for. What do you think about advanced mammals, like say a smart dog? What are your thoughts about that? Do they have a sense of identity or selfhood one level below personal identity?

Anil: I think when it comes to domesticated animals like dogs and cats, and I know I’m going to offend a ton of people here, I don’t think they have the aspects of animal selfhood that we might impute. One reason for that is this mirror self-recognition test, which I mentioned a minute ago. Yeah, it’s one test, it’s fallible and so on. But it’s surprising how few animals ever pass it.

Dogs and cats do not pass it. However much you train them, they don’t seem to get it. Orangutans do, gorillas do. The odd elephant does. Dolphins may do, very few animals do. Whereas for us, it’s entirely natural after a certain age. Whatever that aspect of self that we have that makes mirror self-recognition totally natural, dogs and cats don’t have it. Their sense of self is different.

And I think we overestimate it because, of course, we’ve trained these animals over many, many generations to respond to us in ways that give us the impression that there’s an individual self of a similar sort going on there. But they have other aspects [inaudible 01:40:37].

Jim: It’s very interesting.

Anil: I don’t want to pretend that other animals lack any kind of conscious experience. That’s, I hope, clear. That’s a very different thing. And all mammals have the same basic neuronal hardware that we know is important in consciousness in general. And for many aspects of self too, but perhaps not these aspects of personal identity and social self that we humans have.

Jim: Yeah, though it is interesting, we’ve had numerous dogs over the years, and they definitely have very distinct and stable personalities. At some level, even if they don’t know that they’re the same person from day to day, they nonetheless are, so there may be some sense of self there.

Anil: Oh yeah, again, it can be, and I’m sure it is true, that different individuals have distinct dispositions to behave and react in particular ways. But the fact that that’s true just does not entail that they have this thing called a personal identity over time, that they experience themselves as an individual with those attributes. That’s another level of cognitive sophistication, and the mirror test is one way of testing for it. There may be others. Having a personality is completely compatible with there being aspects of selfhood, just not the same kind of self that we humans have. That’s what I’m saying.

Jim: Interesting. All right, let’s move on. This is probably the last thing we’ll have time for today. There are some other interesting topics, like the inevitable AI question, the free energy principle, some other cool things in the book, which I would point interested people to. But I think, as my takeaway at least, the unification in your book was your section on being a beast machine.

Anil: Yeah. Thank you for raising that. This was interesting for me because it was really helped by the writing of the book. One of the underappreciated benefits of sitting down to do, over years, this very painful task of writing a book is that it makes your own ideas clearer to you, and how they all fit together. And this happened to me in writing the chapter about the beast machine.

The term beast machine comes from Descartes. In French, bête-machine. And he used it in a fairly derogatory way about non-human animals like dogs, for instance. For Descartes, it was the presence of a conscious, rational mind that was all important. That was his res cogitans, this stuff of conscious thought. And he reserved this for humans. Other animals in Descartes’ writing at least. In his personal life, maybe not, but then that’s a common thing, right?

In his writing at least, non-human animals were beast machines. They were flesh and blood machines. Their reactions, their apparent emotional displays, gave them no conscious status of the sort that he attributed to humans in virtue of conscious, rational minds. To put it more simply, the living flesh and blood status of other animals was not relevant to whatever kind of consciousness they may or may not have. And I think entirely the opposite is the case: that we are conscious and we have conscious selves because of our living flesh and blood beast machine nature.

And very briefly, the way I think about this starts and ends with prediction error minimization. The free energy principle is another way to get at it, but we will probably avoid that rabbit hole for now. The basic idea is: what did brains evolve for? They didn’t evolve to do poetry or neuroscience or philosophy, or even language or anything beyond keeping the body alive. Keeping the body, and therefore the brain, alive is, evolutionarily speaking, the fundamental imperative for any brain. Even action comes after that.

How do you keep the body alive? Well, fundamentally it’s about control and regulation. There are things like heart rate, blood pressure and so on, that need to be kept within certain bounds for the organism to keep being viable, physiologically viable. How do you keep things within certain bounds? Having a predictive model of how these internal quantities behave is very useful. If you’re going to control the system, whether it’s a body or a central heating system, having a predictive model of it is extremely useful.
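As a toy illustration of why a predictive model helps regulation, here is a regulator that acts on the predicted state of a drifting variable rather than its current value. The setpoint, drift, and gain are arbitrary made-up numbers, purely a sketch of the control idea:

```python
# Internal model: without intervention, the variable drifts each step.
def predict_next(temp, drift):
    return temp + drift

# Act pre-emptively on the *predicted* state, not just the current one.
def regulate(temp, setpoint, drift, gain=0.8):
    predicted = predict_next(temp, drift)
    return predicted + gain * (setpoint - predicted)

temp = 36.0
for _ in range(20):
    temp = regulate(temp, setpoint=37.0, drift=-0.3)
# temp settles just below the setpoint despite the constant downward drift
```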

In this view, the reason that brains are prediction machines is fundamentally because of this basic biological imperative, this drive to stay alive. All the predictive machinery that then allows us to perceive the world and other more elaborate aspects of the self evolved, develops, and operates from moment to moment in light of this fundamental drive to stay alive.

This means that perception, all forms of perception, and therefore, for me, also our nature as conscious beings, is intimately related to our flesh and blood materiality. We are conscious and we have conscious selves because we are beast machines. I go a little bit further, of course, in the book, in trying to make this argument about why this imperative for regulation goes all the way down in living systems.

It’s not just a circuit-level thing; it goes all the way down even to individual cells. It’s very common in discussions of consciousness to think that it’s some kind of information processing and that the substrate doesn’t matter. We could be made out of silicon or tin cans wired up the right way; it wouldn’t matter. But actually, it might matter, because of this intimate relationship between life, mind, and consciousness.

There’s no sharp division in creatures like us between what we might call mindware and what we might call wetware. Where does the substrate start? It’s not clear in biological systems. It could be life that breathes fire into the equations of consciousness, not information processing or anything like that. And I find this compelling, and I find it surprisingly reassuring. It’s a bit like John Searle, you mentioned earlier he talked about digestion, relating consciousness to digestion. That was interesting; that was trying to make the case that consciousness is a biological process, but it was more by analogy rather than by isolating a deep unifying principle between life, mind, and consciousness. And that’s what I’ve tried to do in this beast machine argument.

And then really, you can unpack everything we’ve talked about from this basic starting point that all aspects of consciousness, of varieties of perceptual prediction, all are grounded in our nature as living systems. All of our experiences, even a visual experience is inflected by emotion at some level. And there’s some embodied aspect to everything we experience.

And that all begins to make sense now if we think of prediction, perception as grounded in our flesh and blood materiality. That’s a high level view of the beast machine argument in the book.

Jim: Yeah, I found that the center of the book. I’m always looking for the center of a book, and when I got to that chapter, I said, “I think this is the conceptual center.” And it really held together from that point, which I thought was very good. And of course, in the context of Darwinian evolution, it’s what you would expect, right?

Anil: Yeah. It’s what you expect. And it aligns with, it’s similar to, people like Antonio Damasio, whom you’ve had on the show. He’s great. I’ve also had some really good conversations with him, learned a lot from him. But it rails against our human exceptionalism, because we tend to fixate on things like intelligence and rationality. Descartes did for sure. Rather than these more basic biological things which we share with many other animals. But that may be where things start. And you’re absolutely right that, evolutionarily, that’s where things should start.

Jim: Yeah, makes perfect sense. Well, I want to thank Anil Seth for a really interesting conversation here. Those who are interested should go check out the book, Being You: A New Science of Consciousness. And if you want to take part in his perception census, we’ll have the URL up at the episode page. Anil, thanks a lot.

Anil: Thank you, Jim. It’s been a real pleasure. Thanks for having me on the show.