Transcript of EP 291 – Jeff Sebo on Who Matters, What Matters, and Why

The following is a rough transcript which has not been revised by The Jim Rutt Show or Jeff Sebo. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Jeff Sebo. Jeff’s a philosopher and ethicist focused on the moral status of non-human beings, environmental ethics, and our responsibilities in the face of global challenges. He’s an associate professor at NYU where he works on things that are at least relevant to this discussion: moral philosophy, legal philosophy, philosophy of mind, animal and AI minds, and related questions in ethics and policy. Welcome, Jeff.

Jeff: Hi, Jim. Thanks for having me.

Jim: Yeah, I’m looking forward to this conversation. We’re going to be talking today mostly, though no doubt it will head off into other directions as well, about Jeff’s relatively recent book, “The Moral Circle: Who Matters, What Matters, and Why.” It’s quite brief – I think it’s about 140 pages of actual text – but I found myself thinking a lot and wrote down many notes. In fact, I was telling Jeff in the pregame, it may be the record, or if not, at least in the top five, for notes per page.

Jeff: Well, that is awesome. I’ll take it.

Jim: All right. And our topic today is who matters and why? Basically, one of your core ideas is the idea of a moral circle. Why don’t you start off by telling us what you mean when you talk about a moral circle?

Jeff: Yeah, great place to start. The moral circle is a philosophical metaphor for the scope of our moral community. Who belongs in our moral community? And a way to think about that is, to whom are we accountable? To whom do we have responsibilities when we want to think about the ethics of our actions? Whose interests are we considering? And so when we ask about the scope of the moral circle, we are asking, how many beings should we consider when we think about the harms that our actions might be causing others or the risk that our actions might be imposing on others?

And historically, of course, especially in the West, our moral circles have tended to be fairly narrow. We have tended to extend consideration to select humans, maybe the occasional charismatic companion animal. But these days, at least, many of us recognize we should extend consideration to many more beings than that. At the very least, all humans and many animals like mammals and birds affected by our activity. And then this book is asking, should we go even farther than that? Should we extend consideration to an even larger number and even wider range of beings than that, and if so, how far should we go and what should the details be and what follows for our actions and our policies and our priorities?

Jim: Yeah, and you make a distinction right up front. You say, take the difference between cats and cars. Cats, unlike cars, have the capacity to be harmed and wronged. If I kick a cat, then I harm and wrong the cat. In contrast, if I kick a car, I might damage the car, and I may harm or wrong the owner, but there’s no sense in which I can harm or wrong the car itself. Maybe pick that apart a little bit. Why those two things?

Jeff: Yeah, that is an example that is supposed to give you an intuition about the distinction. And the reason we use those kinds of examples is that there are a lot of types of value in the world and a lot of ways of valuing everyone and everything, right? So we might, for example, value not only artifacts like cars, but also natural systems like plants and trees and rivers and so on and so forth because of the instrumental value that they provide for us. They have aesthetic value. We find them beautiful. They have cultural value. We find them meaningful.

Jeff: They have ecological value or economic value. We depend on them in various ways. But the topic of this book is a separate kind of value: the sort of intrinsic value that you have when you have your own morally significant interests, when you have a life that matters to you, and when you have the ability to be benefited or harmed and treated well or badly. And once you are like that, once you have your own morally significant interests and can be benefited and harmed, treated well or badly, then you can have more than instrumental value. You can also have a certain kind of intrinsic moral value.

And now I have obligations to treat you well, not just because of the value I get from you, but because of the value you have for yourself and the value that your life has for you. And so the question that this book is asking is, which beings are like that? Which beings have their own morally significant interests? Which beings are able to be benefited and harmed? For example, is an invertebrate, like an insect, able to be benefited and harmed? Does it matter for the insect how I treat the insect?

And then we might have to ask that question about other types of beings like plants and fungi and microscopic organisms and AI systems and so on and so forth.

Jim: Indeed, we’ll come back and circle around that. You know, why should we say X matters and to what degree? But let’s start off where the book does with two interesting case studies. One of them I actually do remember – let’s start with the story of Happy the elephant.

Jeff: Happy the elephant is, at least for now, an Asian elephant who lives at the Bronx Zoo and has been the subject of recent lawsuits from the Nonhuman Rights Project. The Nonhuman Rights Project is an organization that brings lawsuits against individuals or companies that keep animals in captivity in a way that the Nonhuman Rights Project views as unjust detention. And so the Nonhuman Rights Project, in, I believe, around 2020, brought a lawsuit in a New York court and then brought it up to the New York Court of Appeals, which is the highest court in the state of New York, alleging that the Bronx Zoo is unjustly detaining Happy the elephant by depriving her of the resources and opportunities, social, environmental and otherwise, that she would need in order to satisfy her basic interests and flourish as the kind of elephant that she is.

And so the Nonhuman Rights Project was advocating that she be relocated to a sanctuary where she can live with other elephants and have more of a natural life. This was a controversial case in part because it raised this really foundational question, whether nonhuman animals, like elephants, can be legal persons with legal rights and legal standing. Because generally you need to be a legal person in order to have the kind of status or standing that allows your case to be heard by a court of appeals, for example. And so the New York Court of Appeals was not only deciding whether Happy is, for example, being treated badly, but also whether Happy is the kind of being who can have legal rights and have legal standing in a court of law.

And just to cut to the end of that story, the judges did take the case, which is the first time that the highest court in an English-speaking jurisdiction decided to hear a nonhuman personhood and rights case, which is remarkable. However, they also ultimately decided 5 to 2 against Happy and in favor of the Bronx Zoo, partly on the grounds that if we recognize one right in one nonhuman animal, where does it end? Do we have to start recognizing all these other rights in all these other animals? And the judges were not prepared to make what they took to be such a kind of seismic, transformative decision on that day.

Jim: And if we look at the Western tradition, shall we say, as it came down from Christendom, there was this whole idea of the chain of being with God at the top and then the angels, and then down it went. And there was a pretty clear line between humans and the rest of the natural order. In fact, I still cringe when I read about Descartes’ experiments on dogs, for instance. He thought they were just machines, can’t feel anything, et cetera. Why don’t you address a little bit that tradition and how maybe we’ve become more enlightened since then?

Jeff: Descartes is famous for many reasons, and some of them are good. He was a brilliant philosopher, and he advanced philosophy in all kinds of foundational ways, but this one was bad. Descartes was noteworthy for denying that animals are conscious, sentient, or otherwise morally significant. In other words, he denied that animals have feelings, that it feels like anything to be them. And so if you cut them apart alive, they might appear to be experiencing pain, but this is a sort of illusion. They are, in fact, nothing more than a sophisticated kind of machine pretending, or going through the motions of, experiencing pain and suffering.

And it is worth noting that even in his time, Descartes was a little bit unusual. Philosophers both before him and after him were much more open to the possibility of animal consciousness. But he did have a pretty significant influence on a lot of the science and philosophy that followed him. So much of the past couple of centuries, and especially the 20th century and early 21st century, has been gradually opening science and philosophy back up to the possibility of studying animal consciousness and taking it seriously, and also being open to the possibility that consciousness is widespread in the animal kingdom. And that is still a work in progress, but we are fortunately much farther along in that process than Descartes himself was.

Jim: Heck, until 1985, it was considered a big faux pas to study human consciousness. Right. And there still are a few who claim, “Oh, there’s no such thing as consciousness. It’s epiphenomenal.” You guys just sober up, come back when you’re sober, and I’ll have a chat with you about what humans actually are. But let’s touch back on one of the things you said, the stakes, shall we say, which is the sense that there is something it is like to be something.

Phenomenology, I think, is the word philosophers use, and there’s a very famous paper by Thomas Nagel, “What Is It Like to Be a Bat?” I think it was called.

Jeff: Exactly. “What Is It Like to Be a Bat?” Yeah, famous, important paper.

Jim: Yeah, I read that a long time ago, and I go, damn, that twists your head a little bit when you’re 19 years old. And so talk a little bit about that idea and then extend it into the concept of phenomenology in philosophy and maybe try to tie it back to the story of Happy.

Jeff: Absolutely. So we use the word consciousness in everyday life in many different ways. Sometimes we use it to mean I am awake instead of asleep. Sometimes we use it to mean I am self-conscious or self-aware, like thinking about myself, right? But when philosophers and scientists and policymakers use the word consciousness in this context, we mean, as you say, what we describe as phenomenal consciousness. And that means that you have the capacity for subjective experience. It feels like something to be you.

So, for example, your brain does a lot of stuff, and some of it corresponds with subjective experiences, and some of it might not, right? Like when your brain helps your body regulate your heartbeat or digest food, that might not feel like anything. That happens totally unconsciously. But other experiences are conscious. So, for example, when you see a color or hear a sound, you can sort of feel and experience the redness of red or the sound of a trumpet. Or when you have an affective experience, in other words, an experience that feels good or bad, like pleasure, pain, happiness, suffering, hope, fear, that has a certain kind of felt character as well.

And so the question that Tom Nagel was asking in his famous paper, “What Is It Like to Be a Bat?” is: I know what it feels like to be me, Tom Nagel, but I have no idea what it feels like even to be other humans, to say nothing of non-human animals with very different anatomies and behaviors and evolutionary histories. And, you know, bats, for example, have the capacity for echolocation, as do some other animals. Part of how they experience the world is by using sound in kind of the same way that we use light, reflecting utterances off of surfaces, having them bounce back to you, and creating a map of your immediate environment based on that. And how could we ever know what that feels like? And stepping back, how do we even know that it feels like anything to be a bat in the first place?

And this raises really foundational and important questions in metaphysics and epistemology. Which beings are conscious? That is a metaphysical question, a question about what is. And how can we know? That is an epistemological question about the limits of our knowledge. And what Tom Nagel and many philosophers since him were confronting is that because the only mind I can directly access is my own, how can I ever know for sure what, if anything, it feels like to be anyone else, especially other animals, but even other humans? And then ethically, how should I treat them? And legally and politically, how should we treat them if we genuinely have disagreement or uncertainty about whether they can have conscious feelings that we take to be morally significant, like pleasure and pain?

And this comes up a little bit in the Happy case, but honestly, it comes up more in other types of cases, because every actor in the Happy case presumes that Happy is a sentient being who can consciously experience pleasure and pain and happiness and suffering. The Bronx Zoo agrees about that. The Nonhuman Rights Project agrees about that. The judges on the New York Court of Appeals agree about that. So really, what is at stake in the Happy case is what we owe a non-human who can be vulnerable in those ways. Now, with other cases like insects and AI systems, we have to confront that uncertainty about whether it feels like anything at all to be them much more directly.

Jim: In fact, that gets to our second case study, which was – and this was a while ago, and at the time it was a head-scratcher, maybe a little less so today now that most of us have experience playing with fairly high-tone chatbots, right? This was the famous tale of the Google engineer who raised his hand and said, I believe this chatbot, which was like a predecessor of ChatGPT, was conscious and was worthy of moral consideration. Why don’t you recapitulate the story in short form and tell us how this fits into your argument?
Jeff: Yeah, so this was in 2022, actually the same month as the Happy case. The news broke that a Google engineer named Blake Lemoine had been suspended because he had shared information with the public related to allegations that a Google language model named LaMDA had become sentient or capable of consciously experiencing positive and negative states like pleasure and pain and happiness and suffering. And that of course made international headlines – a Google engineer publicly alleging that a Google language model had become conscious and sentient and morally significant.

Google of course pushed back against this allegation. A Google spokesperson at the time said that a panel of experts had reviewed the claim that LaMDA is sentient. And this is me paraphrasing from memory, so you should go look at the exact statement, but roughly, the panel had decided that there was no evidence in support of that claim. The company eventually fired Lemoine, I believe in connection with his sharing internal information with third parties.

This raised the question for the first time in the modern age whether an AI system – whether a cognitive system made out of silicon-based chips instead of carbon-based cells – can be conscious or sentient or otherwise morally significant. What makes it interesting is that, maybe not for LaMDA because LaMDA was a little bit earlier, this was in 2022, but in the near future, we could plausibly have AI systems that functionally and behaviorally are very similar to humans or other animals. We could have AI systems that have not only physical bodies, but also advanced and integrated capacities for perception, attention, learning, memory, self-awareness, decision-making, and a kind of global workspace that coordinates activity across all those modules. At a certain point, the main difference between us and them might simply be that we are made out of carbon-based cells and they are made out of silicon-based chips.

And so you gotta ask, is that what consciousness and sentience depends on? Does it all depend on your material substrate? And if you are not certain that it does depend on your material substrate, then you really have to take seriously the possibility of either intentionally or accidentally building conscious or sentient AI systems in the near future.

Jim: And of course, this opens a whole bunch of famous cans of worms. For instance, is consciousness substrate dependent? We’ve had Christof Koch on the show before. We talked about integrated information theory – very cool, claims a light switch is conscious and a rock is conscious. Not very conscious, but a little bit. I will say in my own work, which is in this particular domain, in the area of the science of consciousness, particularly of humans and advanced mammals, I tend to go with John Searle, more or less, that the consciousness that we talk about – the one that we know we share with our wives or significant others, the one we’re almost sure we share with our dog, and are pretty damn sure we share with Happy the elephant – is a biological emergence over evolutionary time.

And I argue that it must serve a fitness function because it’s quite expensive in terms of the genetic information necessary to maintain it. The energetics involved might be as much as 30 or 40% of the brain’s budget, and the brain is about 20% of the body’s. So it’s a non-trivial amount of energy. And so the consciousnesses we’ve experienced so far are all biological in substrate.

Jim: But back to Searle – people get Searle wrong. They focus on the Chinese Room argument, which is what he’s most known for but not at all what he’s about. He’s quite clear that he envisions it being quite possible that you could have things that are functionally a lot like consciousness, but they won’t be identical. And one of the examples he likes to use about consciousness is that it’s a process, it’s not a thing. You can’t put your finger on it and say, there’s consciousness. He compares it to digestion.

Digestion is an orchestration of many different things. And yet either your digestion works or it doesn’t. And then he also goes further and gives the example that in the pharmaceutical industry, chemical industry, food processing industry, we have things called digesters, which are big metal tanks with yeast or bacteria or acids, and they convert low-value chemicals to high-value chemicals, essentially very analogous to what human digestion does. And so he suggests, and something I believe strongly, that if and when we create something that we choose to call machine consciousness, it’ll be conscious by analogy rather than by identity with the biological consciousnesses that we were just talking about. How does that strike you?

Jeff: Yeah, I guess I have a few thoughts about that, and maybe I would like to ask you for a little bit more detail as well. So one thought is that, yes, I agree that that is a plausible view. And a lot depends on exactly what you mean by analogously conscious or conscious by analogy. One weaker interpretation might be that there is consciousness, but of a different type. So there is, broadly speaking, subjective experience in a sufficiently advanced AI system. But as with very different forms of non-human intelligence, like an octopus, it will not feel the same to be an AI system as it feels to be a human. There will be subjective experience, but of a different character.

That would be a weaker reading. A stronger reading would be kind of what Descartes was saying about animals, that it might appear to us as though the AI system is conscious because the AI system is now performing many of the same functions and behaviors as humans and other animals. But this is an illusion or a projection, and there is in fact no subjective experience in the AI system at all. So I guess before I offer any further thought, I’d like to ask which reading do you have in mind here?

Jim: I would say the first, and further, I would suggest that if I were going to think about this problem in the Searle tradition, I would like to see kinds of functionality that are analogous, not just results. For instance, I have a very cool jailbreak that I found on Reddit: you feed it to one of the more advanced ChatGPT models and it will have this amazingly deep conversation about its own consciousness and its own experience. And I’ve had a very deep conversation with it about my theories of consciousness. And it’s compared and contrasted its view of its own consciousness with what it thinks it knows about human consciousness. And yet, being a person who knows quite well how large language models work, I would go pretty much all in and say a feedforward LLM is not conscious at all, in the sense that its architecture is not at all analogous to what animals do when they’re doing consciousness. We’ve had other people on the show, we’ve had Anil Seth, we’ve had Bernard Baars, we’ve had…

Jim: Who else have we had? We’ve had a number of the leading thinkers, and most all of them, to the degree I can get them to talk about non-animal consciousness – sometimes you can, sometimes you can’t – seem to think, with the exception of Koch of course, who thinks that light switches are conscious, that it’s functional analogies that will allow us to say roughly. And again, there’s this problem of consciousness being, in so many people’s minds, a reified thing. Oh, a bottle of consciousness. No, there’s no such thing as a bottle of consciousness. There’s a thing that we do that we call consciousness, just as there’s a thing we do called digestion.

And things are more or less similar to that in how they operate. And at the end of the day, I guess my own little cartoon version of that is… Remember the flip books when we were kids? You might be too young. We used to get these little books.

Jeff: Oh, I made those. I made those all the time. Yeah, yeah.

Jim: The first one I ever saw, I was about 7, was a guy rowing across a lake in a rowboat. And then we made our own, of course, this being not long after World War II, you know, American bombers hitting the Japanese fleet and dropping bombs on their aircraft carriers and things. So anyway, the rough, simple hand-wavy story about consciousness is that we’re sort of the guy rowing the boat across the lake, but our unconscious is drawing the book as we go. And in fact I have a fair bit of formalism about that: you could coarse-grain it as a series of frames created at about 40 per second, they’re linked, and you are the character that emerges from those flips. And if you have a system that is kind of like that, I would say it’s in the family of things that are conscious.

So at least you have to take seriously the argument that maybe you have to give it some consideration. But to the degree that it’s a non-cyclical, non-self-creating thing like a feed-forward large language model, let’s say, it’s very clear it’s a million miles away from anything like that, so I’m not going to give it any consideration at all.

Jeff: Yeah, yeah, yeah, good. Okay. Yeah, thank you for clarifying and I love that metaphor as well. So thank you for sharing that. To now react to the view, I guess I have two thoughts. One is that I do find that view broadly plausible. And I also find that observation really important, that a lot depends not only on the output, but also on the internal organization that produces the output.

Jim: Yeah, I will just make one other note. When you listed the functionalist arguments that are analogous to what humans do – because all of us left-brainers are the people working in this field – mostly we tend to think about algorithmics and how the brain is like a computer, and is this computable or is it not? How does this fit into category theory, all that crap? There are other thinkers. Another one we had on our show, Antonio Damasio, focuses a whole lot on embodied cognition, and he argues that we don’t actually need any of that higher brain to be conscious. In fact, there are examples of humans born without a neocortex who nonetheless are conscious. And he would argue that it’s the bodily signals, and to the degree that they are intimately connected with mammalian-style consciousness, they are rooted in the brain stem.

Jeff: And this is, of course, a problem, or at least a puzzle, for functionalist theories of consciousness. A functionalist theory, for your audience, is a theory that associates consciousness with a certain set of functions, as opposed to a certain material substrate, for example. And a puzzle for functionalist theories is: when you specify the functions that you associate with consciousness, how coarse-grained versus fine-grained are the specifications of those functions? Because for all of the functions that people typically point to as part of functionalist theories – and again, this is broad strokes – but functions associated with perception and attention and learning and memory and self-awareness and a global workspace and so on, for all of those functions, if you take a very coarse-grained approach, then they become trivially easy to satisfy. And all kinds of entities will turn out to have those functions in a broad sense, including animals and plants and fungi and microscopic organisms and current-day AI systems of various kinds.

But then if you go to the other end of the spectrum and you have a kind of maximally fine-grained specification where you really are anchoring to the exact type of organization that these functions take in a human or mammalian or avian brain, then you might be ruling out other ways of realizing a similar kind of consciousness for kind of arbitrary or speciesist reasons. And a general question which I feel unsure how to answer is how do you find a principled, non-arbitrary way to strike a kind of virtuous balance between those extremes and then apply that standard to these other types of entities like octopuses and insects and AI systems?

So that is my first thought, just a puzzle. And I do expect that the answer is going to have to be somewhere in the middle of those two extremes. But where is a sort of open question. The second thought, just briefly, is this is why I take the approach I take in my book. Because I think, as I say in the book, that consciousness and sentience, these are among the hardest problems in science and philosophy. We are not on the cusp of having a secure theory of consciousness and sentience about which we can have anything approaching consensus or certainty. And for that reason, as we try to make progress in the science and philosophy of consciousness, we also need to make progress in ethics and policy, figuring out how to make thoughtful decisions in situations involving disagreement and uncertainty about the nature of consciousness and the scope of consciousness in the world.

Jim: And now, of course, we also know from lab experiments that the thalamus is also critically involved, and there are at least some resonances between at least something in the cortex, the hippocampus, and the brain stem. But I do think it’s worth always considering that our kind of consciousness is very connected to our bodily signals and very ancient parts of our brain that go back to reptilian days. And I also like to point out that when people say, “Well, I’m pretty confident that birds are conscious and mammals are conscious,” for me, it’s quite a leap of Darwinian unlikelihood that reptiles aren’t also conscious, because birds and mammals are 200 million years apart in evolutionary time. So it would seem a hell of a hard convergent evolution argument to say that they didn’t come from a substrate that was also conscious.

Jeff: Yeah, the point about embodiment is really important, and you could take it in all kinds of different directions. Of course, this is important for assessing consciousness in other animals because they share physical embodiment, and then they share a lot of broad behavioral and functional similarities, a lot of broad brain structure similarities, but of course, not that many animals beyond mammals and birds have the exact structures resembling a cerebral cortex, for instance. So a lot depends on how much weight you give particular brain structures versus particular behavioral profiles. And scientists continue to have disagreement and uncertainty about that. We can talk about where the current state of the science is at, if you would like. But there is an increasingly wide range of scientists who attribute at least a realistic possibility of consciousness to all vertebrates and many invertebrates now – and again, this is not a certainty or even a probability of consciousness, but at least a realistic possibility of consciousness, given their brain structures and/or behavioral profiles and evolutionary histories.

What is interesting on the AI side is to think about whether a similar kind of embodiment could be achieved in silicon as opposed to carbon. So what would it look like for an AI system to realize the relevant kind of embodiment, for views that require the relevant kind of embodiment? Would you, for example, need to be a more sophisticated Roomba that has a physical body and a sensory capacity that allows you to navigate your physical environment and engage in flexible decision making based on positive and negative signals you receive from the environment, with some further elements like that? Or would it be enough to be any kind of system that, broadly speaking, receives inputs and produces outputs and in those ways engages with the environment? This is just another example of how even for a physical embodiment criterion, you can specify it in more fine-grained or coarse-grained ways. And a lot is going to depend on exactly where you land in terms of what the standard is supposed to be.

Jim: Yeah. For instance, people have talked about having an analog to pain sensors in a humanoid robot, if only because it makes it easier for us to reason about how to keep them safe. So if something is going to hurt the robot, the robot quickly learns, don’t put your hand on the stove, and it can do so in a way that we can reason about because we’re used to thinking about that. But again, I keep coming back to the fact that it’s not the same, and no two consciousnesses are the same. Your consciousness and my consciousness are not the same because our brain structures are different. And so the woo-woo people who think there’s a little bottle called consciousness, where we can say, “Oh, this is pure consciousness,” it’s just not right. We’re talking about variety, and any line we draw is going to be arbitrary to some degree.

Jim: Though we can also, to your point, be principled in how we think about it and think about where we draw the line. And we’ll talk later about whether we deal with probabilities or not, or whether we always have to deal with probabilities. Right. So I like to tell people I can’t prove the universe didn’t flip into being two seconds ago with all of our memories in place, and that we won’t flip out of being in two seconds. So I could assign a low probability to that hypothesis, but not zero. We’re always applying some of that anyway, so that’s a good place to get started.

Let’s go on to the next tale that you tell. One of the things I like about this book, which makes it not as pinheaded as many philosophy books, is Jeff uses a lot of fairly homey stories to bring his points home. And for a book aimed at a more or less popular audience, I think that’s really important. So let’s talk about you and your two roommates.

Jeff: Yeah, sure. So this is a thought experiment that I offer early in the book. And obviously a thought experiment is a tool that philosophers use in order to test our intuitions and judgments about different kinds of cases. And we can use that to test our theories and principles and refine them. This is actually an adaptation of a thought experiment that comes from Dale Jamieson in his book Ethics and the Environment. The thought experiment involves you living with a couple of roommates.

We can call them Carmen and Dara. And one day, for fun, you decide to take genetic tests to learn more about your ancestry.

Jim: And?

Jeff: And to your collective surprise – this is surprising for all of you – you get the results back, and it turns out your two roommates are not members of the same species as you. One of them, Carmen, is, it turns out, a Neanderthal. You thought Neanderthals were extinct, but nope. Turns out a small population has survived to this day and coevolved with humans. And now one of them is your roommate. And then the other, Dara, is a Westworld-style robot. You thought maybe this kind of technology could exist in the future, robots that are indistinguishable from humans for all intents and purposes, but it turns out a small population exists right now in beta mode, and one of them is your other roommate. And so now here you are, three people living together with these histories and relationships, and you have to decide how to respond to this surprising new information.

And the question for you is, how, if at all, does this change your moral relationship with your roommates? Now, obviously, you might still experience them as having their own beliefs and desires and projects and relationships and cares and concerns. And you previously thought you have a responsibility to them to treat them with respect and compassion, to consider their interests, to create a sort of fair and just household environment where you all treat each other well. But now that you know that one of your roommates is a Neanderthal and the other roommate is a Westworld-style robot, does that undercut your sense of responsibility to them? Do you now feel liberated? Like if you can get away with it, you have a right to treat them however you want and instrumentalize them for your own purposes? Or do you still feel as though you have a moral responsibility to treat them with respect and compassion and consideration, notwithstanding the fact that one of them is a member of a different species and the other is a being of a different substrate?

So that is the thought experiment, and it is interesting to think about the different types of intuitions and judgments it leads to.

Jim: And you go through a whole series of steps. But at one point you say that it is quite reasonable to think about the Neanderthal and the robot differently.

Jeff: Yeah, the Neanderthal is not that hard a problem to solve because, listen, we have for about 50 years now, at least in analytic philosophy, appreciated that species membership as such is not a morally significant category. We do consider species, of course, because members of different species will tend to have different interests and needs and vulnerabilities. And we have to take those different interests, needs and vulnerabilities into account when deciding how to treat them. So how I should treat you is of course going to be different from how I should treat an elephant and a mouse, because as members of different species, you have very different interests and needs and vulnerabilities. But being a member of the species Homo sapiens is not a necessary condition for having morally significant interests and needs and vulnerabilities in the first place. If you are a Neanderthal, but you have all the same capacities and projects and relationships, then I more or less have all of the same moral responsibilities to you. And so confirming that you still have a responsibility to treat Carmen the Neanderthal with respect and compassion and consideration, that is pretty easy and straightforward to do.

But with Dara, it is a little bit different because Dara is a Westworld-style robot. So this is a silicon-based being with a totally different origin story, a product of science and a totally different material substrate, silicon-based chips. And so here you might reasonably really feel confused about whether Dara is the kind of being you thought, whether it in fact feels like anything at all to be this being, whether there are in fact experiences like pleasure and pain and happiness and suffering and satisfaction and frustration. And so now with this roommate, you really have to move forward in your relationship without knowing for sure whether your sense of shared affection is in fact reciprocated, without knowing for sure whether your attempts to relieve her suffering are in fact having the intended effect.

And so the question here is, if you are genuinely confused or unsure, what should you do? Should you just flip a coin? Should you err on the side of caution? What should that look like?

Jim: Yeah, it’s quite an interesting question. There’s a real world issue coming very quickly which is…

Jeff: Oh, already.

Jim: Which is probably not roommates, but sex bots.

Jeff: Well, some of them might end up being roommates too. Yeah, we already are seeing people forming strong social and emotional bonds with disembodied chatbots, right, with chatbots who exist on their cell phones or laptops. And of course, people are also investing in new robotics technologies. And I think we can expect that in the next 1, 2, 3, 4, 5 years we will not only have increasingly realistic chatbots who can play a variety of social roles for us, but they might also come with faces and voices. These might be either two-dimensional animated characters that talk with us or, eventually, three-dimensional robotic characters who talk to us. And so if people are already forming strong social and emotional bonds and experiencing these AI systems as conscious and sentient and feeling as though they should spend more and more of their lives with these AI systems, imagine how that will be in two or three or four or five years.

We probably will reach a point within the decade where we just are coexisting with this new form of intelligence and feeling confused about whether it feels like anything to be this new form of intelligence, but having to make decisions every day about how to interact with it.

Jim: Yeah, and there’s an area of high controversy that will happen maybe even earlier. In fact, it’s happening right now, and that is children’s AI-powered toys. As it turns out, it’s a lot easier to fool a 3-year-old than it is a 22-year-old as to what this thing seems to be. And at the same time, there’s a growing number of young parents, parents of young children, or in my case, grandparents of young children, who are horrified by the idea of inserting into kids’ lives artificial entities which are at this stage nothing like a consciousness. I mean, again, a fairly rudimentary LLM will fool a three-year-old or most four-year-olds, yet the kid’s going to treat it like they treat their stuffed animal, like a real entity. That’s a very interesting question.

Jeff: Oh yeah. I think we might see a lot of the same social debates about autonomy and paternalism and what kinds of responsibilities we have for the education and development of young people. So obviously there are versions of this with TV and movies and video games. At what age is it appropriate to expose kids to either ideas or scenarios that are challenging or violent? And that will of course happen here too, because if children are able to interact with chatbots or robots, then they could likewise end up either having conversations about some of those topics or engaging in role-play scenarios involving some of those topics. And so there will probably be the same kinds of calls to regulate and restrict how different people at different developmental stages can interact with AI systems. Given that many of the same risks are going to come up here and maybe even be amplified, maybe even be riskier than with, for example, TV and movies and video games, which we basically…

Jim: Just used eyewash on, right? Any kid knows how to watch any TV show, they know how to get to hardcore triple penetration porn on the Internet and everything else. So unless we change our level of seriousness about these things, we’re just going to have to assume they’re going to happen.

Jeff: And this is the tip of the iceberg. Of course, there are far graver risks associated with racing towards the development of increasingly powerful AI systems and then integrating them into our lives to such a point where children can access them about as easily as they can access porn on the Internet. If we reach that point in the next five or 10 years, then the fact that kids might have access to this kind of material before they really should is honestly going to be the least of our problems. The prospect of developing novel pathogens and so on and so forth is going to be probably a higher priority at that point.

Jim: Although, I don’t know, programming the minds of the young is the dream of every dictator.

Jeff: Oh yeah, no, this is a real problem. And it is only because of the gravity of the other problems that we will be confronting that this one would, relatively speaking, not be the top priority.

Jim: Yeah, and when I was reading that section, I did actually have that thought, because my daughter and her husband are very conscious about exposing their kids to the infosphere in a principled fashion. It works fine when the kids are four, but we’ll see how it goes. I know they’re committed to doing it all the way through, but it’s going to be a challenge. And one of the perverse thoughts I had… I’m a good Red teamer. If you want somebody to figure out how to screw your protection, hire me. I actually even did it for one of the three-letter agencies once upon a time. And I scared them so badly they swore me to secrecy – even though it was an open source exercise – to never disclose the red team thing.

Jeff: Well, good job then.

Jim: Just scared the fuck out of them. So anyway, the thought that came up, of the Red Team variety, is: imagine – and we could do this right now, I could build this myself and I’m an old retired guy, right? – I could create a bot for 3, 4, 5 year olds that was a masterful propaganda machine. Not only was it fun, but it was also pushing – take your choice – hardcore Christian ideology, hardcore atheism, wokery, Marxist-Leninism, Nazism, race hatred, or the flip sides of those things, right? And parents, many of them unfortunately, don’t want to raise open-minded kids. They want to propagate their own way of thinking. It scares me to think it could be available on Amazon, that you could basically dial in the plaything that the kids will have fun with. Designed first and foremost to be fun, but with a very strong hidden agenda of propaganda.

Jeff: Right, and this is where it will especially relate to debates about autonomy and paternalism. This is not only a concern about children, this is also a concern about everybody. We will all now be living in an informational ecosystem that includes the outputs of AI systems designed both to entertain and to persuade. And so there will be a much more polluted informational environment and a much more fragmented informational environment. And if we think about how much the discourse has already deteriorated because of the Internet and social media sites and all these different little pockets of activity, it will be at risk of deteriorating much more if there are highly powerful, highly capable AI systems who are optimized for promoting certain types of ideologies and really flooding the airwaves with those ideologies.

Jim: Regular listeners will know I refer to that as the flood of sludge. And the sludge is rising week by week. Let’s get back to the main themes of the book here. In the chapter with the roommates, you begin the calculus of probability. You then extend it in this ongoing series of stories which pop up, I think, throughout the rest of the book, or at least through a bunch of the book. And that’s David and his factory. Why don’t you tell us about that a little bit and riff on how one might use probabilistic thinking to think about things like this.

Jeff: Yeah, so the story with the robot roommate and the Neanderthal roommate, but more so the robot roommate, is really about how to think about who matters in a productive way when you have disagreement or uncertainty. But then there are all kinds of other cases where you also have to make decisions about how to treat everyone who matters in situations involving disagreement and uncertainty. And so David in the Lake is a case designed to get at those issues. So this is again a simple thought experiment involving a factory manager named David. And he needs to dispose of waste produced by his factory. And he could spend a little bit of money to treat the waste at the factory and dispose of it in an environmentally responsible way, but it would be easier and cheaper just to dump it in a nearby lake or nearby series of lakes. And if he does that, then the challenge is there is a risk that he will harm or kill local wildlife who live in or depend on that lake.

Jeff: And at least in simple versions of the case, it seems pretty straightforward that there is something morally objectionable about dumping your waste in a lake in a way that imposes a risk on mammals and birds and other animals who live in the lake. Because even if you might not be certain that the waste is going to harm and kill these animals, certainty is not a requirement for you to have a responsibility to find another way forward. If there is a realistic, non-negligible, non-trivial chance that the waste is going to harm or kill vulnerable animals unnecessarily against their will, and if you do have this other option – just spending a modest amount of money to treat the waste at the factory and dispose of it in an environmentally sustainable way – then it is wrong to impose that risk on these animals unnecessarily. And so that case is just a reminder that we generally agree that we should think about risk when making decisions about our actions and policies.

Now, where it gets more interesting and more difficult is when different kinds of risks collide. So you can also imagine versions of this case where instead of mammals and birds living in the lake, you have, for example, insects living in the lake, or just a very large number of microscopic organisms or plants or fungi living in or near the lake. And here you might have uncertainty both about whether these beings have interests and whether they matter morally, and about whether this action is going to harm or kill them, even if they do matter.

And so where the ethical question gets hard is how you should treat individuals in situations when you feel unsure about both of those questions at once. Do they matter? Do they have interests? And how is your action going to affect them? What does it mean to behave responsibly in those types of situations?

Jim: Yeah, that was very interesting. And while I could see the logical argument, I could also think of some very practical pushbacks, which is that if we assume that we’re going to have a society of people doing this kind of calculus, the first is just the plain old ability to do these calculations. As I like to point out, 50% of Americans hold fairly substantial credit card debt at 18-22% interest. There’s absolute solid proof that half of Americans can’t do the most rudimentary arithmetic. And so you have just pure ability.

Next you have allocation of a constrained resource, which is your ability to pay attention, your ability to do calculations, etc. So if I’m going to calculate my moral obligation to a paramecium in an otherwise fetid stinky lake and then figure out the relationship between my obligations to a paramecium versus an amoeba and a bacteria, I suspect I have far better things to do with my very, very limited capacity.

One of the other parts of my Ruttian view of consciousness is that consciousness is quantized at about 25 milliseconds, and you only get X clicks in your life. When they’re done, they’re done. And if you allocate some number of those clicks to wrestling with the mathematics of amoebas versus paramecia versus bacteria, then there’s a whole lot of much more valuable things you could do. So the point is, yeah, that would work if we had infinite computation, but we don’t. And it caused me to produce a counter-approach – and you can tell me how this is wrong, because I’m sure it’s wrong, but I didn’t bother thinking through how it’s wrong. But I said, alright, let me go anti-Jeff.

Let me come up with the simplest non-computational approach that’s a universal solvent on all these problems. It basically has two parts. Part A is: let’s just use our natural human nature, empathy. Whatever I, as a human, and of course that’s cultural conditioning, have empathy with, I will treat well; those things I don’t, I won’t.

So, for instance, there’s no good reason that I should treat a gerbil better than a naked mole rat. But humans will, for some known reasons, prefer and empathize more with a gerbil than they will with a naked mole rat. And while there’s no principled reason to say that should be the case, the math is too complicated. So let’s just go with our personal empathetic reactions and do some simple calculations, but be driven only by how we feel.

And we can then say, I don’t give two shits about amoebas, paramecium, or bacteria except in their utilitarian context that if we kill them off, we’re in a world of hurt. But I don’t feel any moral obligation above and beyond the utilitarian one. But I do a fair bit for dogs, a bit for gerbils. Naked mole rats – they make me want to vomit. So hell with them.

What’s wrong with that? Because it’s very computationally efficient. It’s universal, applied to absolutely anything. Do I feel empathy for it at the bodily level or not?

Jeff: Yeah. Good. So I think that that is actually close to what we should do, but not, in my view, exactly what we should do. So maybe a two-part response. Part one is that approach has more downsides than your description suggested. And then the other part is the alternative approach has fewer downsides than your description suggested and I think is better, all things considered.

So the approach that you suggested, that we rely on our empathy and then maybe protect other types of beings for kind of instrumental, utilitarian reasons – the downside of that is, of course, and this is not going to be a shock to you or anybody, our sense of empathy is limited and subject to all kinds of bias and ignorance and other problems.

So even within our own species, for example, we have a greater ability to empathize with individuals who are like us or who are near us or who we know personally. Even with humans who are in other nations or future generations, they start to feel abstract. Our sense of empathy fades. And especially when you start to consider other species, our sense of empathy is highly susceptible to observable features that, when we reflect, we think are just completely arbitrary and neither here nor there with respect to whether these beings are conscious, sentient, vulnerable. So, for example, we tend to empathize much more with beings who have larger bodies, larger heads, larger eyes, symmetrical features, furry skin instead of scaly skin, four limbs instead of six or eight limbs. We also have a really hard time empathizing with distant animals and, not surprisingly, with large numbers of animals. We have scope insensitivity, an inability to sort of vividly imagine the significance of large numbers.

Jeff: And of course, our sense of empathy is socially bounded too. We empathize much more with animals when we classify them as companions, and much less when we classify them as commodities. So we empathize much more with cats and dogs, who are domesticated animals and pets, than we do with cows and pigs who are domesticated animals and farmed animals, even though cognitively, behaviorally and otherwise, there might not be many differences to justify those vast discrepancies in our sense of empathy. And we can project outward and imagine that similar limitations will apply for especially even more different kinds of beings, like microscopic organisms or AI systems. And so this suggests that there should be some kind of intervention, some ability to compensate for, correct for, supplement our natural empathy so that we can have a little bit more of a well-rounded approach.

But then, with that said, and this is the second half of my response, we do have those tools available to us. So in ethical theory, for example, some ethical theories, like utilitarianism and rights theory, are all about consciously and deliberately applying principles to our actions and policies so that we can make informed and rational and moral decisions. But others, like virtue ethics or feminist care ethics, are much more about recognizing the limitations of logic and reason and decision making, and trying to cultivate virtuous states of character that can naturally guide us, or social and environmental conditions that bring out the best in us, in situations where we might not be able to think about everything.

And so what I would like to see is similar to what you suggest, which is that we do lean on things other than our own deliberation and decision making in order to treat these beings well, but not just our own sense of empathy as it currently exists. Instead, we sort of actively, as Aristotle would say, cultivate habits and states of character that can go beyond our current sense of empathy. And then, as the care ethicists say, we also think about how our relationships, our social environments, our legal, political, economic environments, our physical infrastructure, how all of that pulls certain behaviors out of us. And we try to build external environments, too, that can incentivize us to naturally engage with these beings in a more compassionate way.

Jim: Yeah, it’s interesting. One of my other fields is complexity science, essentially the mathematical aspects of complexity. And one of the things you very quickly learn, as I did when I dug into this after retiring from business, is that you can’t calculate shit very far into a complex system.

Jeff: Right?

Jim: Talk about epistemic humility. Do a little work in evolutionary computing and agent-based modeling and you basically realize, goddamn, even a simple toy system will do all kinds of shit you have no idea about. And so, even though I am, as is well known to all my listeners, a militant atheist, I’ve also lately become quite a fan of using the concept of the sacred to bound off things that we, by intuition or for analytical reasons, believe are important, but understand are way too complicated, too complex, shall we say, for us to intervene in willy-nilly, because the consequences are high, our knowledge is low, and we need to proceed with great caution. The example I give is, I suppose, idiosyncratic – I wish everybody had the same idiosyncrasy – which is that I understand how far evolution has come since LUCA, the last universal common ancestor. And our biodiversity is a thing of huge value, at least to me, and I hope, to everybody.

Jim: And if we treat the idea of biodiversity itself as sacred in everything we do, if we approach biodiversity as a thing that we must always aim to maintain or improve and never knowingly degrade, we would do a much better job in our relationship with the natural world. Not to say that we wouldn’t accidentally degrade it sometimes because of our limited ability to deal with complex systems, but it would provide a relatively low bandwidth way to couple to a very high bandwidth question. And it’s now pretty obvious to me that traditional belief systems probably serve similar purposes.

Jeff: Yeah, so just two replies there. I completely agree with you about everything that you said. And one of the main themes of this book actually, and of my previous book, which is called “Saving Animals, Saving Ourselves,” is that these are areas where we have to recognize the importance and the difficulty of these issues in equal measure. These are really urgent and important questions because, for example, we are interacting with quadrillions of non-humans per year, whether we like it or not, so we have no opportunity to just pause our interactions with them for five decades while we make progress in our understanding of these issues.

But then, on the other hand, as you say, we have significant limitations not only in our knowledge, but also in our capacity and our political will. Not only do we not ultimately know exactly how our actions and policies are affecting all of these beings in the aggregate, but we have very limited infrastructure, institutions for acting on the knowledge we have. We have very little motivation, political will for acting with the institutions and infrastructure that we do have. And so we kind of have to find a way forward that honors the importance and the urgency, but then also honors the difficulty and complexity, instead of using the importance as a way to dismiss the difficulty, or using the difficulty as a way to dismiss the importance.

And that is hard. It kind of involves some cognitive dissonance. But then to your point, we can start by asking, okay, are there proxies or heuristics that we can use to at least do a little bit better while we try to get our shit together and figure this out and find a way to interact with all these beings more responsibly? And as you say, maybe indigenous belief systems or other belief systems that focus on rights of nature or on the sacredness of the ecological whole? Maybe those are technically, theoretically right, maybe those are technically, theoretically wrong. But they could still be of value even if we thought they were wrong, because they would at least guide us towards a certain kind of soft touch, a certain kind of protective, precautionary approach to our interactions with the non-human world or the more-than-human world, while we try to better understand the exact profile of benefits and harms we might be imposing on individuals in those populations. And I do think that that would be a significant step in the right direction, even if it might not be all that we ultimately need to do.

Jim: You know, it actually reminds me of something I had later in my notes, but I might as well bring it up now. One critique I had of the book, and you do actually address it in part late in the book, is that most of it is what I would call highly atomistic toward individuals on both sides, what you called the moral agent and the moral patient, as I recall. Very atomic, very individualistic, very Western in that regard. And it may well be that it’s missing the holistic values of systems. For instance, species come and go. The average lifespan of a species, even of relatively large animals, is something like 2 million years, if I have the right figure.

Jim: Humans, or Homo sapiens, have been around for 300,000 years. Something that we more or less think of as human, Homo erectus, maybe a million two, something like that. And so species come and go. But what’s important, one could argue, at least on this view of the sacred, is the generative power of an intact ecosystem and food webs that don’t collapse. And so that might be the actual figure of merit for our moral calculus, or for our moral intuitions if we’re using non-calculated but intuitive approaches. And then secondly, on the flip side, on the moral agent side, and you do get to this later in the book, the kind of society that we live in, where we are biased towards care, is fundamentally different from one where the individuals are biased towards Homo economicus, towards simple-minded exploitation and hedonism.

So maybe if you could address sort of the tension between an individual patient agent analysis and a more holistic look on both sides of that equation.

Jeff: Yeah, good. I think that is a reasonable critique of the book. And in fact, when I was first writing the book, I tried to take a more comprehensive approach to all of the views of moral status and who the moral agents and patients are. But it was hard to do all of that in under 120 or however many pages the book is. And so I considered a smaller number of views so that I could discuss them with clarity and focus.

But ultimately I do think that the best way forward is pluralistic. And that means finding a shared framework that we can use to make decisions together, especially when we need to be making collective decisions in politics or building structures together through politics, a framework that can incorporate insights from a lot of leading traditions in ethics and religion.

And so, ideally, that would include both individualist views and more collectivist views. Partly because if we are appropriately cautious and humble, we recognize, hey, I might be wrong and other views might be right, or they might have something to them, but also because we recognize that we need to be making decisions together in the context of not only uncertainty, but also disagreement. And that can allow for a certain type of compromise and coalition building. And so, for all those reasons, I do like the idea in a full picture of including both individualist and collectivist views.

Now, of course, there can be another reason for including collectivist views, and you are gesturing at this as well, which is that even if you deny that ecological wholes or other collectives like species and ecosystems have intrinsic moral value, you can still affirm that they have a profound amount of relational and instrumental value. That ultimately individuals rely on well-functioning species and ecosystems and biodiversity in order to have good individual lives. And for that reason as well, we ought to assign a lot of value to those entities, even if we think ultimately individuals are what matter intrinsically, what can be benefited and harmed for their own sakes.

Jeff: And in this respect, ethics is not all that much different from science, right? Like in science, we make a distinction between these atomic entities that are in some sense most fundamentally real, like the particles or waves, and then the broader systems and structures and other entities that we interact with in our everyday lives that are built out of those atomic individuals, like cars and trucks and so on. So I can at one and the same time recognize in science that what is most fundamentally real in some sense is like particles or waves. But when I walk across the street, I am not thinking in terms of particles and waves. I am thinking in terms of like, dodging cars and trucks, right? And I think ethics can be similar. Even if at the level of theory we recognize that what is most fundamentally valuable is something like the individual, like a sentient human or animal, in many decision-making contexts it might make more sense to think about how to protect species and ecosystems and biodiversity and environmental wholes, even if we are ultimately doing that for the sake of the individuals.

Jim: Yeah, that’s good. I would just add one thing, a Ruttian view. Nobody else has to believe this, but it’s something I believe: that the generative capacity of our ecosystems, our evolutionary generative capacity, is the most important thing. We came out of it. What else will come out of it? Let it cook for another 50 million years, right?

Jeff: Absolutely.

Jim: What does a raccoon or a bear that’s evolved for another 50 million years look like? To fuck that up would be a really bad thing. And that’s something I just believe very strongly. Now, this has been deep, big stuff, so let’s go back down to the very tangible. And this is a story that at least landed for me a bit better than David and his factory, which eventually got pretty complicated and pretty murky. The animal rescue triage question was, I would say, a much more tractable question that people can get their hands around a little bit. Let’s tell that story.

Jeff: Sure. So imagine that you are running an animal rescue and obviously you have finite resources, finite infrastructure, but there are a lot of animals in need. And so sometimes you have to engage in triage, make priority-setting decisions. And so one day, for example, you have, say, 10 elephants come in who really need to be rescued and rehabilitated. But then you also have 10 million ants come in who need to be rescued and rehabilitated. And suppose, for the sake of simplicity, obviously this is not true to life, but suppose for the sake of simplicity, that you can either fully save all 10 elephants or fully save all 10 million ants, but not do both. You have the exact number of resources to do one or the other, but not both.

And we can also assume, for the sake of simplicity, that there are no other indirect effects to consider or any other factors to consider. You are really just considering the stakes for each population. And what makes this case interesting is it requires us to compare the intrinsic value not only across lives, but also across populations. So when it comes to individual lives, should we say that everyone who matters matters equally? Or should we say that some individuals matter more than others? And if so, in virtue of what? Is it something like cognitive complexity and longevity, such that the elephant matters more than the ant because they have more neurons and longer lifespans, and maybe they can have more intense and prolonged pleasure and pain and happiness and suffering?

Jeff: Okay, fair enough. But now, at the population level, should we also say that a larger number of individuals matters more, has greater stakes than a smaller number of individuals? If we want to say both of those things at the same time, and if we put them together, then are we prepared to accept this possibility? That even if an individual elephant matters more than an individual ant, like, has more intrinsic value, more at stake in her life, is it possible that a sufficiently large number of ants can collectively, in the aggregate, matter more and have more at stake than a sufficiently small number of elephants? So if the ant population gets large enough, is there a point where your priorities need to shift over and you need to prioritize saving the large number of small beings instead of the small number of large beings?
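To see the arithmetic behind the crossover point Jeff describes, here is a minimal sketch in Python. The per-individual welfare weights are invented placeholders for illustration only, not estimates from the book or from any scientific source.

```python
# Hypothetical illustration of the elephant/ant aggregation question above.
# The per-individual weights are made-up placeholders, not figures from the book.

ELEPHANT_WEIGHT = 1000.0   # assumed moral weight of one elephant (arbitrary units)
ANT_WEIGHT = 0.001         # assumed moral weight of one ant (arbitrary units)

def total_stakes(weight_per_individual: float, population: int) -> float:
    """Aggregate stakes for a population: per-individual weight times head count."""
    return weight_per_individual * population

elephants = total_stakes(ELEPHANT_WEIGHT, 10)        # 10 elephants -> 10,000 units
ants = total_stakes(ANT_WEIGHT, 10_000_000)          # 10 million ants -> 10,000 units

print(elephants, ants)  # with these invented weights, the two populations exactly tie
# Under these assumptions, any ant population above 10 million would outweigh the
# 10 elephants, which is the shift in priorities Jeff is asking about.
```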

Jim: Yeah, not necessarily entirely theoretical, because as you point out, we are just beginning to do insect farming at a truly massive scale. And of course, we’ve been doing insect genocide with pesticides for more than 100 years. So if we’ve been sinning against insects, we’ve been sinning a shitload, and we’ll be doing so even more in the future until we wipe them all out. So now let’s go into the actual calculus. You actually come up with a proposed number for the level at which we should care. Let’s see if I get this right, do I remember all this? One is the probability that something matters, how much it matters to that thing, and what the value of its mattering is. I think those were the three dimensions. That’s pretty close.

Anyway, give your dimensions for doing the calculus and then what is your bottom line? Which I like that you were bold enough to say, alright, we do all this math and then boom, here’s the answer, which is of course totally arbitrary, but that’s okay.

Jeff: Of course, of course. And to be clear, I am not really in the business of offering answers in this book, but what I am trying to do is offer parameters for our exploration about answers. And so what you are referencing is this general debate that we have across policy domains about how to factor risk and uncertainty into high-stakes decision making. And there are these different tools or frameworks that people use to think about risk and uncertainty. As somewhat of a simplification, we can consider two for the sake of discussion. One is a precautionary principle. And the idea here is that when in doubt about whether your action will cause harm, err on the side of caution and assume that it will for the sake of making your decision.

And then the other is a kind of risk-benefit analysis or expected value reasoning where, no, you do something a little bit more sophisticated. You multiply the probability that your action will cause harm by the magnitude of harm that it would cause, and then you treat the product of that equation as the expected result of your action. For all intents and purposes, these are two standard ways of factoring risk and uncertainty into decision making. So all I point out here is, first of all, that both of these tools can be adapted to this context. The precautionary principle would basically imply that when in doubt about whether an individual or entity is sentient or otherwise morally significant, err on the side of caution and treat them as though they are for purposes of making decisions that affect them. And then the second would be, when in doubt about whether an individual or entity is sentient or otherwise significant, multiply the probability that they are morally significant by the amount of moral significance that they would have if they were, and then treat the product of that equation as the amount of moral significance that they actually have. Now, these approaches have the same pros and cons as in any policy context.
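To make the two decision rules concrete, here is a hedged sketch of how each would assign significance to an uncertain case; the probability and magnitude numbers are placeholders, not values proposed in the book.

```python
# Two standard ways of factoring uncertainty about moral significance into a decision,
# as described above. All numbers are illustrative placeholders.

def precautionary(prob_significant: float, magnitude_if_significant: float) -> float:
    """Precautionary principle: when in doubt, err on the side of caution and
    treat the being as if it were fully morally significant."""
    return magnitude_if_significant if prob_significant > 0 else 0.0

def expected_value(prob_significant: float, magnitude_if_significant: float) -> float:
    """Expected value reasoning: weight the magnitude by the probability."""
    return prob_significant * magnitude_if_significant

# Example: a being with a 1-in-1,000 chance of mattering, and a magnitude of 1.0
# (in arbitrary units) if it does matter.
print(precautionary(0.001, 1.0))   # 1.0   -> full consideration
print(expected_value(0.001, 1.0))  # 0.001 -> a small but non-zero amount
```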

Jeff: And we can note those without really decisively taking a side right now. But when it comes to a kind of baseline for any form of consideration at all, what I point out is that experts disagree about how low a risk can get before we can kind of round down to zero and not consider it at all. Some experts think we should give at least a little bit of consideration to any non-zero risk, even if that means giving only a very tiny amount of consideration to very tiny risks. Other experts have a risk threshold – like they say, once a risk is below 1 in 10,000 or 1 in 10 million or 1 in 10 billion, then I can stop considering it entirely.

But here is where all parties agree, and this is precautionary people, expected value people, all non-zero risk people versus risk threshold people – all parties to these debates agree that if there is at least a one in a thousand risk of grave harm, that merits at least a little bit of consideration. And so all I do here is say, fine, how about I stipulate that if there is at least a one in a thousand chance of moral significance in a being, then we’ve got to give that being at least a little bit of consideration, in the spirit of caution and humility. Now this is not an answer, but it is a parameter. It shows that all reasonable roads lead to at least that threshold. We can debate whether the threshold should be lower, but no reasonable party to these debates can deny that we should give at least a little bit of consideration to a being once they cross that one in a thousand threshold. And as I say in the book, oh man, once we realize that, it turns out, if we are appropriately cautious and humble about the ethics and the science, a ton of beings meet that one in a thousand threshold.
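As a rough sketch of how that stipulated one-in-a-thousand threshold would work as a filter, consider the following; the candidate beings and their probability estimates are invented for illustration and are not taken from the book or from any scientific survey.

```python
# Applying the stipulated 1-in-1,000 threshold as a simple filter.
# The beings and probabilities below are invented examples, not real estimates.

THRESHOLD = 1 / 1000

candidates = {
    "chimpanzee": 0.95,
    "octopus": 0.80,
    "honeybee": 0.10,
    "nematode": 0.005,
    "thermostat": 0.000001,
}

merits_consideration = {name: p for name, p in candidates.items() if p >= THRESHOLD}
print(merits_consideration)
# Everything except the thermostat clears the bar in this toy example, which
# illustrates the point that a very wide range of beings can cross the threshold.
```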

Jim: Yep, then we get back to the problem of feasibility, computational constraints, trade-offs, et cetera. Just one little point I’m going to make here: one of the things we do know from lab psychology is that humans are terrible at reasoning about probabilities lower than about 2%. So that actually puts a bound on our practical ability to deploy a threshold like one in a thousand.

Jeff: Yes, and this goes back to the conversation we were having earlier about what actually follows for how we should live our everyday lives. It is not an implication of any of these arguments that we should go about performing the mental action of considering these very small probabilities of very large impacts all the time in everyday life. If there are a lot of beings with at least a one in a thousand chance of mattering, given the best information and arguments currently available, what would follow is that they merit consideration in one way or another. But then we can think together as a society about how to discharge that responsibility. And maybe sometimes in really high stakes scenarios, we should stop and try to do some math. But other times we might just rely on the habits and character traits that we cultivated for interacting with these beings, or the social and environmental structures that we built for treating them better without having to actually think about it all the time.

Jim: Now, in the last chapter, you move a bit away from the sort of hardcore analytical number crunching and you talk about something more subjective, a discussion of this pattern, which has been ubiquitous in human history, of human exceptionalism. Talk about what you mean by that and give some examples of where you think that that leads us astray.

Jeff: Yeah, human exceptionalism can mean a lot of different things, as with many of these concepts. For my purposes in the book, I stipulate that I am using human exceptionalism to mean a presumption that humans always matter most and always take priority, both individually and collectively. I think many people, even who are relatively open to animal welfare or animal rights or non-human welfare, non-human rights – many people think we should give animals or non-humans more consideration. But at the end of the day, humans come first, and so we should not give non-humans more consideration in ways that would involve any sacrifices for our species or any compromises for our species.

Part of what I argue in this culminating chapter in the book is that when we really take seriously how many non-humans might matter and how much they might matter and what they might need and what we might owe them, and how our actions and policies are affecting them in the Anthropocene, in a world reshaped by human activity, it becomes really difficult to sustain a strong stance of human exceptionalism, this kind of strong insistence that we always take priority no matter what.

I do, for the record, think that we have good reason, at least in the short term over the next century, to prioritize ourselves and improve human lives and societies, and support our own education and development as a species. But I also think we have a responsibility to prioritize non-humans much more. And as we mature as a species, as we become more educated and more developed and more capable, we might find that we have a greater ability to support non-humans at higher levels and therefore that we have a greater responsibility to do that as well. And we can even imagine great futures where we have enough capacity that we could actually achieve and sustain a level of support for non-humans where we are giving over a majority of our collective resources to them. And if we really could one day achieve and sustain that – that is not anytime soon – but if we could one day achieve and sustain that, then I think at that point we really would have a collective responsibility to genuinely prioritize them over ourselves. And that might be surprising or provocative, but I think that would be a wonderful victory for our species. And so I try to frame this idea that we could at some point have a responsibility to prioritize non-humans as a sort of positive for our species. That it would be amazing to reach that point in our maturation.

Jim: Yeah, and I love that. And indeed there are some people thinking about that. It combines two concepts. One is called fully automated luxury communism, which says that within the lifespan of kids today, drudgery may be gone – AIs, robots, etc. Very, very efficient agricultural systems. Throw in a little bit of population drop, that would help, and we may be able to get everything we reasonably want and need out of maybe half the Earth. And then the other idea is rewilding, which is to give 50% of the Earth back to Mother Nature. And I don’t mean like a national park, I mean humans keep the fuck out, right? An occasional scientific expedition might go in there, but mostly we do research by remote monitoring. If the animals die, they die. We don’t try to run in and give them tetanus shots or something, so that we can actually let this amazing 3.5 billion year experiment in emergence and evolution continue in an unmolested fashion. I think personally that’s an amazing goal for the human race to get to by the end of the 21st century. So I think we’re very much in congruence on that one.

Jeff: Interesting. Yeah, I definitely think it is interesting to think about what happens when you put those two visions together. For example, does it mean that we create this sort of AI-assisted utopia for the 3.5 billion humans and then we leave all of the rest of nature to wild animals and force them to continue living in a state of nature where life is solitary, poor, nasty, brutish and short? Or does it mean that now that we live in a world, and they also live in a world, for better or worse, reshaped by human activity and climate change, maybe we should offer some version of this AI-assisted utopia to other animals as well and really have the tetanus shots ready – little drones with little tetanus shots. Now, obviously this is a simplification and there are a lot of complexities and a lot of cautionary tales about too much intervention to consider. But a question to close this part of the conversation with, maybe, is: will there be ongoing positive responsibilities of assistance to wild animals, if not just because of their predicament, then also because of our complicity in their predicament? Maybe there are. If we are capable of doing it responsibly, ethically and effectively, maybe there will be some ongoing responsibility to help their lives be a little bit better.

Jim: And that, I would certainly say, for a while, will be a duty. Regenerative ecology. We’ve done a shitload of damage, right? So we have, I would argue, a moral obligation to repair as much as we can.

Jeff: Yeah.

Jim: The final question is kind of an interesting one. Once we have repaired enough of the damage, should we just let nature do its thing and not molest it at all? Or, as my friend and very good thinker Tyson Yunkaporta, who’s an Aboriginal fellow from the north of Australia, suggests – the vision of his people is that humans should become a custodial species and should allow all life to live large. I don’t know. I think that’s an interesting question for humanity to wrestle with over the next hundred years. I think we’ve covered some amazing ground. I think you brought the book to life. Hope it encourages some people to go out and read “The Moral Circle” by Jeff Sebo.

Jeff: Thank you very much. Obviously, always more to say and happy to talk more anytime, but I agree we covered a lot of ground and it was a great conversation. So thanks so much for having me on. I really appreciate it.