Transcript of Episode 100 – Sam Bowles on Our Cooperative Nature

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Sam Bowles. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Sam Bowles, research professor at the Santa Fe Institute and professor emeritus at the University of Massachusetts. Sam’s an economist by trade, but he’s pretty much a polymath. He’s published in biology, anthropology, psychology, philosophy, and more, definitely a broad thinker. Welcome, Sam.

Sam: Thanks a lot, Jim. It’s great to be on your show.

Jim: Yeah, it’s great to catch up again. Sam and I overlapped a bit at the Santa Fe Institute back in the double aughts, and it’s great to reconnect. Today, we’re mostly going to focus on his book, A Cooperative Species: Human Reciprocity and Its Evolution, which he co-authored with Herb Gintis. Let’s start with the really big picture. What do you mean when you say that Homo sapiens is a cooperative species?

Sam: Well, what I mean is that people really enjoy cooperating. Human beings, compared to other animals, are very, very good at it. We cooperate in very large numbers with total strangers to achieve common ends. Sometimes we do this consciously, as for example when we go to a demonstration or work together on a common project. Sometimes we do it unwittingly, for example when we exchange goods in a market to our mutual benefit. We’re just excellent cooperators. The only other species that cooperate on the scale that humans do are the social insects.

Sam: And I’d like to draw a really big distinction between us and them. They cooperate for entirely genetic reasons. We have the capacities that allow us to cooperate, but we flexibly cooperate. In other words, we can look at a situation and say, “Oh, we need to have a cooperative approach to this.” In other situations, we go the other way. We say, “No, let’s compete over this. Things will work out better with the competition.” So because of our cultural and cognitive capacities, we have this added advantage beyond the ants, and bees, and other social insects. And we can modulate our cooperation as the situation changes. And that’s why humans became the dominant species on earth.

Jim: Yeah, it’s quite interesting. Of course some of our mammal relatives also cooperate a little bit. Chimps cooperate in hunts and in patrolling the boundaries of their territories. Lions cooperate on the hunt, for instance. A lot of animals cooperate in grooming behaviors and other kinds of reciprocal transactions, but certainly humans cooperate at a scale that dwarfs all of those. I like to sometimes give the example that, nope, chimps aren’t going to create a Boeing 787 anytime soon. Think of how many years, people, and distances go into that cooperation. Or another example I like to give is the LIGO [inaudible 00:02:44] gravitational wave experiments, which took 30 years and happened in three different countries, et cetera. Or even science itself, a cooperative engine that’s now 300 years old. It is indeed probably the superpower of humanity.

Sam: That’s right. I couldn’t agree more.

Jim: Yeah. Now let’s define a couple of terms that we’re going to use a fair amount here that may not be something that’s in the tool kit of all of our listeners. The first is the idea of a free rider, what’s a free rider?

Sam: Well, a free rider is a person who profits or benefits from someone else’s cooperation or someone else’s generosity without contributing himself or herself. So for example, imagine a couple of students working on a biology project. They’re going to get a common grade, and one does all the work and the other one doesn’t. Well, the one who didn’t work is called a free rider.

Jim: Yep. This will come back up again and again in our discussion today. And the other is a little bit more specialist, but discount rates. What’s a discount rate?

Sam: A discount rate is literally the extent to which you value something today more than you would value the same thing at some distant time period. So for example, you would ask somebody, “How much would you pay for this meal if you could have it right now?” And the alternative would be, “How much would you pay for that very same meal if you had a firm promise you could get it, but a week later?” Now, people typically are much more willing to pay for something now than they are later, and there are good reasons for that. But the rate at which we discount the future, that is, how much less valuable that is to us, is what’s called the discount rate. And that discount rate is a subjective thing. It’s how we feel about the future, but it also has biological and other roots that are not [inaudible 00:04:35] simply [inaudible 00:04:36].
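
A minimal numerical sketch of the discounting idea Sam describes, with made-up numbers (the 5%-per-week rate and the meal prices are purely illustrative, not from the episode):

```python
# Toy illustration of a discount rate (hypothetical numbers).
# With a per-period discount rate r, a reward received t periods from now
# is worth reward / (1 + r)**t in today's terms.

def present_value(reward, r, t):
    """Value today of `reward` received t periods from now, discounted at rate r."""
    return reward / (1 + r) ** t

# Someone who would pay $10 for a meal now, but only about $9.52 for a firm
# promise of the same meal one week later, is discounting at roughly 5% per week.
print(present_value(10.0, 0.05, 1))  # ~9.52
```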

Jim: Yeah. And of course it varies by person and circumstance. A person on the verge of starvation is going to have a very high discount rate for a cheeseburger now versus a cheeseburger a week from now. But an overfed American may have a very different discount rate.

Sam: Yeah. And the evidence on that is really astounding. Most animals have extremely high discount rates, meaning the stuff now is really important and the stuff down the road is not so important. Now part of that is because the likelihood that they’re going to be around later is less than it is with humans, because most animals don’t have the longevity of humans. So obviously one reason why you might discount the future is you’re not going to be there or you don’t think you’re going to be there. And the other reason is you’re just myopic. You can’t imagine how much you’re going to enjoy that meal in the future because of our incapacity to think of things in a visceral way, if they’re removed from us either by distance, spatial distance, or over time.

Jim: Yeah. It’s also possible, and the cognitive science of temporality is an area I actually do some work in, that it comes down to the sense of time. Humans have a pretty good sense of time. Even preliterate, non-mathematical humans have a fairly decent analog sense of time. It’s not so clear how much time sense other animals have. It may be something that came pretty late in the day in evolution. So they literally may not be able to make anything like a realistic comparison, or the comparison is overwhelmed by the current signal and the future is essentially meaningless, because they don’t have the machinery for temporality. But anyway, let’s hop in now to the material we were going to cover from the book. One of the things that I’ve always found very interesting is that while we talked about Boeing 787s, and the LIGO gravitational wave experiment, and science, all of which are artifacts of our modern world, the nation state, [inaudible 00:06:25], et cetera, cooperation goes all the way back as far as we know. What can you tell us about how cooperation worked in the pre-chiefdom era, during the hunter-gatherer or forager period?

Sam: Well, we know, or at least we can guess intelligently, quite a bit about that. If you find evidence that a species, it could be human beings, it could be Neanderthals, it could be any species, is engaged in hunting large packages of meat, other large animals for example, it’s a pretty good bet that the hunters are cooperating. And the interesting thing about that is it’s not primarily because you have to hunt them in large numbers. It’s that when you get one, it’s a lot of calories and you don’t have a fridge. So you have to have some way of sharing this stuff around.

Sam: So the economics of hunter-gatherer survival is that males often hunt large animals, or other large packages like large collections of honey, and they rarely find them. For example, among the Hadza in east Africa, the likelihood of a hunter getting anything when they go out is 3% per trip. And I’ve gone out with them and I can tell you that we didn’t get anything. So there are two things that you then face. You’re not going to get anything most days, therefore you have to be able to share somebody else’s meat. And the other thing is, when you do get something, you can’t really use all of it.

Sam: And I said you don’t have a fridge. Well, that’s the first problem: you couldn’t keep it. And secondly, you don’t have the capacity to defend that large whatever it is, a kudu, a wildebeest, or some other large animal, against the others if you tried to keep it from them. And keep in mind, something like a kudu, a large ungulate in Africa, that’s something like a third of a million edible calories of meat. So that’s a huge amount of food. So we think that our hunter-gatherer ancestors did a lot of big game hunting and cooperated. And we also see some evidence of them constructing things which helped them cooperate, like large traps and devices to lure animals or guide animals into someplace where they could catch them.

Sam: The last thing is a little more indirect. There’s very little evidence that our ancestors had hierarchical social structures like governments, or any kind of status hierarchies. They seem to have been a rather egalitarian lot, at least among family units. If that’s the case, they had to have some other way of governing. These were small groups, not as small as is sometimes supposed, but they probably had something like maybe 50 members, or some number like that, in all. So it’s a three-generation group, and it could be smaller at some times of the year. But they would govern, presumably as modern-day hunter-gatherers of the 20th and 19th centuries do, by consensus and endless talking. And that’s what we observed. These groups talk endlessly, and some people are more persuasive and speak more and some speak less, and so on. They come to an agreement about such things as, “Well, should we move? We’re not actually getting what we need to eat here. Should we move? And if so, where?” Processing that information.

Sam: So there are all these kinds of cooperation in sharing food, in getting food, in making decisions. And finally, of course, a group that didn’t cooperate couldn’t possibly defend itself against other groups. And one of the things that is very likely, is that our ancestors were quite frequently engaged in hostile conflict with other groups. That’s an important part of the story about how we came to be the cooperative species. But just keep in mind that if you’re in a group with a bunch of people who don’t cooperate and you’re under attack, well, you’re out of luck.

Jim: Yeah. And I thought that was a very interesting theme in your book, the interplay between cooperation and competition, particularly competition at the intergroup level. And as you pointed out, this idea that something like warfare, or the forager equivalent of warfare, was somewhat pervasive in human history is surprisingly controversial among social scientists. Any idea why that would be?

Sam: Well, I don’t think you’re that soft-hearted a guy, but probably at some point in your life you entertained the idea that our ancestors were peaceful people and we became corrupted somewhere along the way. And then take your pick: we either got corrupted by nationalism, or we got corrupted by capitalism, or we got corrupted by something. I think we’d like to think that we descended from a nicer human being than we are now ourselves. And I think particularly troublesome to many people is the specific use that I make of the evidence about the frequency of war in our past. And that is the paradoxical statement that people became altruistic towards one another within a group because they were also hostile and intolerant of people from other groups.

Sam: Now, of course I don’t mean that as a generalization. Many of the groups that we studied, hunter-gatherer groups, are quite open. But the idea that we could have attained something of which we’re very proud, our capacity to help one another, to follow moral reasons, ethical principles, and so on, by a mechanism that involved groups winning wars, is deeply troubling to many people. And of course I have been often criticized, sometimes by people who are scientifically engaged in this research, many times by people who are quite understandably disappointed to hear the bad news that we became altruistic because wars were frequent in our past.

Jim: Yeah. And you provide some very strong data. The death rates from warfare or near-warfare in many of the civilizations that you quote data on are striking: 5% to 40% of the mortalities were from conflict. And you go, “Holy shit, that’s worse than World War II.” Right? Pretty much.

Sam: Yeah. Well, if you look at Europe, the century in which the most people died in warfare on a per capita basis was the 17th century. Our hunter-gatherer ancestors, if I’m right, averaged maybe 14% of deaths from warfare. Everybody has to die from something, so you can calculate: did you die in warfare, or did you die in your sleep, or in some other way? 14%. I could be off on that by quite a bit, but it’s interesting that the evidence that I got from archeological data and from ethnographic data, from a fairly large number of studies, was really quite consistent. So that’s a huge amount of deaths. We’ve never experienced anything like that. If you take a short period of time, if you looked at the Soviet Union and Germany during the Second World War, both of whom suffered huge casualties, of course, then you’d have more. But that’s cherry-picking. Take a long swath like a century and you won’t find anything like that. So yes, our past was a warlike past.

Sam: And I don’t say this to say that we were disposed to be warlike. Think of the conditions. In the past, up until the last 11,000 years, the climate was extraordinarily volatile. You had shifts of [inaudible 00:14:07] degrees centigrade over a period of a thousand years. Just for reference, we’re deeply, deeply worried now about a shift of two degrees centigrade. There were some big shifts over even shorter periods than that. Now imagine what that means. As the weather changes and it gets cooler or warmer, and of course other things change too, the animals on which you subsist move. And you’ve got to move, too. Now, how much do you have to move? Well, you have to move huge distances. To accommodate the kind of climate shift I just mentioned would involve taking a trip on foot from Cape Town on the southern tip of Africa all the way up to Mombasa on the Kenyan coast, almost up to Ethiopia-Somalia.

Sam: So that’s how much we would have had to migrate north and south to accommodate those temperature changes. On the way you meet other groups who are also looking for a new place to live. So our past was not, “Okay, we have this place over here and two valleys over you’re there, and that’s been that way for five generations. Yeah, sure, occasionally we have a little dust-up about something.” No, that wasn’t it at all. We were all on the move looking for a good place. So it’s not too surprising that conflicts were almost certainly frequent during our past.

Jim: Yep. That was very interesting. As you talked about the hunter-gatherer or forager epoch, you quoted my friend Chris Boehm quite a bit. I love his book Hierarchy in the Forest, which I’ve told him he really should have titled Anti-Hierarchy in the Forest, where he goes into great detail on the social adaptations, and probably social-biological adaptations, that our forager ancestors accomplished to essentially create that egalitarian style of management that you talked about. Can you talk about that a little bit?

Sam: Yeah. Well, that book was an inspiration for me. It wasn’t so much the book; I had read some of Christopher Boehm’s work before, but I didn’t know Chris. And of course I hunted him down and we spent a lot of time together talking about those issues. Now, Chris’ idea is this: humans are very good at making sure that people don’t rule over us or boss us. We form coalitions to prevent that, and so on. And we’ve done this, in the ethnographic evidence, probably way back to our prehistoric ancestors. For example, suppose you don’t have a government, but you have a rule that everybody’s supposed to work hard at hunting or gathering, and then you’re supposed to share stuff. And then suppose there’s somebody who says, “Well, I’ll share in the pot, but I’m not going to do any work.” Well, what do you do about that guy? Because you can’t just dial 911 and say, “We’ve got a problem here.”

Sam: Well, what Chris shows, what Chris said, and what is well documented in the literature, is: first you gossip about the guy. And gossiping is good, because you get to see how many allies you might have if you have to do anything more. Then you start harassing him with sarcastic remarks. And if he doesn’t shape up, you threaten him. Chris Boehm and others have shown that in many cases you actually assassinate people who don’t abide by the rules.

Sam: Now this is entirely a bottom-up kind of thing. And it’s coordinated, as I say, through gossip, humor, joking, and so on. Elementary democracy. And by the way, there are other animals who do this as well. There are a number of primates in which subordinate males will gang up on a dominant male. The dominant male is mate-guarding a female, and they’ll challenge him. And by the way, they almost always win. That is, coalitions of lower-ranking members of the same species, baboons for example, are very good at working together. And then they have sexual access to the female. So what Chris thinks is that long before we were biologically modern humans, there was the beginning of this process of cooperation in our ancestors that we now see in full flower in the process of democracy, the notion of individual rights, collective decision making by majority rule, and so on.

Jim: And he also makes, I always thought, the interesting and somewhat ironic point that while baboons can cooperate in groups to overthrow the alpha, that’s not so in chimpanzees, our closest relative, where, generally speaking, the alpha will hang in there for quite a while because he can probably beat two betas. However, as Chris points out, two human betas with spears can certainly kill the alpha human. And so the development of weaponry may well have been the tipping point that allowed humans to escape our biological heritage of dominance hierarchy as in chimps, and even to a lesser degree bonobos, and establish a much more egalitarian way of living.

Sam: You couldn’t be more right. The ability to kill at a distance played a huge role in the evolution of human society.

Jim: The other point which you made in the book, and which Chris and others make, is that the idea that these forager bands were very tightly related to each other turns out not to be true. As far as I understand, people came and went from the bands, sometimes as family units, sometimes as individuals. People who were exiled from one group would get adopted by another, et cetera. So we learned to cooperate well outside the kinship circle at a quite early stage.

Sam: I’ll take that up in just a minute, Jim, but there’s one more point I wanted to make about the distant past. The idea that humans always were a cooperative species is, in my mind, questionable. We probably developed our ability to cooperate slowly, in stages. Some people have thought that it probably had to do with the development of our capacity for language, and that certainly is part of it, to lay down what some of the expectations are. There’s a big event in human prehistory, which is the peopling of the entire world, of the entire human world, by some groups from somewhere in Africa. People used to think the Rift Valley. Now people think it could be somewhere along the Indian Ocean coast of South Africa, at the Klasies River.

Sam: But what happened was really astounding. Somewhere between 60,000 and 45,000 years ago, again, it’s debated, a small group, which could have been a couple of hundred people, left Africa and started to spread and reproduce, coming fairly quickly into Europe, and then eliminating the Neanderthals, though only after some time, and getting as far as Australia in no time. Ten thousand years later, they were populating Australia. And remember, there are very large bodies of open water between, say, the Indonesian islands and Australia. So what did these people have that made them, all of a sudden, burst onto the world and populate it pretty quickly? Now, people have thought, “Well, there must’ve been some genetic change that happened there.” And people have speculated about what that could have been.

Sam: No one has come up with a good idea. Most of the things that could have changed, for example the construction of our ability to speak and so on, happened much, much earlier. And so the idea that there was a genetic change that facilitated this … it’s a nice idea, because then we could say, “Oh, there must’ve been some gene,” and just let it go. But nobody’s come up with a good argument. I think what happened is this: I think that one or more groups of humans figured out how to cooperate in opposition to other groups. In other words, they figured out how to defend themselves by developing a set of norms about generosity. They also probably figured out the norms I mentioned about insurance and sharing. They got very good at that. Those groups are the ones that spread.

Sam: Now, how do we know that? Well, we know that some of the groups, for example around the Klasies River, were hunting gigantic animals. There’s evidence that they hunted an animal, what’s called today the Cape buffalo, which is 2,000 kilos of animal. So these were obviously good cooperators. So my guess is that this explosion of modern humanity into the world that took place tens of thousands of years ago could have been promoted by the thing we did well: cooperating within the group and defending ourselves or aggressing against others. Now, the question you asked about group size?

Jim: And relatedness.

Sam: Yeah, relatedness. I would say that probably 30 years ago among anthropologists and those who study human evolution, there was a view that hunting and gathering groups were very small and they were primarily relatives. Now, were that the case, it wouldn’t be very hard to explain how human beings became so cooperative, because most of the people with whom you’re interacting would be your siblings, your parents, and so on, people with whom you’re closely related from a genetic standpoint. So helping them is helping your own genes. That’s a well-known possibility in biology; it is called kin altruism. It means altruism towards close kin, and there’s no real puzzle about how that could have developed.

Sam: But of course, what we observe today is people dying not for their sibling, but dying for their country, or dying for some idea of a religion, cooperating on a grand scale in ways which also have nothing to do with violence. For example, regularly voting for social insurance policies to help people other than themselves, people who they feel they are somehow responsible for, either because they’re a fellow human being or a fellow citizen and so on. So the kind of altruism that we observe, and which makes human beings so, so unusual, is altruism towards strangers. The real question is how could that have evolved?

Sam: Well, the first step is to think about whether our ancestral groups really were so small. Now, there’s a lot of evidence that they were not. If you look at the size of what are called demes, or bands, the average band size across a group of studies of groups that could have been like our ancestors was 26 people. That’s neither small nor large, but that band then would, of course, meet with others. In the summer they’d form larger groups, and so on.

Sam: Now, we’ve been able to study this in detail. For example, you have pictures of all 540 members of some language group in Latin America, and then you go around to each person and you say, have you seen this person? Have you ever hunted with them? Have you ever exchanged goods with them? Even rather unusual things: have you ever joked with them? In one case they even asked, did you ever tickle each other? I don’t know, probably tickling is a big thing with them. And what was remarkable is that in this group, the Ache, there was a tremendous amount of contact between people. So the effective number of people you’re dealing with is really quite large.

Sam: So, that then poses the problem: people are helping each other even though they’re not related, or not related very closely. That’s the problem that I set out to try to figure out many years ago. It seemed to me a puzzle, and I’d always been interested in how humans come to want the kinds of things that we want. In economics we say that has to do with the evolution of preferences, and in biology we tend to talk more about behaviors, but I wanted to know how that changes over time. So I thought, well, that’s a really hard problem, isn’t it? And that’s when I started asking around, “Well, who’s ever studied this thing?” And that’s when I met Chris Boehm, and that’s when I met Marcus Feldman, who was also associated with the Santa Fe Institute, and Robert Boyd, who is also associated with the Santa Fe Institute. And that’s what really kicked off this project: how could altruism have evolved?

Sam: Now, the reason why it’s a hard problem is really pretty simple. Helping someone else at a cost to yourself, that’s the definition of altruism. Okay. Why would that not evolve? Well, just think about it. Suppose we’re in a group of 20 people and 19 of us are altruists. We’re all helping. And there’s this other guy who’s not helping. So now he’s going to be not bearing any costs and getting all the benefits. That means in the very long run, he’ll have more resources because he’s not bearing those costs. For example, he’s not going out on the hunt, therefore he’s not risking being injured or being killed and so on.

Sam: So that person who’s the non-altruist, the selfish one, will have a higher likelihood of reproducing than will the ones who are being cooperative. So biologists have expected that what would happen in a group is that eventually the free riders, that is, the non-cooperative ones exploiting the cooperation of the majority in that group, would take over. And I think that’s a pretty good argument. That is, in every group you would expect to see the non-altruists doing well, unless there’s some other mechanism working against them. At face value, they’re going to have an advantage.

Sam: So since the 1960s, most biologists have thought it just was impossible for an altruistic trait to develop, except for altruism towards children, close kin, siblings, and so on. There was an idea that had been around for a long time called group selection. It says that somehow groups might do better if they had more altruists in them, and therefore, if most of the altruists are in groups which have a lot of altruists, then they’d be helped. But in the ’60s, ’70s, and ’80s, biologists came to the conclusion that, well, yes, it could work in theory, but it could never work out in practice. Just empirically, it was unlikely that group competition could be a sufficiently powerful force.

Sam: But here’s how it could work. Suppose in every group, there are some altruists and there’s some non altruists and as I say, the non altruists are kind of taking over, but occasionally two groups meet and they have a contest for resources, maybe a war, some kind of conflict. And the one that has more altruism in it tends to win because they cooperate more, or perhaps because as cooperators in hunting and so on they’re better fed. So if the cooperative groups win, then, oh, well then they would populate the site of the group that they just defeated. Their population would grow. So that’s a counteracting force.

Sam: But what it comes down to, then, is that there’s a horse race between the within-group selection, which is eliminating the altruists, and the between-group selection, which is eliminating the non-altruists. It’s just a question of which of those is going to go faster. And that’s roughly where things stood in the ’60s, ’70s, and ’80s. People said, “Well, yeah, that’s right. But there’s no way in the world that this competition between groups could be as powerful a force as the natural selection going on within groups.” And that’s what piqued my interest. It’s sometimes an advantage never to have been a grad student or an undergraduate major in a field that you get into, because I didn’t know what a stupid idea group selection was. I thought it sounded pretty reasonable to me.
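
As a rough illustration of that horse race, here is a toy simulation sketch; it is not Bowles and Gintis’s actual model, and every parameter (costs, benefits, conflict rate, group count) is invented for illustration:

```python
# Toy sketch of within-group selection against altruists vs. between-group
# selection for them. All parameters are hypothetical.
import random

N_GROUPS, GROUP_SIZE = 20, 26   # 26 echoes the average band size mentioned later
COST, BENEFIT = 0.03, 0.05      # fitness cost of altruism, per-capita benefit it creates
CONFLICT_RATE = 0.25            # chance per generation that two groups fight

def reproduce(group):
    """Within-group selection: altruists (1s) pay COST, everyone shares BENEFIT."""
    share = sum(group) / len(group)
    weights = [1 + BENEFIT * share - (COST if member else 0) for member in group]
    return random.choices(group, weights=weights, k=GROUP_SIZE)

groups = [[1] * 13 + [0] * 13 for _ in range(N_GROUPS)]   # start half altruist
for generation in range(200):
    groups = [reproduce(g) for g in groups]                # altruists lose ground here
    if random.random() < CONFLICT_RATE:                    # between-group selection
        a, b = random.sample(range(N_GROUPS), 2)
        winner, loser = (a, b) if sum(groups[a]) >= sum(groups[b]) else (b, a)
        groups[loser] = reproduce(groups[winner])          # winners repopulate the lost site

altruist_share = sum(map(sum, groups)) / (N_GROUPS * GROUP_SIZE)
print(f"Altruist share after 200 generations: {altruist_share:.2f}")
```

Which force wins depends entirely on the parameters, which is exactly the empirical question Sam turns to next: how frequent were wars, and how genetically different were the groups.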

Sam: It’ll be a little bit like, suppose you started studying economic policy today, but you’d never gone to undergrad economics and never gone to grad school in economics. Well, you probably wouldn’t know that tariffs were really a stupid idea. You wouldn’t know that free trade had to be the way to go. So you would start looking at well, in some cases, some countries have done pretty well using tariffs, other cases it didn’t work. You might approach it with an open mind. So I looked at group selection and I said, “Well, there are two things that are required for it to work.”

Sam: One is that wars have to be frequent. Okay, we can tick that box already. And it has to be, of course, that the ones who are more cooperative win. I think that’s pretty plausible. But the second is that the wars have to matter for natural selection. And what that means is that the people who die in the wars, either because they die in combat or because they’re displaced and don’t reproduce in the inferior new site that they have to occupy after they lose a war, have to be different enough genetically from those who won. So that when the altruistic ones, those with altruistic genes, take over, that’s a big bump up in the number of altruists in the population. If the groups were just a little bit different, well, it wouldn’t, obviously, make much difference. It wouldn’t be a powerful force to counteract the individual selection going on within the groups.

Sam: So, that’s the second thing I set out to try to figure out: how genetically different are hunter-gatherer groups? Now, I had set a tough job for myself, because if the groups had been really small, basically families, well, then they’d be pretty different one from another genetically. But if they’re larger and include a lot of non-related people, then it’s not as likely that they’d be very different. So I spent a decade, well, that was a decade ago and I’m still working on it, so a couple of decades I’ve been working on how different hunter-gatherer groups are genetically. Establishing that was the next thing I had to do.

Sam: But with those two pieces in order, finding out how genetically differentiated the groups are and how frequent the wars were, then I could go ahead and, just as an empirical matter, say, “Look, could this thing actually have worked?” So that humans became altruistic as a matter of our genetic makeup, not just as a matter of our culture.

Jim: And what was the bottom line on the relatedness question?

Sam: I was amazed, and so was everybody else. Now, just a little background in population genetics. Let’s take a simple case. We’re only looking at one locus, where there’s a gene for altruism and a gene for non-altruism, or the absence of that gene. That’s of course a fiction; there’s no reason to think that there’s “a gene” for altruism. There are probably a whole lot of genes or alleles affecting that. But suppose we just had this hypothetical gene. What we’d want to do is say, look, within one group there are some differences. Some people have it, some people don’t have it. And within the other group, the same is true. But then there’s a difference between the mean, the average, in one group and the average in the other.

Sam: So what we want to look at is how different the groups are, one from another, compared to how different people are within each group. And there’s a measure of that due to the great biologist Sewall Wright, from the ’30s to the ’50s. Now, that measure of genetic differentiation can be measured on ordinary genes. Fortunately, starting in the ’40s, ’50s, and ’60s, in hunter-gatherer populations around the world, people had been collecting blood and doing genetic analysis. I got the data from those studies and calculated this measure of genetic differentiation. And in some cases it was astounding.
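
For reference, Wright’s measure of between-group genetic differentiation is usually written F_ST; a minimal sketch for a single two-allele locus, with invented allele frequencies, looks like this:

```python
# Minimal sketch of Wright's fixation index F_ST for one biallelic locus.
# The allele frequencies below are invented purely for illustration.

def f_st(subpop_freqs):
    """F_ST = (H_T - H_S) / H_T, assuming equal-sized groups.
    subpop_freqs: frequency of one allele in each group."""
    p_bar = sum(subpop_freqs) / len(subpop_freqs)                 # pooled allele frequency
    h_t = 2 * p_bar * (1 - p_bar)                                 # expected heterozygosity, total
    h_s = sum(2 * p * (1 - p) for p in subpop_freqs) / len(subpop_freqs)  # average within groups
    return (h_t - h_s) / h_t if h_t > 0 else 0.0

print(f_st([0.2, 0.8]))    # very different groups -> F_ST = 0.36
print(f_st([0.45, 0.55]))  # similar groups -> F_ST = 0.01
```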

Sam: For example, off the coast of Australia there are two islands with people of two different language groups. You can see the islands from each other, and both groups used to go to the mainland, and I suppose they may have intermarried. It couldn’t have happened very often, however, because the level of genetic differentiation between the people of these two islands was of the same magnitude as the level of genetic differentiation between either one of them and groups in Siberia, for example, and other far, far distant places.

Sam: So there was something going on which meant that groups were maintaining some genetic separation, sufficient so that groups really were very different. Now, that’s turned out to be controversial, because it’s an essential part of showing that group selection can work. There are still many biologists who disagree. I think it’s safe to say that biologists who have been educated in the last decade or two understand that group selection is no longer in the doghouse. It’s now an important part of the evolutionary toolkit by which you understand how some behavioral traits may have evolved. But those educated earlier still have the view that, well, you can forget about group selection, it just doesn’t work as a matter of empirical fact.

Sam: And that’s why I published a series of papers in the Journal of Theoretical Biology, in Science, in Nature, a total of three or four papers in Science, and also in the Proceedings of the National Academy of Sciences of the US. I had to publish these things in the top journals of the scientific fields, because they really were quite surprising. But I think both the frequency of war and the separateness of the groups are something which we can be reasonably confident of. That being the case, it’s quite likely that we would have developed not only a genetic predisposition towards helping each other, but also towards defining ourselves as a group different from the others. So for this altruism, my co-author Jung-Kyoo Choi, a Korean scholar, and I coined the term parochial altruism. Parochial means narrow-minded or closed-minded about outsiders. And so we used the term, which is sort of jarring and maybe offensive to some people: parochial altruism means you’re altruistic towards the people in your locality, who you call us, and not necessarily towards outsiders.

Sam: Now, the question then is, so now we’re talking about two loci, two genetic locations, at one of which there could be a gene for altruism or not, and at the other a gene for this parochialism, or intolerance of outsiders, or not. And then the question is, suppose you had just altruism as a gene, but no parochialism. So basically people were tolerant and never had wars. Pretty clearly, that didn’t work. We showed that through simulations, but it’s also pretty easy to show through standard mathematical methods in population genetics that that would not work. And think about the opposite. Suppose you had a possibility of being parochial, but not altruistic. That also wouldn’t have evolved. What evolves jointly is the capacity to help one another and to distinguish between us and them, and to be willing to carry out violence against the outsiders.

Sam: Now, that’s a kind of a remarkable thing, that these two traits of humans could have evolved together. This is called co-evolution. Now, if you asked me, “Well, do you know that we’re genetically predisposed to be either altruistic or parochial?” No, of course I don’t know. There is no evidence of any such gene. What I was doing, and what I’ve been doing for the past, I guess, three decades, is trying to show that the arguments about how there could not be a genetic predisposition towards altruism are false. Whether there is or not is another question. I was simply trying to clear the decks and to eliminate the, I think, false biological arguments that human beings could not be genetically predisposed to altruistic behavior towards each other.

Sam: Because the other thing about the limits of this population-genetic reasoning is the following. I think one of the reasons that people get upset when I talk about parochial altruism and war is that people think that because of this long history of warfare and competition, and also of helping each other, because that’s our legacy, it has to be our destiny. And of course that is absolutely untrue. Human beings are a cultural animal, and we’ve shown tremendous capacities to change how we behave towards each other, to completely reboot.

Sam: Now, that’s a wonderful thing about humans. I’ll give you a couple of examples. The idea that we have genetic capacities, if we do, to be parochial simply means: oh, better watch out for tribalism, because that could be just below the surface. I completely agree with that, but it can be contained. Think of all the other things that are just below the surface, for example, sexual desire towards a stranger, the desire to have sexual pleasure just because there’s pleasure. Well, we just don’t act on that. But do you think we don’t have a genetic predisposition to seek sexual pleasure? Of course we do. And it doesn’t even come up, it doesn’t even come up except for psychopathic people and rapists and so on. That’s what culture does. It says there are certain things that you’d like to do genetically which you just don’t do.

Jim: Exactly. It’s Hume’s is-ought thing, right? You can’t say that because we have some historical biological tendency to X, that’s how we ought to live, when we’re perfectly capable of generating a culture which can diverge from our, in many cases relatively mild, genetic tendencies. Another example: when you go back to the anthropological record, something like 10% of the deaths of males are the result of male homicidal jealousy. And we don’t do that one too much anymore, not zero, but we certainly don’t suffer a 10% death rate amongst males from homicidal jealousy. We can learn.

Sam: One of the things that I take a lot of hope from about humanity, and our ability to be a cultural animal and to overcome these hostilities, and this will surprise you, what I take hope from is nationalism. Now, nationalism has been the basis of a lot of wars, a lot of terrible deaths, and a lot of crimes have been committed on behalf of nationalism. I’m not talking about that. I’m talking about the capacity of a nation to create a group of people who call themselves French, who had nothing to do with each other three or four hundred years ago. They spoke a common language, but take the case of Italy. They called themselves Italians, but they could barely understand each other. And they certainly didn’t like each other. Now, the modern nation, America, is the same.

Sam: That is, we’ve created a nation out of people who come from entirely different origins and so on, who in other circumstances would have been at war with each other, and in some cases have been in conflict even within the United States. But we’ve created an idea, so that people are willing to [inaudible 00:41:59] their lives on behalf of other people who are really completely different. And what’s the reason for that? Well, they’re our neighbors. They’re Americans. When I think of the people, I think often of 9/11. When I think of the first responders, mostly firemen, entering the building knowing the building was going to come down, that’s unbelievable. They didn’t know the people in the building. They did it because that’s the kind of human being they wanted to be. Now, we have that capacity. We’ve done it again and again.

Sam: If somebody had told me when I was growing up in the ’40s and ’50s that we were going to have an African-American president in the year 2008, I would’ve said, “What are you smoking?” That’s amazing. Now, do I think that racism is over in America? I’m sorry, I don’t think it’s over. But we have the capacity to do things we haven’t ever done before. And I think we have the capacity to address that one, too, within America today.

Jim: Yeah. It’s definitely the case that we seem like we can do it. And you point out nationalism, and of course the other, even broader one is religious affiliation, where people say I’m a Christian or I’m a Muslim or I’m a Hindu. They can identify with each other at an even larger scale than nationalism. And then the question is, can we ever get to the level where we say we are humans? The humanist perspective.

Sam: Yeah. Well, it’s interesting. I think the us-and-them distinction is really important. We do lots of behavioral experiments with things like what are called public goods games, in which some people can cooperate and other people can free ride on them, and then you see how people function in those settings. In that setting, people appear to want to cooperate, but if some people in the group are free riding on them and then you play the game again, they’re going to say, “Well, screw that. Basically, last time I cooperated and I got exploited. I’m not doing it again.” So eventually cooperation unravels in those games.

Sam: However, if you complicate the game a little bit, like having a group A, group B, and group C, and they’re all playing this public goods game, and then the group that does best, that gets the highest payoffs, is going to get a special prize from the experimenter. In other words, the ones that cooperate are going to be the winners. Once you introduce competition between the groups, oh, they cooperate like crazy, because it’s just like nationalism or something. So now let’s think about that. What is it? In the past, I think, I’ve been able to reach out to you as a fellow human being because we thought there was something else out there that was opposed to us, and that brought us together and stressed our commonality. And that tended to be some other human being or some other threat, some external threat, and so on.
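
A minimal sketch of the public goods logic Sam is describing; the endowment, multiplier, and prize below are made-up numbers, not the experimenters’ actual parameters:

```python
# Toy public goods game. Each player contributes 0..endowment; the pot is
# multiplied and split equally among the group. Numbers are hypothetical.

def public_goods_round(contributions, multiplier=1.6, endowment=10.0):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Without group competition, a free rider (contributes 0) out-earns the cooperators:
print(public_goods_round([10, 10, 10, 0]))   # cooperators get 12 each, free rider gets 22

# With group competition (the A/B/C variant), an extra prize goes to the group
# with the highest total payoff, which rewards the groups that cooperate:
def group_payoffs(groups, prize=20.0):
    payoffs = [public_goods_round(g) for g in groups]
    winner = max(range(len(groups)), key=lambda i: sum(payoffs[i]))
    payoffs[winner] = [p + prize / len(payoffs[winner]) for p in payoffs[winner]]
    return payoffs

print(group_payoffs([[10, 10, 10, 10], [10, 10, 10, 0], [0, 0, 0, 0]]))
```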

Sam: Well, I wonder. When people say, “Well, when the Martians come, we’ll take care of this,” let’s not hope for that. What I’m wondering is whether the recent pandemic, the ongoing pandemic, which is to some extent a really existential threat, a massive pandemic, could have some of that capacity. And indeed there have been really heroic measures undertaken by people taking great risk in the medical profession, first responders and so on, and people working in grocery stores and so on. So I think looking at how we respond to that kind of threat, which is not a human threat but some kind of external threat, will be very interesting, because our responses to it have been in many, many cases highly cooperative and admirable. But there’s also been an element of tribalism. And it’s not just because a head of state would call it the Chinese virus; for example, there’s the reported uptick in hostility towards Asian people, not simply in the United States, but around the world.

Sam: So there are these two sides of this thing. Is this going to be an opportunity for the cooperative part of our species to come out, or is it going to be a chance for the tribalistic part to be expressed? And I think a lot of that depends on leadership.

Jim: Probably an even stronger example, because it’s going to be with us continuously for a long period of time, is climate change, in the broader category of overrunning the carrying capacity of the earth in various dimensions.

Sam: I was thinking the same thing, Jim. That’s the big one, and we’re going to be facing that. And when I say “we,” I don’t mean we as opposed to somebody who’s not we; it’s going to affect the world in common. And to an important extent, climate change is what you might call an equal opportunity problem. It’s going to affect everybody. Now, it’s going to affect some regions more and some regions less, but no region is going to be unaffected by this.

Sam: In a sense that’s good, because some of the victories that have been won for tolerance and so on, for example the civil rights movement, really were cases where there was a group of people who were being very poorly treated, and continue in many ways to be poorly treated, and who needed to have some changes made in the rules of the game. Okay, fine. That was a very powerful movement and continues to be a powerful force in American society. But a lot of struggle went into making that happen.

Sam: But I remember one day at UMass, the University of Massachusetts, where I was then teaching, I noticed they were constructing ramps into all the buildings so they became wheelchair accessible. I thought, “Wow, that’s fantastic,” and it was everywhere; it happened in the course of a very short period of time. And I remember I’d seen the Chancellor actually going around the campus in a wheelchair just to feel what it would be like. And I thought, “Well, how did that happen?” Of course, there had been very important movements of handicapped people, but nothing [inaudible 00:48:03] the civil rights movement. And yet massive victories were achieved.

Sam: Well, why is that? Well, I think part of it is, everybody’s in a family with people with handicaps. As I say, it’s an equal opportunity problem. Nobody’s exempt, and therefore everybody has the capacity to put themselves in the shoes of some other person and say, “Oh gee, I can see what that’s like.” They don’t even have to think, “It could have been me.” I think people are generous enough to just put themselves in the shoes of somebody else and say, “Oh my God, that would be really horrible if I couldn’t get into that classroom. Yeah, we got to do something about that.”

Sam: So, I think maybe climate change will bring out the cooperative capacities and our ability to put ourselves in other people’s shoes, because there’s going to be no group that really escapes the effects. Although, tragically, some of the poorest groups in the world will bear the heaviest burden.

Jim: Yeah. That’s what, unfortunately, it looks like.

Jim: Well, we kind of popped up to a very high-level discussion, which I had in my notes for later, but let’s now go down to a lower level of analysis, which you go into quite a bit in your book, and I know in some of your published papers as well, where you have used relatively simple but insightful games with people in various cultures that tease out how people think and how they are both similar and different, to what degree they cooperate, to what degree they compete, et cetera. Let’s talk about those a little bit. Probably the most famous is the prisoner’s dilemma. Could you define what that game is and then maybe talk a little bit about what a naive analysis would say? And then we’ll go from there.

Sam: Great. The story about the prisoners is that there are two guys walking down the street in an affluent neighborhood, and they’re arrested and they have housebreaking tools on them. But that’s really the only evidence that the officers have. And so they interrogate them separately [inaudible 00:49:59] person is told, “If you’ll just admit that you were burgling, well, your partner will be put away for a long period of time, but you’ll get off after a year.” And suppose each of the prisoners thinks, “Well, if neither of us confesses, then we’re going to get off. But if the other one rats on me, I’m going to get stuck,” and so on.

Sam: Well, the structure of the game is that the best thing for you is to rat on the other guy, because whatever he does, you’re better off ratting on him. Each prisoner reasons, “Okay, so if neither of us confesses, we’re both going to go away for two years. If I rat on the other guy, I go away for one and he goes away for four. And if we both confess, we both go away for three. Well, obviously I’m going to rat on him, because I’m better off ratting on him no matter what he does.”

Sam: So, that game was kind of a problem for people, because it’s so paradoxical. Students really have to puzzle over it. The thing that’s best for both of them, not ratting on each other, is something which neither will do, independently of what the other one does. Now, of course, the understanding here is they can’t come to some agreement beforehand that would be binding on them.

Sam: Now, when economists started using this game more than half a century ago, the idea was, “Well, it’s pretty obvious what people are going to do.” If there’s some strategy in a game for which your payoffs are higher independently of what the other person does, that’s called a dominant strategy. So that’s what you should do; you don’t even have to know what the other guy did. So don’t cooperate with the other one, be a free rider, rat on him.
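
For readers who want the payoffs laid out, here is a minimal sketch of the dilemma as Sam describes it, with years in prison as the payoff (lower is better); the exact numbers are illustrative:

```python
# Prisoner's dilemma in years of prison (lower is better). Illustrative numbers.
YEARS = {  # (my move, other's move) -> (my years, other's years)
    ("silent",  "silent"):  (2, 2),
    ("silent",  "confess"): (4, 1),
    ("confess", "silent"):  (1, 4),
    ("confess", "confess"): (3, 3),
}

def best_reply(others_move):
    """My best move given the other prisoner's move (fewer years is better)."""
    return min(("silent", "confess"), key=lambda mine: YEARS[(mine, others_move)][0])

# Confessing (defecting) is better for me whatever the other does -> dominant strategy:
print(best_reply("silent"))   # confess: 1 year beats 2
print(best_reply("confess"))  # confess: 3 years beats 4
# Yet mutual silence (2, 2) is better for both than mutual confession (3, 3).
```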

Sam: But when the game is actually played, a lot of people don’t rat on the other guy; they cooperate. “Cooperate” means don’t rat on the other guy, and “defect” means rat on the other guy. So when the game is played, a very substantial number of people just don’t defect. And very often they jointly don’t defect, and they get off or they get a light sentence.

Sam: Now, why is that? Well, it’s interesting. If you play the game sequentially, so one person plays first and then the other person plays, you get a very interesting result. So suppose you’re in this game, you and I are playing, and let’s say I’m selfish, but I happen to know that you think reciprocally. You’re the kind of person who plays tit-for-tat: “If Sam screws me, well, okay, I’m going to take him to the cleaners.” Great. I know that about you. I’m not a nice guy like you; I’m just watching out for myself. I think about this.

Sam: Okay. Well, let’s get out of the prisoner situation and just talk about some cooperation game. If I cooperate, then you’re going to say, “Hey, Sam’s a nice guy. I’m going to cooperate, too.” And if I defect, you’re going to say, “Sam’s a jerk. I’m going to defect on him, too.” Okay, well, that reduces the game to just two possible outcomes. One is we both cooperate, and the other is we both defect. So, obviously, I’m the first player in the game. Even though I’m entirely selfish, I’m going to cooperate.

Sam: So, what that means is, as long as there are a lot of people around like the one I just attributed to you, the kind of tit-for-tat guy, the reciprocator, the game is [inaudible 00:53:39] going to be played cooperatively a lot. And when people talk about the game, when they defect on somebody else, it’s not because they want to make a bundle in payoffs and so on. It’s because they’re so angry at the idea that the other person is going to defect on them. So the fact that that game wasn’t played the “right way,” according to economists and game theorists, was a kind of a wake-up call.

Sam: But the real breakthrough in economics came in the 80s and 90s. I should say to my colleagues from sociology and particularly psychology, a lot of what economists discovered through behavioral economics in the 80s, 90s, and later about human cooperation and about how we’re not routinely selfish and amoral, psychologists and sociologists have been saying all along. So, it wasn’t really big news to a psychologist if an economist came along and said, “Oh gosh, I just discovered that human beings are not always selfish. And sometimes we’re ethical and we often care about others.” But this was news in economics because mostly economists don’t read psychology and sociology or anthropology, so they were not aware of those experiments that were done and the observations in other fields.

Sam: But in the 80s and 90s, a set of experiments happened which really blew the lid off. And it was not a prisoner’s dilemma. It was what was called an ultimatum game. And this game is really clever. Here’s the way it’s structured. It’s a sequential game, so there’s a first mover. Suppose I’m the first mover and you’re the second mover. The experimenter gives me a pot of money. And you know how much that is. It could be $100, it could be $1,000. I’m not kidding. Or it could be $10. It’s been played for large sums, small sums.

Sam: So, suppose the pot, which is what I got, is $100. Then the experimenter instructs me, and you hear these instructions, I now have the ability to send you some part of that. I can send $0, I can send $100, I can send anything in between. But here’s the hitch: if you don’t accept what I send, we both go home with nothing, pockets empty. If you accept it, that’s what you get.

Sam: Okay. Now, think like a game theorist. I’m the first mover, what should I do? Well, if you’re an economist and you think everyone is selfish, here’s how I reason. Well, okay. I’m selfish. Jim is selfish. It means Jim will accept a penny because a penny’s better than nothing. If he rejects the penny, he gets nothing. So, why would he ever reject the penny? So, I’ll offer him a penny and he’ll accept and [inaudible 00:56:22] end of the game. And I’ll go home with $99.99. Guess what? Game is never played that way. Never, ever, ever.

Sam: Here’s what happens when people play the game. The offers from me to you are usually about 40% to 50%. The most common offer is 50%. Offers below a quarter of the pot, where I keep $75 and you get $25, are not very common. And here’s the important thing: when they do happen, they’re frequently rejected. Now, okay, you think about that. Well, the thing that’s striking, to start off with, is you think, “Oh, Sam was really generous.” He gave, let’s say, the modal offer, $50. Well, that’s not that interesting [inaudible 00:57:15] I gave you $40 and kept $60. Why would I give you so much? I must be generous. No, not “I must be generous.” I just must be prudent, because I know you’re going to reject something too low. So I’m just trying to figure out what’s the most I can get if I’m selfish.

Sam: The interesting fact is not what I do, which is to give you $40 out of the $100. It’s what you do, which is you reject $25 if you think it’s unfair. Because you just paid $25 to punish me. And you got nothing. You could have had $25. But you went home, pockets empty, just so that I would go home with my pockets empty, too. Now, that’s amazing. It happens again and again and again. And even when large sums are on the table, people are willing to say, “No, I’m not going to accept that.”
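
A minimal sketch of the ultimatum game logic; the responder’s “minimum acceptable share” is a made-up behavioral parameter used only to illustrate rejection, not a number from the experiments:

```python
# Toy ultimatum game. Proposer offers part of the pot; if the responder rejects,
# both go home with nothing. Thresholds and amounts are hypothetical.

def ultimatum(pot, offer, min_acceptable_share):
    if offer / pot >= min_acceptable_share:
        return pot - offer, offer          # (proposer's take, responder's take)
    return 0.0, 0.0                        # rejection: both pockets empty

# The textbook "selfish" prediction: offer a penny, responder accepts anything > 0.
print(ultimatum(100, 0.01, min_acceptable_share=0.0))   # (99.99, 0.01)

# What people actually do: low offers get rejected, at a cost to the responder.
print(ultimatum(100, 25, min_acceptable_share=0.30))    # (0.0, 0.0) - the $25 is refused
print(ultimatum(100, 50, min_acceptable_share=0.30))    # (50, 50) - a fair offer is accepted
```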

Sam: These games were played all over the world among [inaudible 00:58:10] and sure enough, everywhere, whether it was Beijing or Louisiana or Pittsburgh, the results were very, very similar. No one ever published this, but just in the conversations I was having among my colleagues and people doing these experiments, [inaudible 00:58:28] well, wait a minute, maybe we’re not genetically selfish. Maybe we’re genetically generous or ethical or fair-minded. And look at how common all these results are. My colleagues and I, anthropologists and economists, got together, and we said, “That doesn’t sound right to me. I really don’t think this is just because everybody’s the same all around the world.”

Sam: Anthropologists, as you know, make a living by studying how cultures differ. If they didn’t differ, they’d be out of work. And so it wasn’t that hard to find a group of fantastic anthropologists, and also game theorists and others. We met and put together this group and we went into the field, I mean the field: we went to places that took two days to get to in a canoe in the Amazon. We went to places where [inaudible 00:59:13] hunter-gatherers or herders or farmers, around the world, and we played the ultimatum game. Of course, we had to adapt the games so they could be done with illiterate people, and so on, and we worked out a pretty good way of doing that. And we played them for large sums of money. Basically, the pot that we were using would be roughly the equivalent of a day’s work, a day’s wages; in most of these cases they didn’t have any wages, but it was what the wages would have been in some nearby town or something. So the stakes were high enough. And what we found was amazing.

Sam: The first experiment we did was in the Amazon. And there we found the most, call it selfish, I guess, the most consistent with the self-interested view of any of these experiments that had ever been done, with a small group of people in the Amazon. The offers were incredibly low. The average offer wasn’t 50% or 40%, it was more like 23%. And out of 70 people making these very low offers, only one offer was rejected. And you would never see that in Pittsburgh or in Beijing or Tokyo.

Sam: But, so then, we did this all over the world, and we found extraordinary things in Indonesia among whale hunters. We found that people routinely gave more than half. Why would you give more than half? Well, imagine this: you’re a whale hunter and you work with a bunch of people and you catch a whale. You bring the whale back, and what’s your part of that? Well, it’s not half. It’s a very small part of the whale because it’s for the whole village. So, they’re used to saying, “Well, okay, the others get more than me.”

Sam: We found out a huge amount about cultural differences using these experiments. And I think it’s been a valuable addition to anthropological methods, because if you wanted to ask, “Well, how selfish are these whale hunters compared to, say, [inaudible 01:01:13] Mongolian herders?” Well, there are a lot of language differences, and they’re sharing different things. By bringing a game, which was exactly the same game with the same payoffs and the same rules, we were able to compare something that was fairly similar.

Sam: Now, I’m a good enough anthropologist to know that the fact that the game is the same doesn’t mean it’s interpreted the same in each of these places. But it’s a start. And we think it’s, as I say, it’s not going to substitute for ethnographic work and cultural analysis, but it’s a tremendous aid because we’ve been able to sort of see that there really are big cultural differences [inaudible 01:01:52] As you’ve probably already figured out, what we found is that where people rely on cooperation to make their living, like the whale hunters, there’s a tremendous amount of sharing. And where people are basically independent, like the slash-and-burn farmers in the Amazon that we studied, it’s much more likely that they’d be individualistic.

Sam: And that’s a very interesting and fascinating idea, because what it suggests is that our norms and our values about how to share, how to cooperate, probably are closely related to how we actually live in the world, how we make our living. Again, we don’t know that that’s a causal relationship, but it’s so striking and you can think of the mechanisms by which it would come about. So, perhaps it is. And if it is so, then it’s an interesting statement, again, about this thing that I said I was interested in. I’m not just interested in the genetic basis for our behaviors, but also what do we learn from how we make our living? And then how does that affect the kind of human being we are?

Jim: Indeed. Of course, there’s been more recent work in some other dimensions around exactly that question. I’ve recently read Joe Henrich’s book, The WEIRDest People in the World, where he talks about WEIRD people [inaudible 01:03:03] Western, educated, [inaudible 01:03:04] democratic, and how we’re quite different on these kinds of tests than most other people in the world, though he also makes the point that the university students that you were talking about in Tokyo or Beijing, et cetera, are much more like [inaudible 01:03:18] Western folk than they are like the hunter-gatherer folk.

Jim: One of the [inaudible 01:03:23] research he does is he gives some examples, I think from Sudan, that participation in markets seems to also condition people towards making larger offers in the ultimatum game, and that people who’ve not operated in market economies haven’t learned how to do that reciprocal thing with strangers very well. And so there’s another vector, kind of a [inaudible 01:03:47] culture, behavior, genetics, all work together to produce these outcomes.

Sam: Yeah. Well, first I should say, Joe Henrich, who you just mentioned, was the guy in the canoe who went and did the first study in the Amazon. He was then a grad student. He’s now a Harvard professor [inaudible 01:04:04] fantastic work. But that idea, that there is a group of nations in which most of the experiments have been done, and that it’s really a distinctive group [inaudible 01:04:16] behavior. Now, you’ll find similar behaviors in some countries, but not all. But the thing that you mentioned about markets is really important. One finding that I mentioned is that where people cooperate in the hunt, they cooperate in the games. And in our same study, with Henrich and others, the other big finding was the one you said, which is people who had some… None of these societies were really market societies, but they had more or less market involvement.

Sam: And the ones that were more related to markets were both more generous in their first offer in the ultimatum game, and they were more fair-minded as responders, more willing to reject low offers. So, this was really news, because what it means is that association with markets is somehow associated with having standards of generosity or fairness, one or the other, or possibly both. Now, think about that for a minute. There are many scholars, and many great scholars in the past, who have associated markets, thought of markets as being a school for self-interest. It’s not only that we have markets in which self-interested people operate, but people think that markets make you self-interested. And I think there’s probably something to that in some realms. But what we found was that in societies which were hunter-gatherer and small-scale societies, having some connection to markets made you less selfish, or at least it made you more fair-minded.

Sam: And you’ll be amused at this, Jim. This got me on the front page of the Wall Street Journal, much to the consternation of many of my friends, because the headline was “The Civilizing Effect of Markets.” I’ve been a critic of the way that markets work in many respects, and so have many of my friends and colleagues, as is common in economics. People were shocked to hear that I was saying that actually markets were the civilizing force. Well, I think in the case of our experiments, they were. And by the way, that’s not a new idea. It is, of course, true that organizing an entire society around the pursuit of self-interest in markets would be very likely to further the self-interestedness of people’s behavior. I think that’s probably true. But it’s also true that markets teach us exactly what you said. Markets teach us that, in interacting with a stranger, you can actually benefit if you obey certain normal rules of the game, which are ethical rules.

Sam: And remember, in these small-scale societies in which our experiments took place, if you’re not market-related somehow, a stranger is not an opportunity; a stranger is a threat. They’re not part of your community. You don’t know what’s going on. You really can’t be sure. But what a market does is it accustoms you to the idea that you can routinely exchange goods with somebody whose name you don’t know, who you may never see again, and you both benefit.

Sam: Now, Adam Smith had a wonderful passage in which he tries to explain why you should trust merchants more than ambassadors. And the argument is, well, you have to keep coming back to the merchant, and if he treats you badly, well, that’s going to be the end of it. The ambassador, basically, is unaccountable to you. And this idea has been around for a long time. Voltaire visited the stock exchange in England and he [inaudible 01:07:40] wonderful passage, he says, “What’s going on here? We have Anabaptists and we have Catholics and we have Presbyterians and we have Muslims, and they’re all exchanging stuff and they’re cooperating. After work, one goes out for a drink, the other goes to the mosque. How can it possibly be?” So these people, he said, “The only person they would call an infidel is somebody who goes bankrupt.”

Jim: I like it. I like it. I think there’s an important aspect of economics which most people aren’t aware of, which may well be what’s driving this, and that’s called consumer surplus. Most people think that markets are dog-eat-dog, all about exploitation, and they certainly can be, particularly where we have market failure, but basic microeconomics shows us that, other than the last person who was willing to buy X, most people who buy X are actually willing to pay more for X than it actually sells for. And if you draw the curves, you have the nice little triangle, which is the consumer surplus. So, most of the time, when we’re outside of conditions of market failure, like monopoly or stringent licensing laws, et cetera, most people should be happy with [inaudible 01:08:52] that they’re getting more value than they’re actually paying for. And that idea of consumer surplus is surprisingly little-known in the general population.
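As a toy illustration of the consumer-surplus idea, here is a small arithmetic sketch; the willingness-to-pay figures and the market price are made up purely for the example.

```python
# Toy consumer-surplus calculation with invented numbers: each buyer's
# willingness to pay, minus the single market price, summed over everyone
# who actually buys, is the "little triangle" of consumer surplus.

willingness_to_pay = [90, 75, 60, 45, 30]   # five buyers, in dollars
price = 30                                  # single market price

buyers = [w for w in willingness_to_pay if w >= price]
consumer_surplus = sum(w - price for w in buyers)

print(f"{len(buyers)} buyers purchase at ${price}")
print(f"total consumer surplus: ${consumer_surplus}")   # 60 + 45 + 30 + 15 + 0 = 150
```

Here only the last buyer, the one just willing to pay the going price, gets no surplus; everyone else walks away with more value than they paid.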

Sam: Yeah, it’s an important concept. And I mean, the basic fact about market interactions is that, with a few exceptions which are reprehensible, market interactions are voluntary. Now, if they’re voluntary, it means that both people expect to gain, and there’s some gain to be had. And the question about how markets work, or more generally how the rules of the game of society work, has to do with making sure that all of these possible mutual gains are exploited somehow. That means avoiding market failures. And secondly, then finding a way that those gains can be shared between the producers and the buyers and so on.

Sam: So there are two parts to every exchange. The exchange happens because of the potential for mutual gain, but then, there being a mutual gain, the question is who’s going to get it. So for example, in the case you mentioned, the first person who bought the thing benefited a lot; he valued it at a lot more than the price, and so on.

Sam: Now, if the seller can price discriminate and figure out how to charge everybody the maximum that they would pay for it, then it’s not going to be the consumer’s surplus. It’s going to be the seller’s surplus. So that’s just an example to say that this opportunity for common benefit is always mediated by the rules of the game that we’re playing. And very often people institute rules of the game to try to make their slice of the pie larger in ways that make the pie smaller. And that’s an ongoing problem in society. We have to find a way to exploit gains and then to share them fairly enough so that the game can go on.

Jim: Absolutely. In fact, I will confess, back in my hard-ass businessman days, I was a bit more analytical than most. I used to try to educate my folks on the idea of mapping the demand curve, which is exactly that: figuring out how we can price discriminate. It’s not easy, but if you can figure out how to do it, it’s a hell of a good way to make some money, more than you should.

Sam: Well, also, I don’t mean to exculpate you for your bad old days. But if you could have perfectly price discriminated, charging everyone the maximum that they would pay, then even if you were a monopoly, you would have avoided the market failure. Because essentially the reason why a monopoly is a market failure is that the monopolist has to sell at the same price to everybody. And if they’re liberated from that, they’re going to keep on selling down to where the last guy is just willing to pay a penny more than the cost of producing the last good. And then there’s no market failure, but of course, the monopolist gets all the gain.
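A minimal sketch of that point, with invented buyer valuations and cost: a single-price monopolist leaves some buyers who value the good above its cost unserved, which is the deadweight loss, while a perfectly price-discriminating monopolist serves every one of them but captures the entire surplus.

```python
# Compare a single-price monopolist with a perfectly price-discriminating
# one. Buyer valuations and the unit cost are illustrative assumptions.

valuations = [100, 80, 60, 40, 20]   # each buyer wants at most one unit
unit_cost = 10

def uniform_price_profit(price):
    """Profit when every buyer faces the same price."""
    quantity = sum(1 for v in valuations if v >= price)
    return (price - unit_cost) * quantity

# The single-price monopolist picks the most profitable uniform price...
best_price = max(valuations, key=uniform_price_profit)
print("best single price:", best_price,
      "profit:", uniform_price_profit(best_price))

# ...which excludes buyers who value the good above cost but below that
# price; their forgone gains from trade are the deadweight loss.
deadweight_loss = sum(v - unit_cost for v in valuations
                      if unit_cost <= v < best_price)
print("deadweight loss at the single price:", deadweight_loss)

# Perfect price discrimination: charge each buyer exactly their valuation.
# Every mutually beneficial trade now happens, so nothing is wasted, but
# the seller keeps all of the gains.
pd_profit = sum(v - unit_cost for v in valuations if v >= unit_cost)
print("perfect price discrimination profit:", pd_profit)
```

With these numbers the best single price is 60, serving three buyers for a profit of 150 and wasting 40 of potential surplus, while perfect discrimination earns 250 and wastes nothing.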

Jim: That’s the interesting thing, because it’s certainly not good from a public welfare perspective, but it’s very profitable if you can do it. Let’s hop back up to the prisoner’s dilemma a little bit, because you mentioned this idea from Adam Smith about the ambassador and the merchant, and it connects to the work on the prisoner’s dilemma.

Jim: The next step was talking about the iterated prisoner’s dilemma, where instead of just playing it once, you assume a game where people play the prisoner’s dilemma many, many times. And you could then even build into an agent-based model some ideas about wins being used for reproduction, et cetera. And there we find very different results.

Jim: The dominant strategy from a game theory perspective in a one-off prisoner’s dilemma is to always defect, and we find that humans don’t actually do that, but there is a pressure towards it. Under the iterated prisoner’s dilemma, we get very different strategies that will evolve in a simulated ecosystem. And typically, there’s some variation on tit for tat, as you mentioned in passing. Maybe you could talk about that a little bit?

Sam: Yeah, sure. I mean, basically, if the prisoner’s dilemma game is repeated, what it means is that if you defect on me last period, I can defect on you now. So essentially, the iterated prisoner’s dilemma game builds in the possibility of punishing free riders or defectors. And I mean, that was Adam Smith’s argument about the merchant who treats you badly: you and the merchant are engaged in a repeated prisoner’s dilemma game, and you and the ambassador are not. And that’s why the merchant is more likely to be ethical than the ambassador, according to Smith, although of course he didn’t use game theory language.

Sam: But let’s think about how that argument works. The game is going to go on for a long period of time, and I’m thinking, “Well, okay, I could defect.” If instead I cooperate and you cooperate, we both do great for the whole game until it’s over, and that’s fantastic. If we both defect, which is the dominant strategy, the strategy each of us would follow if we were being selfish, well then, we’d get much less.

Sam: Now the temptation is I think, “Well, maybe I can get away with defecting on Jim the first round.” So I’d get a lot on the first round, and you’d get nothing or a little. But then what? Well then, you defect on me. Either you defect the whole rest of the game, or at least you defect once.

Sam: Now, you mentioned tit for tat. Tit for tat is a fantastic strategy. The variant of it that seems to be a really winning strategy is called nice tit for tat. The tit for tat part is easy to understand. What’s the nice? The nice is: cooperate on the first round, open yourself up to being a sucker, just be nice the first round. And then on each round after that, do whatever your fellow player did on the previous round. That’s the tit for tat part.

Sam: Well, that’s a very good strategy. And what happens is, of course, faced with the fact that you may get punished if you defect on somebody, or almost surely will get punished, people will both cooperate in the first round. And then because of tit for tat, they’ll cooperate until the game is over. And that very often occurs in experiments and so on, and a tit for tat strategy playing against other strategies in simulations often does very well.
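Here is a small simulation sketch of nice tit for tat in the iterated prisoner’s dilemma, using the conventional textbook payoffs (temptation 5, mutual cooperation 3, mutual defection 1, sucker 0); the strategies and the round count are illustrative choices, not anything from the book.

```python
# Iterated prisoner's dilemma with the usual textbook payoff values.
# C = cooperate, D = defect.

PAYOFFS = {  # (my_move, opponent_move) -> (my_payoff, opponent_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, opp_history):
    # "Nice" tit for tat: cooperate first, then copy the opponent's last move.
    return "C" if not opp_history else opp_history[-1]

def always_defect(my_history, opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): cooperation throughout
print(play(tit_for_tat, always_defect))    # (9, 14): suckered once, then mutual defection
```

Two nice tit-for-tat players lock into cooperation for the whole game; against an unconditional defector, tit for tat loses only the first round and then refuses to be exploited again.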

Sam: Now, many people thought, “Oh, maybe that’s why we cooperate, because we’re engaged in a repeated prisoner’s dilemma game.” So it isn’t really altruism at all.

Jim: Well, it’s reciprocal altruism, which is a special case of altruism, right?

Sam: Well, let me differ with you a little bit. I think cooperating in a prisoner’s dilemma game is self-interest with a long time horizon. Obviously, if you have a high discount rate, meaning you don’t care much what happens down the road, the repeated game thing isn’t going to work, because you’re not going to care about the future stuff. You’re just going to go for the high win now.

Sam: Now, what you said is quite right. Reciprocal altruism is a term used to describe cooperating in a repeated prisoner’s dilemma game. But in what sense is it altruistic? It’s not at all. I cooperate in the first round. I adopt tit for tat. I have no concern for you whatsoever. I’m just doing the thing that’s best for me.

Sam: So essentially, calling cooperation in a tit for tat game reciprocal altruism was a very unfortunate use of language, which has been very confusing among biologists and others, because it makes self-interest with a long time horizon look like an example of altruism.

Sam: So then we have to think, “Oh, well, maybe hunter-gatherer groups were not altruistic at all. Maybe they were just practicing something like tit for tat.” There is an important theorem in economics and game theory called the folk theorem, which says that this process of repeated play can make cooperation an equilibrium of the game under rather general circumstances.

Sam: Now, let’s think realistically about this. Okay. It’s you and me, and we play a game, and you defect on me. Now, I know you defected on me because I got low payoffs. I didn’t have to see you defect on me. And so, then I defect on you next period, and you’ll shape up. You’ll anticipate that. You won’t even act badly in the first place.

Sam: Now suppose there are two of you. Okay. Then someone defects on me, that is, they don’t cooperate. What am I going to do? All I see is a low payoff; I can’t tell whether it was you or the other guy. And even if I knew who it was, what could I do? The only way I can punish the defector is by defecting myself.

Sam: So I defect then, and then you defect on me. Now, add to this a little bit of realism. Suppose you weren’t the defector. The third guy was the defector. So I now defect to punish whoever it was who defected. Then you notice me defecting and you see, “Oh, Sam’s a defector.” Then you defect on me.

Sam: So the general statement is the following. If you have more than four or five people in a group, and if there’s any noise at all in perception about who really did or didn’t defect, you can’t support cooperation with what’s called private information about who actually did what. And we spend a lot of time in our book with Herbert Gintis, A Cooperative Species, on that, because that’s really the alternative view about what must have been going on among hunter-gatherers. It’s either that they’re all family members, or they’re doing repeated games. And we showed that repeated games will work in dyads, among two people, but that doesn’t work at the level of the group.
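The private-information point can be illustrated with a toy simulation: everyone intends to cooperate, but each member occasionally mis-reads a cooperator as a defector and retaliates by defecting the next round, which others then see and punish in turn. The noise level, round count, and retaliation rule below are invented for illustration; they are not the book’s formal model.

```python
# Toy illustration of how noisy private information undermines cooperation
# sustained only by retaliation, and how the problem worsens with group size.
import random

NOISE = 0.02     # chance of mis-reading one cooperating member as a defector
ROUNDS = 50
TRIALS = 500

def share_of_fully_cooperative_rounds(group_size):
    cooperative = 0
    for _ in range(TRIALS):
        will_defect = [False] * group_size
        for _ in range(ROUNDS):
            moves = list(will_defect)
            if not any(moves):
                cooperative += 1
            # Each member watches everyone else through a noisy channel:
            # real defections are always seen, cooperation is occasionally
            # misread. Anyone who thinks they saw a defection defects next round.
            will_defect = [
                any(moves[j] or random.random() < NOISE
                    for j in range(group_size) if j != i)
                for i in range(group_size)
            ]
    return cooperative / (TRIALS * ROUNDS)

for n in (2, 3, 5, 8):
    print(f"group of {n}: fully cooperative rounds ~ {share_of_fully_cooperative_rounds(n):.2f}")
```

Because the only way to punish is to defect yourself, a single misperception cascades through the whole group, and the share of cooperative rounds collapses quickly as the group grows past a handful of members.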

Sam: And as for the small-family idea, a number of prominent scholars in this area used to claim that hunter-gatherer societies were like a camping trip: you go with your family, right? And I think that view is kind of out. And that, plus the fact that this repeated game argument doesn’t really work for large groups with noisy information, means that something like altruism, genuine altruism, a willingness to pay a cost to help others, is probably part of how hunter-gatherers cooperated.

Jim: Ah, yes. Now we get to probably the meat of the matter. The evidence from these experiments does seem to point to two things. One, that we genuinely like to cooperate. But as you point out in the book, we also genuinely like to punish. And you did some very interesting work with another one of these games, the public goods game, both with and without punishment. Maybe you could talk about that a little bit.

Sam: This is really fascinating. And to tell you the truth, I’ve been working on this for decades, but I don’t really understand it fully. So let’s think about a public goods game. A public goods game is just a prisoner’s dilemma played with many people, more than two people.

Sam: So a common public goods game would be the following. Everybody in the group can put in some … I mean, suppose you’re given some chips or something. You get 10 each, and you can put into the pot as many as you want. After everyone’s put them into the pot anonymously, the experimenter doubles what’s in the pot and then distributes it equally to everybody.

Sam: Well, it doesn’t take too long to figure out that basically what you’d like is for everybody else to put all 10 of their chips in the pot while you put nothing in. Because if you put your 10 in, the experimenter doubles them, but the proceeds get split among the whole group, so in a group of four, say, you’re only going to get five of those back. In that situation, we’d expect people to defect, that is, not to cooperate. And that’s true. They don’t, in general, not in a single one-shot game.

Sam: But now suppose we change the rules of the game. These are almost always done on computers. So in the first round, we all contribute or not, do whatever we want. Then it shows up on my screen, A, B, C, D, E, F, G, however many people we are, how much each person contributed. I can never find out their names, but I can see how much they contributed. And then a question pops up on the screen. We would never use the word punish, but it’s effectively, “Would you like to pay something in order to reduce the payoffs of any member of your group?”

Sam: Okay. You run down the list, and here’s a person who contributed zero. They’re the ultimate free rider. So you take money out of your pocket, or out of your account, and you pay to have that person’s payoffs docked. Okay. Once you do that, game changer. People cooperate like crazy. Instead of unraveling, like the process I described before, where people get upset that others are defecting on them and contribute less and less, they cooperate more.

Sam: Why is that? Well, what happens in the early rounds is that people avidly punish the free riders. And by the time you get to the end of 10 rounds, everybody’s cooperating at a high level. There’s almost no punishing going on because there’s no one to punish, and you sustain cooperation at a high level.
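A rough sketch of that contrast, in the spirit of public goods experiments with a punishment stage: the behavioral rules here (drift toward the group minimum without punishment, contribute more after being punished) and all the numbers are invented to show the qualitative pattern, not fitted to any experiment.

```python
# Tracks only average contributions, not earnings, since the contrast being
# illustrated is unraveling versus sustained cooperation.

GROUP_SIZE, ENDOWMENT, ROUNDS = 4, 10, 10

def final_average_contribution(with_punishment):
    contributions = [10, 8, 5, 0]          # heterogeneous starting behaviour
    for _ in range(ROUNDS):
        average = sum(contributions) / GROUP_SIZE
        if with_punishment:
            # Low contributors get punished by the others and raise their
            # contributions next round; high contributors keep contributing.
            contributions = [min(ENDOWMENT, c + 3) if c < average else c
                             for c in contributions]
        else:
            # Without punishment, contributors get annoyed at the free riders
            # and drift down toward the lowest contribution in the group.
            lowest = min(contributions)
            contributions = [max(lowest, c - 2) for c in contributions]
    return sum(contributions) / GROUP_SIZE

print("average final contribution without punishment:", final_average_contribution(False))
print("average final contribution with punishment:   ", final_average_contribution(True))
```

Under these made-up rules the no-punishment group unravels to zero within a few rounds, while the group with a punishment option converges to everyone contributing their full endowment, which is the qualitative pattern described above.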

Sam: Now, I think that’s a better way to understand how society really works. That is, like I said and like Christopher Boehm said, the person who isn’t contributing gets, essentially, a message that he’d better shape up or he’s going to be punished. There’s some kind of multilateral pressure, not necessarily top-down: your neighbors, your family, your workmates, your friends say, “Well, you’ve got to shape up.” And that’s what’s going on in that game.

Sam: Now, here is the puzzle. In the first game, the public goods game, you don’t contribute if you’re selfish. You hold back your funds. And that happens fairly commonly. And as I say, it unravels and at the end almost nobody’s contributing anything. When punishment comes in, people behave differently.

Sam: But now, focus on this. Punishment is a public good. Just like contributing to the pot in the first place is a public good, punishing the other guy is a public good. Because now, suppose we’re in a group and there’s a bunch of defectors. I look down the list, and I see a couple of zeros. And I’m thinking, “Well, should I reduce my account so as to punish somebody?” And I say, “No, I’ll let Jim do it.” Right? Let’s [inaudible 01:23:22] do it, so I can free ride on your punishment just the way I would free ride on your giving to the public good.

Sam: Now, what’s amazing is that that doesn’t happen, or it doesn’t happen too much. It doesn’t happen enough so that the punishment option is defeated. That’s the puzzle that I don’t understand. People who in a public goods game, the standard public goods game, act selfishly and don’t contribute, when it comes to the punishment stage, they go crazy punishing the low contributors. Figure that one out.

Sam: Now, I see another possibility. Human beings have developed, perhaps genetically, so as to be very angry about violations of social norms. So I’m perfectly willing to be selfish. I mean, I’ve been that way through a lot of my life. I go shopping, I’m not thinking about anybody else, I’m just trying to get a good buy, and so on. And there’s nothing wrong with that.

Sam: So you have this option in the public goods game, okay, maybe they told you some story in which the word public good occurred, but you can be selfish about that. But somebody free riding on me, I’m going to go to town on them. And I mean, that’s a very, very robust finding, that we’re avid punishers even if we’re not avid contributors to the public good.

Sam: And that’s good news for society, because what it means is that there are mechanisms, as I said, mostly informal, operating outside of government and outside of markets, just the raised eyebrow, the sarcastic remark, shunning somebody, and so on, that get us to actually behave reasonably well.

Jim: Yeah. But to go full circle, that’s back to Chris Boehm’s operating system for the hunter-gatherer band, right?

Sam: That is exactly right. In fact, that’s exactly how I think hunter-gatherer bands worked. I think it’s useful to think of this as a typology, even though three types is certainly too few. Of course, there’s a lot of heterogeneity; people are very different. But suppose there are three kinds of people. There are unconditionally selfish people, there are unconditional altruists, and then there are people who I call civics. They are civic-minded.

Sam: The selfish people always take as much as they can get. The generous people share 50-50 when they meet somebody. And the civics, well, they’re just like the generous people. They share 50-50, but if they meet somebody who’s not sharing with them, they get on the phone, and they call all the other civics, if there are any in their group, and they say, “This guy just didn’t share with me.” And then they beat on him or whatever they do.

Sam: Now, that’s an interesting model. Because what it says is that you can have a society in which there are mostly civics, and any time a selfish person comes along, they don’t do very well, because the first time they try to take something, all the rest of the civics gang up on them, and that’s it for them. But if there are essentially no civics in the population and one or two civics come in, they’re going to lose, because they’re always going to be calling up maybe the one or two other civics, and they’re going to be fighting with whoever has grabbed something. And they’re going to occasionally lose those battles.

Sam: So in that kind of society, you could have essentially, I’ll call it a world according to Rousseau, the French philosopher or Swiss, in which there’s a community of people who are responsible for each other and for maintaining order in a multilateral, decentralized way. That’s the civic society.

Sam: But there’s also the Hobbesian society in which you have the selfish people essentially just grabbing and fighting with each other, grabbing from each other and occasionally exploiting a sharer. And that also can be an equilibrium, and you can see the civics are going to have a hard time getting in there because there are going to be just too many selfish people to punish.

Sam: And I think that’s kind of a reasonable model of our distant past. And what I think is probably true is that most hunter-gatherer societies were civic. They wouldn’t have worked very well if they weren’t. Occasionally, they’d make a conversion to this kind of war of all against all, the Hobbesian idea. But then those groups would be kind of malfunctional or dysfunctional. And they would come up against a civic group in some kind of contest, and then that would be it for them, because they wouldn’t win the contest, because they basically wouldn’t be cooperating.
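Here is a toy numerical version of that three-type story, with invented payoffs: random pairwise matching among selfish grabbers, unconditional sharers, and civics who share but collectively punish grabbers, where a grab against a civic fails roughly in proportion to the civic share of the population. None of these numbers come from the book’s formal models; they are only meant to show that two very different equilibria can coexist.

```python
# Toy three-type model: selfish grabbers, unconditional sharers, and civics
# who share but gang up to punish grabbers. All payoffs are invented.

PIE = 10.0

def expected_payoffs(p_selfish, p_sharer, p_civic):
    """Expected payoff to each type under random pairwise matching."""
    backup = p_civic                      # chance a civic victim gets backup
    # Selfish type against each kind of partner:
    vs_sharer = PIE                               # grabs the whole pie
    vs_selfish = PIE / 2 - 2                      # costly fight over the pie
    vs_civic = (1 - backup) * PIE - backup * 3    # punished if civics gang up
    selfish = (p_selfish * vs_selfish + p_sharer * vs_sharer
               + p_civic * vs_civic)

    # Sharers split 50-50 with everyone but are expropriated by the selfish.
    sharer = (p_sharer + p_civic) * PIE / 2

    # Civics split 50-50 too, pay a cost of 1 whenever they must punish,
    # and keep their half only if the punishment coalition succeeds.
    civic = ((p_sharer + p_civic) * PIE / 2
             + p_selfish * (backup * PIE / 2 - 1))
    return selfish, sharer, civic

print("mostly civics: ", expected_payoffs(0.05, 0.05, 0.90))
print("mostly selfish:", expected_payoffs(0.90, 0.05, 0.05))
```

With these assumed numbers, a mostly-civic population gives civics the highest payoff and the selfish the lowest, so grabbers cannot invade; a mostly-selfish population reverses that, so a few civics cannot get established. Notice also that when civics are common, the unconditional sharers do almost exactly as well as the civics, which is the drift problem raised a little later in the conversation.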

Sam: And that’s kind of a modeling way of describing why I think a hunter-gatherer society could have sustained itself as a cooperative and consensus-based society. And I think it’s a big, big mistake to think that our ancestors were primarily altruistic. It’s much too simple. If we’d been just altruistic, I think it wouldn’t have worked. We had to be willing to engage in the punishment activity, as you mentioned in the public goods game.

Sam: Again, I mean, it’s kind of embarrassing for somebody who has not only appreciated ethnography but spent a fair amount of time studying and visiting and being part of these societies, it’s a travesty to say we’re going to describe this as a three-type population in the model. But I think it gets at what’s really going on, and it’s very nice, because what it suggests is that maintaining order is an essential part of a decentralized society, and it can’t be done by altruists.

Jim: I actually love that. And probably we should wrap up here. We’ve kind of gone on. I wish we had more time, but it’s been great. And again, this is something I find interesting, because I’m involved in some radical social change organizations. And unfortunately, from my perspective at least, some people, what I’d call utopians, think that we can live in a world of all altruists. But I don’t think that works.

Sam: Let me tell you one thing about that. I didn’t say it before because it’d be too hard to understand, but what unravels the civic equilibrium is not the selfish ones. It’s the sharers. Because if all you have around is civics, then when a sharer comes along, they’re behaviorally identical to a civic, because there’s nobody grabbing their stuff, and so nobody’s fighting.

Sam: Well, if through genetic drift or migration or whatever, enough of the altruist ones, the unconditional sharers, get in the population and the number of civics gets small enough, then all of a sudden the selfish ones can take over by exploiting the altruists. There get to be a lot of the selfish ones, and at that point there aren’t enough civics around to police them.

Sam: So it’s the altruists who unravel the civic society. I mean, I don’t want to say this on the air. I hope you’re not going to …

Jim: Well, this is on the air. We’re still on the air. I haven’t wrapped it up yet. Go for it. I think this is important and true, by the way.

Sam: Look, altruism is essential to a decent society, but it has to take a civic form of being willing to uphold social norms with respect to others, not simply being unconditionally generous towards others. That is, I don’t think that kind of unconditional altruism is the model. I mean, I appreciate altruistic people very much, but I don’t think that’s the behavioral basis, by itself, of a good society.

Jim: I strongly agree. Let’s wrap it right there. Thanks, Sam, for a wonderful show.

Sam: Yeah. I enjoyed it a lot.

Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.