The following is a rough transcript which has not been revised by The Jim Rutt Show or Kristian Rönn. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Kristian Rönn. Kristian’s the co-founder and former CEO of Normative, a tech company that automates companies’ carbon accounting. He has a background in mathematics, philosophy, computer science, and artificial intelligence. Before he started Normative, he worked at the University of Oxford’s Future of Humanity Institute on issues related to global catastrophic risks. Welcome, Kristian.
Kristian: Thank you. Thanks a lot for having me.
Jim: Yeah, I’m really looking forward to this conversation. As I was reading the book, it was like, whoa, whoa! These are like a lot of the things that listeners to our podcast have been thinking about and I’ve been thinking about, et cetera. Enjoyed it a lot, learned some things, added some notes, sent some pieces of it around to people and things of that sort. For you GameB people out there, I’d highly recommend this book.
The area that Kristian explores is closely related to the GameB concept of the Multipolar Trap. And in fact, I was a little bit surprised he didn’t mention the multipolar trap, but then I did a search afterwards and found that it is in the acknowledgments, where he talks about our own Daniel Schmachtenberger, Liv Boeree, et cetera, and we talked about it a little bit before showtime. And Kristian, tell us why you did not use the word Multipolar Trap in the book.
Kristian: Yes. I think there was a couple of words that I could have used. So I wasn’t actually familiar with the term GameB until after I handed in the manuscript and discovered this whole community around it, which is quite amazing. Multipolar Traps and the word Moloch I had heard before, however.
But ultimately, one of the reasons why I decided to go with words like Darwinian Traps and Darwinian demons instead is because I wanted to make the term somewhat more descriptive and really put emphasis on the natural-selection origin of the phenomenon, if you will, or the evolutionary game theory origin of the phenomenon.
But as I also said on Liv’s podcast, since I started writing the book, Liv has given a brilliant TED Talk popularizing the concept of Moloch a little bit more. So if I were to rewrite it all today, I probably would’ve leaned towards using that concept, because I don’t see any use in fragmenting concepts too much.
Jim: Gotcha. All right, so for GameB listeners out there, we’re going to be talking about something very closely related to, and in many ways overlapping, the concept of the Multipolar Trap. Well, let’s get started. You start off the book with the Parable of Picher, Oklahoma. Why don’t you tell us that story?
Kristian: So essentially the story is all about this mine in Picher, Oklahoma that got started at the beginning of the 20th century. I don’t remember the exact dates, but Picher became the fastest growing town in the US, more or less. It created a lot of jobs. It created a lot of wealth. And newspapers at the time remarked that Picher more or less had it all. However, what happened, due to how mining was conducted back then, is that it created all of these hazardous chat piles.
People didn’t care about health and safety back then. So these piles just got created with all of this sludge material that was highly, highly toxic. And obviously, as you might imagine, if you build these toxic piles close to a city, children start playing there, people breathe in the dust, and people all of a sudden started to die left and right. So what was a huge success story that created a bunch of jobs ended up being a ghost town. They had to close the town down.
You’re not allowed to live there anymore because of health reasons. It just went from this fantastic thing full of opportunities to this horrible thing that killed a bunch of people and turned a whole town into a ghost town. And it’s a story that I keep referencing in the book: none of the executives who started the whole operation were evil people, they didn’t want to do active harm, but it happened anyway because of the profit-maximizing motives of the mining operation. That’s, long story short, what happened in Picher.
Jim: Gotcha. Oh, by the way, I forgot to mention the title of the book: The Darwinian Trap: The Hidden Evolutionary Forces That Explain Our World (and Threaten Our Future). And as usual, we’ll have links to the book and anything else we discuss here on the episode page at jimruttshow.com. I think one of the key ideas in the Picher story is that, as you said, these people didn’t necessarily have bad intent at all, and yet bad emergent effects can occur. So why don’t we talk a little bit about that, what you call the Darwinian demons?
Kristian: Yes. So Darwinian demons, I define it as a selection pressure that makes it adaptive for any type of entity or agent or organism to behave in a way that destroys the welfare or destroys the environment around them. So it’s essentially a selection pressure, like in natural selection, that compels people to act in a bad way. And to just delve a little bit deeper and go through some of the basics of natural selection.
I mean, the whole idea with a selection pressure is this. Take, for instance, rats or mice that are living in the forest. Then you have predatory birds flying around, and if they see a mouse, they catch it and eat it. So then it becomes adaptive for the mice to camouflage themselves. Because if you have the camouflage, then you’re less likely to be killed.
So one could say that there exists this selection pressure in the environment, in the form of these predatory birds, that makes it adaptive for the mice to change their colour. And equally, in the human-built world, it is for instance adaptive for enterprises to maximize profit. And that can compel enterprises to do bad things. You might not care about environmental policies, or you might not care whether there is slave labor down the value chain, because it’s adaptive for you not to care about that by default, at least unless there is regulation around it.
But the same concept can be applied to the workplace, how we do hiring. It can be applied to the natural world. It can be even applied to molecules or civilizations and the society that we live in. So the idea of this selection pressure that makes it adaptive to behave in a bad way is quite general indeed.
Jim: Yeah, because unfortunately, the world and the ecosystems that we live in are very high dimensional, but the evolutionary pressures tend to be relatively low dimensional. Let’s take an animal in the forest. Frankly, the selection pressure is only one thing: does it live long enough to reproduce, period? And how it gets there doesn’t really matter, whether it’s good or bad for the rest of the ecosystem.
Yet the ecosystem itself is very broad. And fortunately in biology, the ability of biological entities to disrupt systems is relatively limited most of the time. So the emergent effect of the food web and all that sort of stuff is at least metastable. While we have big collapses from time to time, where keystone species go extinct and a bunch of other stuff goes with them, in general the system keeps going.
Now, in our world, unfortunately, as you’ve pointed out, we have lots of power, especially with big, big companies and governments, et cetera. Let’s think of a company and the impact it has on all of its employees, and all those employees have impact on their communities and their families. So there’s a vast network of high-dimensional impacts from any given company.
Picher and its lead mining being a fine example. And yet, as you point out, the life force of evolution for the company is profit. And at first you say, “Well, why are they so locked on profit? Isn’t there enough profit?” And maybe you have wise management that could say, “All right, we’re making enough money. We can spend some money on remediation of the lead mining,” et cetera. Well, unfortunately, the system around money-on-money return is deeper than that.
Will the investors invest in a company that has a lower return? You’re an entrepreneur, I’m an entrepreneur and a businessman, and there’s that goddamn pressure of meeting the hurdle rates of the investors, and you’re in competition with every other investment vehicle in the world. So the ones who pay the least attention to the side effects produce the highest returns and gather the greatest amount of investment.
Kristian: Exactly. I mean, if you are not going to do it, someone else is going to do it. And then you will end up losing, because it is adaptive to behave in such a way where you are profit maximizing, unless you fundamentally change the rules of the game. You said earlier as well that natural selection is metastable. We are probably going to discuss that later on, but that’s something where I’m not actually so sure. Maybe we should put a pin in that for now.
Jim: Yeah, you do give an interesting perspective on that. We’ll get to it. We’ll talk about your particular catastrophic view of that, and maybe how lucky we are. I don’t know. We can talk about that. Can you give some examples? It’s useful for the audience to hear some real-world examples. One that I happened to have some personal experience with was the Wells Fargo aggressive sales practices.
I’ll let you tell the story, but I’ll give my own personal story first. My mother was one of these Depression era kids, grew up very poor, but she and my father were relatively successful in life. And she was very, very good at managing her finances after my father died. Even before he died, she always managed the finances, but she started having dementia and I had to take over her finances when she was like 83.
And I discovered she had four Wells Fargo accounts for no good reason. And I went and closed three of the four down, and then later it all came out. So tell us the story about… This is really a fine example of an internal, endogenous set of narrow dynamics that just had terrible effects.
Kristian: Yeah, exactly. So I mean, fundamentally, in any type of system, you might have a set of metrics that determines your survival in the system. That’s a general way of putting it. And in the case of Wells Fargo, they had very aggressive sales targets. They had a success metric: you need to create new customer accounts, and the bonuses were tied to the creation of those accounts. And you might even get fired if you don’t recruit enough new customers.
So it created these incentives in the bank. And I don’t remember the exact names of the people, but essentially what happened is that they started to create accounts for customers without the customers’ consent, just so they could optimize for that particular metric. Eventually this got exposed and there was a huge scandal around it. And I think it’s just such a brilliant example of… You have so many examples of this.
You have Wirecard. You have Enron. You have, I guess, Theranos. The whole enterprise landscape is filled with these more or less scams or fraudulent practices. What is interesting, to zoom out a little bit from Wells Fargo in particular, is that in the corporate world, you tend to window dress all of these practices. So accounting fraud becomes “aggressive accounting”; mass layoffs you might call something else, like “right-sizing,” for instance.
So you invent all of these words within the corporate world to make the behavior and the act that you’re committing more permissible. Breaking the law, for instance, might be referred to as the cost of doing business. I mean, if the fines are lower than the revenue that you will generate from doing what you’re doing, it’s just the cost of doing business, right?
Jim: Funny you should mention that. That insight was the formation, the original origin story of GameB, where Jordan Hall and I were having a conversation. He’s a generation younger than I am, and we were comparing the business ethos when we each came into the business world, me in 1975, him in 1994. And we agreed that by 1994 the moral aspects of business had really been reduced to: is an activity arguably legal and profitable? If so, then you must do it.
But this conversation was in 2007. And we then agreed that by 2007, it had degenerated even further to: it may be illegal, but is the risk-adjusted cost of getting caught greater or lesser than the benefits? We have some amazing examples. How many times have Facebook and Google been fined billions of dollars? Multiple times each. Certainly in the hundreds of millions and in the billions, I believe. And yet they keep doing this shit. Why? Because it fucking pays, right?
Kristian: Exactly. And the thing is that it’s so easy to name and shame and blame the CEOs. The thing is that as the CEO, and I know because I’ve worked as a CEO, I’ve built a startup, you’re under this immense pressure from your board to maximize revenue. And if you don’t do that, you might get replaced by someone else. But then the board, I mean, they’re under massive pressure from their investors, and their investors are under massive pressure from their investors.
So we have this network of pressure that is pushing in this direction. And if you don’t play the game, you’re going to get kicked out. So that’s why one of the things that I hope to change the narrative around with the book is that whole bad apples narrative that, okay, this is just a bad apple over here. This person behave…
Jim: No, it ain’t a bad apple, it’s a bad system. You referenced the famous saying: it ain’t the player, it’s the game, right?
Kristian: Yeah, exactly, exactly.
Jim: The way the GameB synthesis on this works is that we say it’s the pervasive force field of money on money return at every scale. Because as you point out, it’s a whole food chain. The funds of funds are allocating their funds according to the returns of venture funds. The venture funds are getting LPs based on their returns. The companies are getting their investments based on at least estimates of what their returns are going to be.
So this whole stack is pervasively formed by relatively short-term, three-to-ten-year money-on-money return. And what we have as a society is the emergent result of that after we have stripped away the legacy morality. Because when I came into business in ’75, there was still a fair bit of morality in American business. I worked for a quite good company with very ethical people, and there were things they would not do even if they were profitable.
But unfortunately, the neoliberal revolution of the ’80s and onward has basically stripped away any moral restraints, and we’re strictly at money-on-money optimization. And here’s the advice I give in my little practice of CEO advising. One of the most important things you do as a CEO, if you’re at a company that involves sales at least, is your sales compensation plan.
And I always warn people with this very cryptic advice, which is you will get what you pay for whether you want it or not. Whatever your sales compensation plan is is what your salespeople will do, even though that may not be what you actually want. So be fucking careful when you write your sales compensation plan, right?
Kristian: Oh, definitely. And you can have all sorts of perverse incentives between sales and marketing and different functions within an entity, so definitely.
Jim: And if we extend that, we could say that we’ve organized society the same way. The sales comp equivalent is that we have decided that short- to medium-term money-on-money return is the only thing that matters. And the result we’ve gotten from that has proven that’s probably a bad fucking core idea. However, this phenomenon is not just about economics. I thought of a very interesting contrasting example, and this is your classic arms race, which is closely related to the Multipolar Trap, and that is beauty filters online. Tell us about that.
Kristian: Yeah, this is something that my friend Liv Boeree talks a lot about as well. So essentially, if you want to be an influencer on social media, and if you poll teens, all the guys want to become YouTube celebrities like MrBeast, and all the girls want to become Instagram famous or TikTok famous. So it’s certainly an artifact of our time. But in order to win in that game, I mean, especially if you want to become a female influencer, you need to be good-looking, because there is an algorithm…
Jim: Well, not necessarily good-looking, but this fat lip bullshit that has become a thing. I guess it’s kind of like one of these bizarre sexual selection things. It’s ugly as fuck, but people seem to want to be doing it.
Kristian: It’s all down to the social media algorithms again. I mean, they are built to maximize for retention. So then as an influencer, you need to go wherever the algorithm brings you. As a matter of fact, if you are more good-looking, then people will engage more with that content. The thing is that looks is more or less this static thing that you’re born with unless you enhance it in various ways.
And I mean, makeup is such a way in which we have enhanced our looks for thousands of years, more or less. And then as of recently, we had plastic surgery, so we’re getting more and more of that. But on the internet, especially with generative artificial intelligence, you have these beauty filters where you can just take a photo of yourself and then you become immediately the Hollywood version of yourself with perfect skin, perfect proportions and everything.
And the thing is that in order to survive in this landscape, you need to apply those beauty filters. Because if you don’t do it, someone else is going to do it. And then the algorithm will gear all of the views and all of the engagement towards those accounts. And it might seem a little bit benign, like, why should we care? Photoshop has been around forever. The thing is that this is just a click away.
Photoshop required significant expertise, but now everyone can do it. And you can actually see it in the mental health statistics that people are getting affected from it. And it turns out that when plastic surgeons were being polled, the most common request that they got is, “I want to look like my AI beauty filter.” And some of the plastic surgeons had to say, “We can’t do that. It’s anatomically impossible. You can’t have a nose like that. You can’t have a chin like that.”
So all of a sudden, we’re seeing this selection pressure to look in a way that is anatomically impossible. And that dissonance does have an effect on mental health for younger people, especially.
Jim: Particularly women, as you say. At least in the United States, the number of young women, say 15 to 25, with mental health issues is staggering. It might be 40%. It’s like, what the fuck, right? It’s kind of crazy. But I suppose if you’ve defined yourself by your Instagram image and you’re looking at it all day, and then you look at yourself in the mirror, that is going to produce some kind of dissonance.
And of course, now you know why The Jim Rutt Show is audio only. As my father said, “Boy, you got a face made for radio.” But I have a good voice, so voice only is my competitive advantage. I’m not looking at those internet beauty filters to see if they can make me look like Brad Pitt or Arnold Schwarzenegger or something. It would have to be a hell of a filter, is all I can say.
Kristian: Soon in the future, we will both have real time beauty filters and we won’t be afraid to show our faces to real people in the real world. Maybe we’ll head in that direction. I guess we’ll see.
Jim: That’s just not good. Now, you give some other everyday examples, and one that I’m interested in, because I’m involved with science governance a fair bit, is the perverse incentives that the scientific community, the research community face.
Kristian: Yes. I want to point out before I go here that I think the scientific method and the institutions of science are the most truth-convergent things that we have around. You should trust scientists more than anyone else on the margin. I want to put that caveat out here so that what I’m about to say isn’t used for science denialism or anything like that. Science is really, really important and science works, but that’s why fixing these flaws is so important. So what are those flaws?
So scientists, surprise, surprise, like anyone else, in order to survive need to put food on the table. They need to get paid, and you typically get paid through grants or donations. So how do you optimize for that at the end of the day? There are a couple of metrics that have started to evolve within academia. One of them is the idea of an h-index, or just the pure number of citations.
So if you get cited a lot by other influential people who are cited a lot, you could think about it a little bit like the academic version of Google’s PageRank algorithm, more or less. But that means that this metric, like any other metric, becomes susceptible to hacking. You can essentially get a good metric without doing all of the hard work. And there are various ways of doing so, ranging from kind of benign to more or less fraudulent.
So one thing that I think some scientists can identify with is that when you collect a bunch of data, you will have outliers in the sample. And obviously with those outliers, something must be wrong with the measurement device, because the data looks too funny. So let’s just remove them, and then we can get closer to a statistically significant result. Or if you have a lot of parameters in the model, you could do p-value hacking.
Jim: Oh yeah, that’s the worst one.
Kristian: That means that you essentially slide the dials to get a “statistically significant result.” And if you have enough of those parameters, you can make more or less anything statistically significant.
Jim: Or at least you can find something that’s statistically significant. You can’t make anything statistically significant, but you can find something that is, right?
Kristian: You can find something, exactly. No, that’s a good point. So you can find something statistically significant and pass the threshold of getting published in a major journal somewhere. Then if you go down the ladder to the more fraudulent things, there have been a lot of reports lately, a lot of them thanks to AI, where people have analyzed the images in journals to detect: has someone used Photoshop here?
There are a bunch of scientists who have been caught Photoshopping their images and essentially fabricating evidence in their papers in order to publish their findings in a prestigious journal and get a lot of citations. And then there’s also the question around… So this climate has more or less created a replication crisis, where a lot of studies and academic results don’t replicate.
So you might read in the media, and media tend to sensationalize things as well because they have a selection pressure to get as many clicks as possible and optimize for ad revenue. So therefore, all of their science headlines need to be clickbaity. So you might read contradictory things. Oh, chocolate is good for health, or chocolate is bad for health, or wine is good for health and wine is bad for health. So it creates ultimately a landscape where people are quite confused.
And there is also the incentive to create new and exciting findings rather than doing the boring, backbreaking work of trying to replicate someone else’s research. I mean, we need more of that in general, but that is not properly incentivized in the scientific community right now. So I guess I gave you a plethora of different things that are wrong, but they all emerge from the perverse incentives of how we set up the scientific institutions and the funding of science and so on.
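A minimal Python sketch of the multiple-comparisons effect Kristian is describing (illustrative assumptions, not code from the book): if you test enough unrelated parameters against pure noise, a few will cross the p < 0.05 threshold by chance alone.

```python
# Illustrative sketch: many null comparisons yield spurious "significant" results.
import math
import random
import statistics

random.seed(0)

def two_sample_p_value(a, b):
    """Rough two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

n_parameters = 40   # unrelated outcomes we "measure"
sample_size = 30
hits = 0
for _ in range(n_parameters):
    group_a = [random.gauss(0, 1) for _ in range(sample_size)]
    group_b = [random.gauss(0, 1) for _ in range(sample_size)]  # same distribution: no real effect
    if two_sample_p_value(group_a, group_b) < 0.05:
        hits += 1

print(f"'Significant' findings out of {n_parameters} null comparisons: {hits}")
# With 40 comparisons at alpha = 0.05, about 2 spurious hits are expected on average.
```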
Jim: This is a good time to introduce an important idea that you talk about in the book, Goodhart’s law and the h-index. Maybe give us just a quick take on what the h-index is and what Goodhart’s law is, and why the two interact in a not-so-good fashion?
Kristian: Yeah. So the h-index, I don’t know the exact details of how it’s calculated, but it’s based on citations, and it depends on who you are cited by. Is that another successful scientist? And whenever you go to Google Scholar or other academic databases, you get assigned, as a scientist, this h-index. And that’s where Goodhart’s law plays a role. So Goodhart’s law is the observation that whenever a particular metric becomes an end in and of itself, it ceases to be a good metric.
And the reason for that is that in a lot of situations, what we actually value is hard to distill into one single metric. And it’s hard to do that in a way where it isn’t fundamentally less expensive to hack the metric than to do the actual work the metric is supposed to represent. So going back to enterprises again, one metric for an enterprise might be the ESG metric, for instance. So how does Goodhart’s law play a role in the ESG metrics of businesses?

Well, the ESG metrics are supposed to measure how sustainable or how responsible a business is. The thing is that in order to optimize for that, instead of actually reducing carbon emissions or actually being decent to your employees, you can just produce piles and piles of policies and paperwork. You can have an environmental policy, a health and safety policy, a policy for this and a policy for that. And that is fundamentally a lot cheaper than doing the actual work. And that’s how you get a good ESG metric without having to put in the hard work. And it’s the same thing when it comes to the h-index in academia. It might be a lot cheaper to engage in a little bit of p-value hacking here and there, removing a data point here and there, to get more citations and optimize your h-index than it is to make the fundamental discoveries that would actually deserve all of those citations.
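A tiny Python sketch of the dynamic Kristian describes; the actions, costs, and scores below are made-up assumptions for illustration only. Once an agent optimizes the proxy metric directly, gaming the proxy beats doing the real work whenever gaming is cheaper.

```python
# Illustrative Goodhart's-law sketch: optimizing a proxy metric versus the true goal.
# Each action has a cost, a gain in the true goal, and a gain in the proxy metric.
# "cut_emissions" is expensive real work; "write_policy" is cheap metric-gaming.
ACTIONS = {
    "cut_emissions": {"cost": 10.0, "true_value": 5.0, "proxy_score": 5.0},
    "write_policy":  {"cost": 1.0,  "true_value": 0.0, "proxy_score": 2.0},
}

def spend_budget(budget, objective):
    """Greedily pick whichever action gives the most of 'objective' per unit cost."""
    true_total = proxy_total = 0.0
    while budget > 0:
        best = max(ACTIONS.values(), key=lambda a: a[objective] / a["cost"])
        if best["cost"] > budget:
            break
        budget -= best["cost"]
        true_total += best["true_value"]
        proxy_total += best["proxy_score"]
    return true_total, proxy_total

for objective in ("true_value", "proxy_score"):
    true_total, proxy_total = spend_budget(100.0, objective)
    print(f"optimizing {objective:<11} -> true goal: {true_total:5.1f}, reported metric: {proxy_total:5.1f}")
# Optimizing the true goal yields 50/50; optimizing the metric yields zero real value but a metric of 200.
```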
Jim: Yeah, indeed, indeed. Let’s now move on to some of the mechanisms beneath this. One of the more interesting ideas that you put forth is: why isn’t everything cancer? Why don’t you use that as a jumping-off point for a pretty long chat about life, how it’s in some ways surprising that everything isn’t cancer, and how nature has dealt with all this stuff?
Kristian: Yeah, so that’s a great question, and that’s a question that I’ve asked myself from an early age as well. Perhaps we will go into it a little bit more in detail, but I played around with a lot of evolutionary simulations as a kid. I saw that the defectors tended to win in those simulations, meaning that the “Darwinian demons” won. So if you adapted your behavior in a way where you defected, or screwed over everyone else, or behaved like a psychopath, you tended to be rewarded. But if that were really the case, then any type of population would just converge towards everyone being defectors and everyone being an asshole. So what are the assholes of the human body? I think of myself as this being, this entity, but actually I’m just a colony.
I’m a colony of cells that are collaborating and cooperating, and the defectors among those cells are called cancer. I mean, they don’t care about the well-being of the organism. They just want to replicate as quickly as possible. So then the question is, if defection is evolutionarily advantageous, why isn’t everything cancer everywhere? How is complex life even possible? Because we’re this layer upon layer upon layer of collaboration: molecules collaborating to create our metabolism, genes collaborating to create a healthy genome, cells collaborating to create an organism, organisms collaborating to create a society. So it seems quite perplexing how that is even possible.
So that’s what I deep dived into when I asked the question, why isn’t everything cancer? And the simple answer is that what is adaptive for the organism as a whole is not necessarily the same thing as what is adaptive for the individual cells. An individual cell might have incentives to defect in some way, because then it can replicate really, really quickly and monopolize resources in the body. But for me as an organism, it’s adaptive for that not to happen. So in order to have multicellularity at all, my body has evolved various mechanisms to police that sort of rapid growth and uncontrolled replication of cancer cells, to regulate so that resources are somewhat equitably distributed between cells in the body, and to regulate in a way where different cells perform different functions, so essentially division of labor.
So I think looking at how, layer upon layer, life has slain all of these Darwinian demons in the past can hold the key for us to figure out how we slay the “global cancer,” and the global cancer here is uncontrolled power maximization by nation states or various groups, or uncontrolled profit maximization by enterprises, and so on.
Jim: This is actually quite interesting, particularly when you think about stages in emergence, right, where cancer is not an issue if you’re a single-celled entity. In fact, you could say that in some sense a bacterium is kind of a distributed cancer, and it kind of just works, right? But once you become multicellular, and multicellularity has evolved multiple times, but the one that led to us is the one in the Cambrian explosion, where animals essentially learned how to orchestrate many, many cells into quite effective beasts within a period of 5 million years, a blink in geological time.
All the phyla of life we have today except one, which is quite amazing, the things that led to us, existed within 5 million years of the invention of this new kind of multicellular technology. And this multicellular technology had to have as part of it a whole bunch of technologies that basically defeated the game theory of cancer. And so, while cancer exists, it’s notable that it hits humans mostly after we’re done reproducing, right? Now, there are some exceptions to that, and some very tragic ones. This arms race between the technology of multicellularity and the game theory of cancer has in general been won, or at least, there’s of course survivorship bias here: those who have survived have had these quite complex regulatory systems that allow them to out-compete the game-theoretical tendency for cells to just go crazy with self-replication.
Kristian: Exactly. So essentially, what makes me possible as an entity, despite me being a colony of all sorts of bacteria and my own cells fulfilling different functions, is regulation. We regulate and police defectors and punish those defectors. If it weren’t for that, I wouldn’t exist as an entity. And I think we can see the parallels in today’s society. For instance, let’s take the Baltic Sea. It’s the sea next to Sweden, the Baltic states, Finland and so on, and it’s one of the most polluted seas in the world, and the fish have more or less disappeared. And it’s a classical commons problem: if you don’t have any fishing quotas, it is adaptive for you as a fishing company to just catch as many fish as possible, because you want to do it before everyone else does.
So you want to maximize the profit. But now, because of the European Union, we have all been engulfed in this bigger regulatory entity, and that sea has a chance to come to life again, because we regulate and police so that the sea is not being polluted and overfishing is not happening. So I think this metaphor of cancer is applicable at multiple stages. And in fact, for individual cells there exists an equivalent to cancer, and that would be selfish genetic elements, for instance-
Jim: Copying genes.
Kristian: Or even before that, within the RNA-first world, well, it depends on whether you’re a fan of metabolism-first or the RNA world hypothesis. But anyway, you might have had these small, small molecules, and you could see cancer-like behavior amongst those molecules as well. So it seems to be truly a universal thing, these Darwinian demons or Moloch or multipolar traps or whatever you want to call them.
Jim: And if you pull back a little bit, and you do call it out, I would suggest that the biggest-picture way to look at this is the always ongoing war between cooperation and defection. I mean, think about cancer: cooperation is how we get all the benefits of being a multicellular beast, able to swim in the ocean and eat little fish and do all kinds of cool things. But if the cells defect, then that line will die out. And of course, keep in mind cells and such are not conscious. These are not conscious decisions, these are emergent results from evolution. When we get to humans, the game becomes a little bit different, because we are not driven by biological evolution very much anymore. We’re now driven by social evolution. And again, we have this issue of defection versus cooperation. And if we use that as our top-level lens, it all starts to make a little bit more sense.
Kristian: No, totally. And I think cultural evolution, or sometimes it’s referred to as memetics as well, gives a whole new lens for viewing the world. Similar to how I, as Kristian, need to get the cells in my body to collaborate in order to exist, certain cultures need to exert control over their members in order to exist at all. So I don’t think it is just random chance that the most successful religions in the world have certain mechanisms built into them where you’re supposed to penalize people who leave the faith and you’re supposed to recruit more people to join the faith. If you have those things as core tenets in your culture or religion or society or whatever, and the culture in and of itself is good at applying top-down control, then it will be successful.
But again, it’s a war between different levels of selection because those bigger entities that we create like cultures and enterprises and nation-states and so on, it might not actually be adaptive for them to have our welfare in mind necessarily. It can be adaptive for them to just rule through dictatorial means where we punish you, and we do so really harshly if you defect from our particular system. So there can also be things like bad cooperation.
Jim: Now let’s move to where it really gets kind of scary, and that’s this concept of kamikaze mutants and evolutionary suicide.
Kristian: There’s actually an interesting story behind this. So back when I was still a teen, I wanted to become a better Python programmer, so I got obsessed with coding. I found it so incredibly addictive to write code and compile code. I played a lot of video games in my early teens, and then I traded that for programming later on. So I built these evolutionary simulations that were quite simple, to be honest, with predator-prey dynamics and a few simple genes encoding how fast or how efficient a particular organism is. And what I saw happening again and again is that, “Whoa, life is not stable in these evolutionary simulations. Sooner or later, the predators evolve to become super fast at running and moving, and then they kill all of the prey, and then they kill themselves because they have nothing else to eat.”
And obviously this is a very simplified toy example of what is going on in the real world, but it really shocked me, like, “Wait, what is going on here? Is this thing happening in reality as well?” And that’s when, quite a bit later in life, I started to go down a rabbit hole of kamikaze mutants and evolutionary suicide. So the idea is essentially that sometimes life can go in a direction where a particular organism gets a set of mutations that are adaptive in the short term but kill it in the long term. One example would be, let’s say you have a prey animal like an antelope. Because of this arms race between the antelope and the jaguar, or whatever it might be, it’s in the interest of the antelope to evolve in a way where it runs really, really quickly and spends a lot of energy on avoiding predators.
But it turns out that maybe the same thing that makes you avoid predators means that you spend less and less time on reproduction, and as a result, you die out at the end of the day. Or to take another example, or what’s most likely an example, of evolutionary suicide: we had the great cod collapse a couple of decades ago, and one of the explanations that emerged was that for the cod to survive in the short term, it’s actually adaptive to be a little bit smaller, because then it’s more likely to avoid the fishing nets. But becoming smaller at the same time made the cod less fertile. So we had this selection pressure for the cod to avoid the nets, but at the same time it became less and less fertile, and all of a sudden the entire population more or less collapsed.
The same phenomenon of evolutionary suicide has been observed in everything from slime mold to cod to predator and prey. And I would suspect that a lot of the species extinctions that we have seen in the past might actually be acts of evolutionary suicide. I think the panda is probably a good example of that. They’re so incredibly clumsy, and they have optimized really, really narrowly around just eating bamboo. It means that they’re sort of freaks of nature. They just need to sit and eat all day long. And if the bamboo disappears even a bit, they’re more or less gone, because they’re so incredibly clumsy and so incredibly specific in their skills. So I think, if it wasn’t for humans, well, I think we sped it up a little bit at one point, the killing of the pandas, but now we humans are probably the only thing keeping them alive. So it’s one other example of a species that might’ve gone extinct because of evolutionary suicide.
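A toy predator-prey simulation in the spirit of the ones Kristian describes; his original code isn’t in the book, and every parameter below is an illustrative assumption. Predator speed is a heritable trait under selection: faster predators eat better in the short term, but the lineage can hunt the prey to extinction and then starve.

```python
# Toy predator-prey simulation in the spirit of the ones described above
# (illustrative assumptions throughout, not the author's original code).
# Predator "speed" is heritable and under selection: faster predators catch
# more prey now, but can drive the prey extinct and then starve.
import random

random.seed(1)

prey = 500
predators = [random.uniform(0.1, 0.3) for _ in range(30)]  # each value is one predator's speed

for generation in range(200):
    prey = min(int(prey * 1.4), 2000)          # prey reproduce, capped by carrying capacity
    next_generation = []
    for speed in predators:
        catches = sum(1 for _ in range(10) if prey > 0 and random.random() < speed)
        prey = max(prey - catches, 0)
        if catches >= 2:                        # enough food to survive and reproduce
            child = min(speed + random.gauss(0.02, 0.01), 1.0)   # mutation, biased toward more speed
            next_generation.extend([speed, child])
    predators = next_generation
    if not predators or prey == 0:
        print(f"Collapse at generation {generation}: prey={prey}, predators={len(predators)}")
        break
else:
    print(f"Still going after 200 generations: prey={prey}, predators={len(predators)}")
```

Under these toy settings the predators ratchet up in speed for a few dozen generations, crash the prey population, and then disappear themselves, which is the instability Kristian says he kept seeing.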
Jim: We know that for larger animals, the average existence of a species is only about 2 million years, right? So something’s going on. Some of it’s environmental change, some of it’s co-evolution, and some of it is kind of maladaptive evolution that works in the short term but not in the long term. And that kind of interacts with change, as you point out. As long as the temperature is the same and there’s nobody else eating the bamboo, then we have an ecological niche that works for the panda. But if somebody else is eating the bamboo, or it gets a little colder and the bamboo dies, then the panda’s out of luck. And actually, there’s an interesting example: in North America we have two very similar species. The wolf is amazingly well adapted to a certain way of being, basically hunting large mammals like deer and things like that.
And then we have the coyote, who is genetically very similar but has evolved to be a complete omnivore, eating absolutely anything, including suburban trash and things of that sort. And the wolf was basically eliminated from the United States at least, except in the northern fringe. The coyote has invaded from the west and is now everywhere. So, genetically very similar, but just different enough that its much more opportunistic set of behaviors has allowed it to survive while the wolf has essentially disappeared. Well, now let’s move from these biological games, which, as we talked about before, generally just impact one species and maybe some closely related ones, to kamikaze mutants in human social space.
Kristian: Kamikaze mutants in human social space. I would say that any reasonable person looking at the future trajectory of humanity would say, “Wow, there are a lot of risks here.” There is the risk of nuclear war that might kill all humans around. We have climate change that, within our lifetime, could make 50% of all species go extinct. So it certainly seems to be the case that it has been adaptive for us as humans, and it’s not necessarily genetics alone. I think it has a genetic component, but I also think it has a cultural component, where it has been adaptive for societies to be aggressive in general, aggressive and expansive: extract resources really quickly, or build bigger and better weapons than your competitors.
Maybe to protect yourself against them, or maybe so you can attack them and steal their resources. So we’re in the middle of these resource arms races. We’re in the middle of power arms races between nation states, and, something that hopefully we will talk about later, we’re also in the middle of an intelligence arms race with artificial intelligence. So I think those are examples of human societies encountering these kamikaze mutants, more or less. And yeah, I’m going to pause there.
Jim: Yeah. One of the ones you dig into in some depth, which is quite interesting, was nuclear weapons and close calls, so-called broken arrow incidents, of which there have been a lot. And you didn’t even mention a couple of scarier ones on the Russian side, where they thought the US was launching missiles and one guy apparently made the decision not to retaliate, right?
Kristian: Yeah, Petrov, he probably preempted like a third world war, just one guy, and he’s probably the reason why we’re all alive today.
Jim: Yeah, one relatively low-ranking guy. And you actually point something out, which I’m well aware of but most people are not, which is that when you’re in a situation like the nuclear arms race and the nuclear deadlock, now getting worse with more and more people with nukes, even if most of the time everything works out, you’re rolling the dice constantly. And even at a low probability of coming up with a two, or, playing Risk, three ones, right, it’s going to happen. I just ran a number this morning when I was preparing for the show. If you have a 2% probability of coming up with the bad result and you roll the dice a hundred times, 87% of the time you’re going to get the bad result.
We’re both evolutionary programmers and understand combinatorics, and you change the numbers, you get different results. But the reality is if you’re in a continuous game of rolling dice and all you got to do is have a bad result once, sooner or later you’re going to have a bad result.
Kristian: Exactly. So let’s say for instance that the annual risk of nuclear war is 2%. Then it means that over a hundred years, with your math, there’s roughly an 87% probability that we have some sort of nuclear war. And 2% is not entirely unreasonable as an annual probability. A friend of mine, Seth Baum, modeled the annual risk and concluded that it might actually be around 1%, looking at all the broken arrow events and close calls and incidents. So that’s just looking at the empirical data of all incidents that we know about that could have caused nuclear war, and I bet there are a lot of incidents that we have no clue about because they’re closely guarded government secrets.
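The arithmetic behind the figures quoted here is straightforward: with a constant annual probability p, the chance of at least one event in n years is 1 - (1 - p)^n. A few lines of Python make the point; the probabilities are just the illustrative assumptions used in the conversation.

```python
# Cumulative risk from a constant annual probability: 1 - (1 - p)**n.
for p in (0.01, 0.02):            # illustrative annual probabilities
    for years in (50, 100):
        cumulative = 1 - (1 - p) ** years
        print(f"annual risk {p:.0%}, {years} years -> cumulative risk {cumulative:.0%}")
# A 2% annual risk over 100 years gives roughly 87%; 1% gives roughly 63%.
```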
Jim: Indeed, indeed. And this theme of rolling the dice constantly is really, really important. And it’s something humans suck at. When humans look up and see a fucking lion, they know you better goddamn run, right? But if it’s some slight, low-probability event, we were not evolved to deal with it, for good reason, because your most likely fate was to starve to death or be eaten by a lion. The fact that you’re cutting down 1% too many trees each year means you’re going to end up like the poor people on Easter Island, with no trees, and you can no longer build boats. I think a similar thing happened in Iceland, actually. We are just not good at that. And if we’re going to survive, we’ve got to get good at that, which is kind of an interesting problem. We have to have a social solution, since we do not have a biological tendency to be able to deal appropriately with low-probability but constantly recurring risks.
Kristian: Exactly.
Jim: All right, we’ve talked about nukes, and as you pointed out in your book, nukes are mighty dangerous and they’re rolling the dice constantly, but they have some positives, which is that it’s hard to build a nuke. And there are some interesting choke points, which you call control nodes, I think it was: for instance, sources of uranium, or sources of centrifuges, or, this is harder, sources of super-precision machine tools. Though I do like to remind people that a group of yahoos built nuclear weapons in two years in the 1940s without any computers and without any precision machine tools. So don’t put too much credence on those choke points. But one that may have very different attributes is engineered pathogens. Tell us about that.
Kristian: Yeah, so before I do, actually, would it be okay if I dwell a little bit on one point? Because you mentioned that with nukes, you’re sort of throwing the dice all the time. I mean, you’re throwing the dice again and again and again. And I think, in a sense, and this is a broader and quite important point, evolution through natural selection is doing the same thing over and over again. Evolution through natural selection, I mean, as a computer scientist, I see it as an optimization algorithm. You optimize a fitness function through mutations that happen somewhat at random, right? The thing is that natural selection doesn’t predict into the future. It just looks at what is adaptive right here, right now. It doesn’t look at whether this thing will still be adaptive a hundred generations or 10,000 generations down the line.
And why I think this point is important is because I think there are almost these evolutionary landmines or mutational landmines. I mean, if you visualize the fitness landscape as a real landscape where you’re climbing certain hills, you’re going in a certain direction, evolution is almost this random walk. And along that random walk, there might be mines and minefields. So there might be mutations that are deadly in nature. And from our perspective, from a human perspective, it might be adaptive to develop more and more dangerous weapons. I have this graph in the book where the first weapons are stone tools, but then, in order to win when you’re competing against other human societies, you need to build more and more dangerous weapons. So you go from stone tools to metal swords to bows and arrows to gunboats and Gatling guns, and then eventually nukes, right?
But at some point you might bump into a thing that is so deadly that it will kill you more or less. And in a sense, evolution through natural selection is also this random walk across a technology tree more or less. So, I mean, intelligence might be such a technology or running really quickly might be such a technology or collaboration might also be such a technology. And I suspect if we look at past mass extinctions, like a lot of those mass extinctions have been caused arguably by bacteria multiplying really rapidly or algae multiplying really rapidly, causing massive climate change or changing the chemistry of the oceans and almost killing all life on the planet.
And the same thing sort of happened with the first cyanobacteria that could actually produce energy through photosynthesis. That massively changed the atmosphere, so we had this Great Oxygenation Event that almost killed everyone. And being a predator or being a parasite or being a virus, those are also technologies. They’re technologies that are being discovered through this random walk in a technology tree. And I think some of these technologies along the way might be deadly, and it doesn’t just apply to human-created technologies. I think the whole distinction between human societies and the rest of, I don’t know, nature and biology is somewhat artificial.
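A minimal sketch of the no-lookahead point Kristian is making (purely illustrative assumptions, not from the book): a greedy optimizer that accepts any mutation improving fitness right now, on a landscape where pushing the trait far enough is lethal, will sooner or later step on the landmine.

```python
# Greedy, no-lookahead selection walking into a "mutational landmine".
import random

random.seed(2)

def fitness(trait):
    return trait         # fitness keeps rising with the trait (speed, weapon power, ...)

LETHAL_THRESHOLD = 10.0  # ...but beyond this point the lineage destroys itself

trait = 1.0
for step in range(1000):
    mutation = random.gauss(0, 0.5)
    if fitness(trait + mutation) > fitness(trait):   # selection only sees the immediate gain
        trait += mutation
    if trait >= LETHAL_THRESHOLD:
        print(f"Step {step}: trait {trait:.2f} crossed the lethal threshold. Lineage gone.")
        break
else:
    print(f"Survived 1000 steps with trait {trait:.2f}")
```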
Jim: Yeah, you could say that it could be an attribute of essentially any technology landscape if you think about it over a long enough timeframe. It took many, many millennia to cause atmospheric changes and such, and clearly evolution does not think about millennia. It only thinks about the next generation. And the same, of course, is true in our domain, though things are happening a shitload faster in human evolution right now than at the million-year mark. So let’s move to that next one, which might be the scariest one of them all, though we’ve got so many good scary things it’s hard to rank them: engineered pathogens.
Kristian: I think some of us might have seen the movie Oppenheimer. It’s about the more or less inventor of the atom bomb, who led the team at Los Alamos, with the most brilliant scientists, that created the atom bomb. And they were all aware of the consequences, that they were creating a weapon of mass destruction that could kill millions of people. But they all decided to do it because they felt they more or less had no choice: either we do it or the Nazis do it, and we better be in control of this technology rather than the Nazis.
The thing is that I don’t think the atom bomb is the worst that we can do. I mean, we know from Covid that viruses can be incredibly dangerous. They can put the whole society on hold, more or less. And I mean, back in the days when Covid was first discovered, we thought it was a lot more deadly than what it actually turned out to be. And I think also for good reasons, because we have had the Spanish flu and the Black Death and all of these pandemics in the past that have killed millions of people.
So the thing is that, through CRISPR, through synthetic biology, it will be possible, and it is possible, to engineer viruses. So you could imagine a superweapon that has more or less the virality of the common cold and spreads as quickly through the air. And it might have the deadliness of rabies, where 100% of people die. And it might have the incubation period of something like hepatitis B, which is 8 to 10 months. So if you combined all of those things into a super virus, you could more or less have something that spreads throughout the population for a couple of months, with people not even noticing that they are sick because the incubation period is so long, and then all of a sudden everyone starts dying like flies.
Or even, I mean, I don’t know if it’s worse, but one variant of that would be, okay, let’s program this virus so it only targets particular people with a particular genetic disposition, for instance. Just like there was an arms race in the sphere of quantum physics and quantum mechanics in terms of creating the atom bomb, there will be an arms race in the domain of biological weapons of mass destruction, where there will be someone in the national security establishment of China or the US saying, “We better develop this first before the Chinese do the same or before anyone else does the same, or the Russians or what have you.” So that is something that I’m incredibly worried about. I think we can do far worse than nuclear weapons. And if you sort of plot the chart that I talked about earlier, like, how many people you can kill in a minute with the most deadly weapons at the time, it is an exponential chart. So if you believe in the trend line, we will inevitably hit weapons that are more or less world-ending.
Jim: And you talk about nation states. The nation states mostly have signed a treaty banning, flat banning, biological weapons. Though we do know the Russians were cheating massively during the Cold War. But game theory tells us there’s always a temptation for even nation states to cheat, though it’s a little harder for nation states to cheat than the other players, which is individuals. One of the issues with bio pathogens is, unlike nuclear weapons, the threshold to do it yourself in your basement, is it here today? If not today, then soon.
Kristian: Yes, definitely. And I think in order for any type of ban to be effective, you need the enforcement functions. Without the enforcement functions, it’s basically useless. And at least, on the international arena, the problem that we have is that there is no global police. Or I guess when we had sort of a monopolar world for a while, the US was the closest thing that we had to a global police. So the situation is quite dire. And I think what you mentioned as well, if there are technologies that are available to individuals, then it’s even harder to govern, right?
Jim: Right. Well, we have nukes, we’ve got engineered pathogens. There are also other interesting ones like nanotech. But one that’s currently pretty much in the public eye is AI risk. Why don’t you give us your spin on that?
Kristian: Yeah. So I think AI risk is probably the biggest risk that we’re facing right now because of several reasons. I think what makes AI unique is that, just like human intelligence, it’s sort of a dual use technology. I mean, we can use our intelligence for good and for bad. I mean, we can use it to produce nuclear power, but we can also use it to build nuclear weapons. And I think what AI will enable is for us to venture into completely new areas of our tech tree. And by that I mean, through the help of artificial intelligence, we could, for instance, find cures to diseases.
So there is this example that I bring up in the book, of a company that built an AI to essentially find novel chemical compounds in the field of medicine. But what they discovered is that if they just flipped one sign in the code, instead of finding compounds that make us more healthy, they could use the same AI to discover compounds that were likely to kill us. So by just flipping a single sign in the code, they could use the AI to find 40,000 new and novel chemical weapons that had previously not been discovered. And with any type of dual-use technology, whether it’s used for good or for bad depends entirely on the incentive structures in which it’s deployed.
So I think something that a lot of people are starting to be aware of is the negative consequences of social media. I remember a time, like 15 years ago, when there was talk about social media creating global democracy. There were TED Talks and there were articles saying dictators will have nowhere to hide now because of social media. And when we look back at those statements now, they seem quite absurd, because social media is today being used by dictators to win elections. It was used as a tool of genocide against the Rohingya Muslims in Myanmar, for instance. So what went wrong?
I think what we need in general is to look at the incentive structures in which the technology is deployed, and not just the technology in and of itself. I mean, social media is a technology, and I think it could have been a tool for global democracy, but the incentive structures weren’t there. The incentive structure was a pay-per-click model, where essentially you have a feed and you feed people with content that will make them click more ads. And that’s how you generate revenue. And the algorithm needs to be geared in a direction that optimizes for that revenue. So in that landscape, you actually make more money and get more attention if you have clickbaity headlines or if you make shit up. So it was all about how the incentive structures look.
And I think it’s the same thing with artificial intelligence. Artificial intelligence can be used to solve climate change. It can be used for discovering new medicines, and it probably will be used for some of those things as well. But we need to, again, look at the incentive structures. At the end of the day, there is this thing called the alignment problem. It’s essentially: how do we define an ethical AI? And how can we prove that an AI behaves in a safe and ethical way? We haven’t solved that problem, and it might very well be unsolvable.
But even if we solve that problem, it’s, to me, not entirely clear that market forces will select the ethical AI. If you are the CEO of a large enterprise, do you want to select the AI that optimizes for profit or the AI that optimizes for human values? Of course, you’re going to select the profit-maximizing AI. It would be irresponsible for you not to do that. You would probably be fired, and arguably it would be illegal for you not to maximize profit and shareholder value. And similarly, if you are Israel, Ukraine or Russia and you deploy AI in the field, do you want it to optimize for human rights generally, or do you want it to optimize for you winning the war at the end of the day? So that’s why I’m worried about AI: because AI will enable all of those dangerous new weapons and technologies that we could never have imagined. And couple that with incentive structures that incentivize power-seeking amongst nation states and profit maximizing for enterprises, and I think we have a catastrophe in front of us, to be quite honest.
Jim: Yeah, indeed. And I sometimes sum up, I have a little talk I give about seven AI risks. But one of which I think I stole it from another [inaudible 01:14:20] person, but I’m one of the ones that’s popularized it. It’s kind of very much like you’re talking about, it’s a way to summarize it a little bit more powerfully, which is one of the big risks, not the only one, is that AI is speeding everything up. If we’re in a bad crisis and are heading for the cliff at a very high rate of speed, which it seems like we are, one of the biggest things AI is going to do is make it go faster. And if we don’t develop the wisdom to steer away from the cliff, it’s just going to give us less time to protect ourselves.
And I’m going to do a little sidebar here on the incentive structures and how there’s an even more dangerous way that they inter-operate. We talk about the money-on-money return or the economic incentive one, and we also talk about the power one, war, force, et cetera. So look, what I discovered when I was thinking about this is that the two are linked in a very bad way, which is, at least since the American Civil War, and probably before that, military power is pretty closely correlated with economic power. So if you’re going to compete in the military power dimension, you also have to optimize for the economic power dimension because, to some degree, particularly if it’s a big war, your economic capacity is going to be the defining capacity. And it may be your AI capacity in the next war, which is correlated to how many Nvidia chips you have, et cetera.
And so we’re double-fucked, right? We have the economic game, which it’s at least imaginable we could back away from. We could say, we got enough. We could actually live better with less if we could get rid of these status games that we play, or change them to different status games. Humans will always play status games. But should it really be the size of your house and the flashiness of your car and the size of the rubber boobs you’ve got? Or should it be something else? But can nation states do that stepping back from the economic game if the economic game supports the power game? And so, that’s a double fuckery basically.
Kristian: That’s a very good point. Then we basically have almost this complex system of interlinked success metrics where, ultimately, the ultimate metric is fitness or survival. I mean, you need to survive until you reproduce to have your genes continue. But emerging from that success metric is that you might want to optimize for power in order to defend yourself or maybe take resources from your competitors, at least in systems with limited resources. And last time I checked, our planet is limited. It has limited land, it has limited minerals, it has limited a lot of things. But then, in order to optimize power, you need to optimize economic power as well. So it is almost like society’s this web of complex emergent success metrics that are all emerging from the fitness metric, if you will, or the mother of all metrics in a way.
Jim: Indeed. Now we could spend the next hour talking about nothing but AI. And I’m sure we’d have fun doing it. But I think we’ve said enough about AI. And I think at this point I will suggest, I call what we’ve been doing so far, the litany of shit.
Kristian: Yes.
Jim: And in fact, I have the domain name, litanyofshit.com. And I’m going to put a website up with all this bad stuff in it someday so that I can just say, go read the litanyofshit.com. And I’m not going to tell you about all the bad stuff. You probably already know it. But if you don’t know it, go read that. Now let’s switch to, what can we do about it? One of the things I did like about your book is that while you had a fair bit of litany of shit, you also had more than usual thinking about possible solutions. So how do we save life from life itself? Which is actually the name of part three of your book.
Kristian: Yes. So I think we need to look again at how did we cure cancer? How did we solve all of these Darwinian demons and multipolar traps in the past? And there is this brilliant paper by the evolutionary game theorist, or mathematical biologist, Martin Nowak, Five Rules for the Evolution of Cooperation, where he essentially outlines various ways in which cooperative behavior can evolve. And I simplify that somewhat in the book and focus on two main mechanisms. So one is through some sort of centralized power, more or less.
And I mean, that’s what’s happening more or less all the time. So let’s imagine, for instance, the problem of the Baltic Sea that I talked about earlier, like overfishing. A way of solving that problem is to say, “Hey, we’re going to have a fishing quota.” And why are fishermen going to care about fishing quotas? Because we have a monopoly of violence from a particular state where it’s like, “We’re going to put you in prison or fine you if you don’t follow these quotas.” And that works, but it just moves the problem into a different layer, a different level. Because then, all of a sudden, you have nation states that are competing and you have anarchy in the global system.
So then in order to solve it, you would need some sort of centralized world governance or world government. And it certainly seems like we have headed a little bit in that direction. After the First World War, we had the League of Nations trying to prevent the war from happening again. And just a few decades later, we had an even worse war, the Second World War. So it happened again. And by the way, this time we had nukes. So that’s when we created the United Nations. And there was sort of this strong movement towards world governance.
The thing is that the UN, they don’t have any enforcement functions. I mean, if the ICC says that Putin is going to be tried for crimes against humanity, nobody cares. I mean, he even goes to countries that have signed on to treaties where they’re supposed to arrest him, but they don’t arrest him because why would they? Nobody’s forcing them to do so because the UN doesn’t have any enforcement functions.
But I think there is some good news, right? I think the problem with the equation is fundamentally that it’s against the interests of individual nation states to give away their power to a higher entity. And something that I bring up in the domain of centralized solutions is the European Union, because it’s actually an example of 14% of the world’s countries voluntarily giving up sovereignty. And that’s because the incentives have been high enough. You get access to the single market, and you can have rapid growth as a result. So you have to give up a bit of sovereignty, but you get a lot back from it. And the UN never had that. So I think that’s one way in which we could pursue and slay these Darwinian demons and solve the multipolar traps. Another way that I talk about in the book is sort of more decentralized solutions.
Jim: The EU is a way. But as you do point out, whenever you centralize power, there’s always a risk of bad decisions. It’s a trivial one, but it’s annoying as fuck: the goddamn EU cookies policy. Whoever came up with that should be taken out and fucking flogged. It’s like, why the hell, 15 times a day, should I have to click on cookies? Of all the things I don’t give two fucks about at all, cookies is close to the top of the list. I don’t give a shit about cookies. Those guys know everything about me anyway. What do I care? But I certainly don’t want to have my attention broken. Our attention is our most important resource. Anyway, it’s an example of where an attempt to do good can miscarry because they’re fucking stupid or something. I don’t know where that policy came from. But it shows you that even a relatively benign coercive entity like the EU can produce some bad results.
And when we talk about world government, it makes me even more concerned, as I often have said on this podcast. Well, I’ll be in favor of world government when we have at least five worlds, so that we can compare and contrast how different world governments work. If we don’t like one, we can go to another. And so it’s kind of really interesting because some of our problems are indeed global, like climate change being the classic one, forever chemicals, microplastics. There’s a number of truly global problems that we confront. And yet the idea of global governance, it’s a scary thought.
Kristian: I very much agree. In the book, I bring up the potential issue of totalitarian lock-in. If this government somehow becomes a totalitarian state, then we might sort of live in a dictatorial regime. And I also bring up this example that is originally from Nick Bostrom’s brilliant paper, the Vulnerable World Hypothesis. So he imagines a world where all we’re trying to do is optimize around the minimization of global catastrophic risks or existential risks. Because technologies can be dangerous, technologies can create the next pathogen, etc. So one way we could solve this through global governance is to create a global surveillance state, where everyone has a ‘freedom tag’. But it’s obviously a piece of surveillance. You’re surveilled all the time. You can’t do anything without the global state knowing about it. So we’re taking away freedoms in the name of reducing global catastrophic risk.
And I mean, Bostrom, some people have misinterpreted him and thought that he somehow promotes this idea. He definitely doesn’t. He’s a philosopher, so he essentially talks about trade-offs here. Do we need to trade off freedom in order to have a more stable world where technologies won’t kill us? So that’s one of the reasons why I’m somewhat skeptical towards these centralized solutions, because it could become a global tyranny. And again, like Goodhart’s law, it applies to any narrowly defined success metric. And if we give a global state the ultimate success metric to prevent global catastrophic risks, I don’t think that’s going to be necessarily a very good world to live in.
However, I do want to say that I would want to see some more global coordination and centralization around issues that relate to the global commons, like microplastics and environmental change and nuclear weapons, etc., etc. And ideally, that should be done in a democratic way. I think the problem with creating these super-state entities is that you sort of have two levels of selection, but the causal arrow only goes one way. There is an entity that is forcing you to do things, but there is no causal arrow upwards where you can decide the direction of this new super-state entity. But that’s what democracy is good for. So I would want to see some more centralization, if we are super careful and do it in a democratic way.
Jim: Always tricky.
Kristian: That is very tricky indeed. Yeah.
Jim: Yeah. I participated a few years ago in two group meetings on how to solve the multipolar trap in international affairs. And we came up with only one, and it’s not a hard solution, but it helps. You did not mention this in the book, but I’m going to throw it out just to get your reaction on the fly, which is, you talked about surveillance state against individuals.
Suppose we had, and it would probably only last for one minute, global agreement for radical transparency at the nation state level, so that all the countries agree to two things. One, that any citizen can go absolutely anywhere in the non-private sphere, i.e., not into your house. But if Russia has got some factory out in the Urals and we don’t know what it is, we all agree that if I want to go there and walk around and take pictures, I can. So the right of surveillance is universal. And two, freedom of speech. Those two things together could be a systemic prophylactic, at a minimum an alert when bad things are going on, and they would make us all much more comfortable with things like nuclear disarmament.
If we could agree to nuclear disarmament, plus radical transparency of the non-private sphere, we would be able to have pretty high confidence that no one’s going to sneak a nuke in on us. And the same is true about biological warfare. And of course, you need freedom of speech so that you then have the right to publish that result. What do you think about that as a non-coercive universal solvent against some of these risks?
Kristian: I love the idea. It’s actually very similar to what I think we will sort of talk about next. So I have this idea of global surveillance and absolute transparency of global value chains and supply chains as a potential solution because then you don’t infringe on any particular individual’s privacy or freedom. You’re just making supply chains more transparent. And I think this is sort of a similar idea. Anyone should be able to inspect what a particular nation state is doing and not doing, how they’re using your data, what they’re building. Are they building dangerous weapons or whatever it is?
So I really like that idea. I think to sort of build that on a global scale, you would still need some sort of centralized enforcement function. I mean, I could see sort of a multipolar trap, again, where, let’s say that for whatever reason, every nation state agrees to this, but then it might be adaptive for some nation states to sort of take it less seriously over time because of national security concerns or to be competitive and keep secrets from other states and so on. So it’s going to be hard then to sort of enforce upon them to stay transparent forever, unless you would have some global sort of monopoly of violence somewhere, I guess.
Jim: I would say there’s one alternative, which is collective reaction, right? Let’s say 194 states agree: radical transparency plus free speech. We all agree to it because it’s the least intrusive way to keep ourselves somewhat safe for the future. And we agree that we will bring hardcore, 100% economic sanctions, I mean total cutoff, no cheating, against anybody who violates this rule. And one of the beautiful things about radical transparency is we can detect a violation instantly. One of the big problems with, say, biological warfare is, who knows what they’re doing in a fucking mine someplace? But as soon as Russia says, “No, you can’t inspect that factory in the Urals,” you have violated the rules: 100% economic sanctions against you. I suspect that, again, there’s always a defection problem. But I like the combination of those three things.
Kristian: No, exactly. And I think what you’re essentially saying would be somewhat based on the mechanism of indirect reciprocity.
Jim: Let’s just underline that. Then now let’s move on to, I think, your biggest contribution in this book, which is that you have a relatively well-thought-out proposal, though I will point out a few flaws in it, I think.
Kristian: Yes. Well, I want to hear about those.
Jim: Yeah, that is your idea of a series of reputation systems to form indirect reciprocity. Before you do that, maybe talk a little bit about the different kinds of reciprocity, and then your idea for implementing a system of indirect reciprocity based on distributed reputation systems?
Kristian: Yeah, so going back to what I said earlier, that paper from Martin Nowak, Five Rules for the Evolution of Cooperation. So he outlines five different mechanisms for which cooperation can evolve, and one really obvious one is direct reciprocity. So that’s the idea that if I scratch your back, you scratch my back. So it’s like a direct relationship between the two of us where we reward each other if we behave kindly towards one another. Another one is kin selection. So that’s sort of more of a genetic thing, where it might be in the interest of, let’s say, these quote unquote “selfish genes” that we’re all stronger together if we collaborate. Then there is also the idea of network reciprocity. So that’s more related to physical boundaries that you might see in nature, like interconnectedness through mountains or other environmental barriers, or it could even be a social network.
So before it was sort of called spatial reciprocity, but now it’s called network reciprocity because you can sort of model the interconnectedness more elegantly through graphs. Indirect reciprocity, it’s kind of direct reciprocity, but instead of a direct relationship between you and me where we scratch each other’s backs, I look at your reputation. So if you behaved shitty against someone else and I sort of saw that, or I heard that through rumors or in some other way, I’m not going to accept you as much. I might sanction you or I might give you a harder time. So that’s the whole idea of indirect reciprocity.
Jim: Okay. And then you have a specific idea of a distributed reputation marketplace.
Kristian: Yes. So I think the whole idea, if I want to distill it, is looking at the world economy. It is so incredibly interconnected. So I think that is the one thing that gives me hope, that when it comes to manufacturing nuclear weapons or rogue AI or biotech, you need equipment, you need know-how. And that equipment has a long and complex supply chain, with an AI company needing chips from NVIDIA, NVIDIA needing manufacturing from TSMC, and TSMC needing lithography machines from the Netherlands. So everything is super complex. Even a simple thing like a chicken sandwich is too complex for us to make from scratch, and by truly making it from scratch I mean growing the grain and then milling it and then feeding the chicken, et cetera. There was actually a guy on the internet who tried it, and it cost him like $1,500 and took him six months.
Jim: Yeah, I love that example. It showed the power of cooperation, right?
Kristian: Exactly. So our whole world economy is this interconnected network of collaborators where we all need one another in order for the system to work. So then the whole idea is, okay, if we somehow had radical transparency around those value chains, let’s say you have an AI company that is behaving irresponsibly, and if you could predictably know that that AI company is going to be sanctioned by the chip manufacturer, for instance, then they can’t conduct business anymore. Or if you have a biotech company that is acting irresponsibly, then they might not get access to the PCR machines or the lab equipment from the sub-suppliers. So I call those things critical governance nodes. It’s this bottleneck where, if that supplier or that part of the value chain says, “Sorry, I’m not going to be a part of this anymore,” then the whole project more or less collapses.
So what does reputation have to do with that? Well, we need a way to determine who is an ethical actor, who is a good actor, who is expected to behave responsibly or irresponsibly in the future. So the core idea with reputational markets is that you create a sort of decentralized reputation score through prediction. So reputational markets are built upon the idea of prediction markets. So you might have seen as of late that the pollsters did a lot worse than the prediction markets in terms of predicting the election. So we have these prediction markets where people can bet: who do you think is going to win? Is Kamala going to win or is Donald Trump going to win? And if you make the correct predictions, you get a monetary reward. So it’s a little bit similar to sports betting. For instance, is this team going to win or is this other team going to win?
And if you get it right, you get a monetary reward and incentive. So you have this mechanism for truth convergence. If you yield true results better than anyone else, you will make more money than anyone else. So a reputational market is a prediction market with a spin. And that extra thing is essentially that instead of making predictions about elections, you’d make predictions about particular outcomes that we intrinsically value. So it might be, for instance, what is the probability that BP will have another oil spill within two years? What is the probability that OpenAI’s next model, GPT-5, will aid a major terrorism attack by 2026, or something like that? And if you just take the aggregate of all of these decentralized predictions, you can essentially yield a final score: here is how much good and how much bad we think that this particular entity is going to do for the world.
And then that score could be used as a means for these critical governance nodes or critical players in the value chain to say, “Hey, OpenAI has a pretty low and bad score now, so we’re not going to supply the GPUs to you.” And I think the beautiful thing here is that there is no centralized entity that is the arbiter of moral truth, or epistemic truth, or truth of any kind. It’s all the decentralized collective intelligence of everyone involved, more or less.
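To make the aggregation step concrete, here is a minimal sketch of one way the final score could be computed, assuming the simplest possible scheme: each market yields a crowd probability for an outcome, each outcome carries a value weight, and the reputation score is the weighted sum. The entities, outcomes, weights, and scoring formula below are illustrative assumptions, not details taken from the book.

```python
# Minimal sketch of turning prediction-market probabilities into a reputation score.
# All outcomes and weights are illustrative, not from the book.

# Crowd probabilities from hypothetical reputational markets about one entity.
market_probabilities = {
    "major_oil_spill_by_2027": 0.20,         # disvalued outcome
    "meets_emission_targets_by_2030": 0.55,  # valued outcome
    "safety_incident_coverup_by_2026": 0.10, # disvalued outcome
}

# Value weights: positive for outcomes we want, negative for outcomes we don't.
# In the proposal these would come from market participants' stated values.
value_weights = {
    "major_oil_spill_by_2027": -1.0,
    "meets_emission_targets_by_2030": +0.5,
    "safety_incident_coverup_by_2026": -2.0,
}

def reputation_score(probabilities, weights):
    """Weighted sum of expected valued and disvalued outcomes; higher is better."""
    return sum(weights[outcome] * p for outcome, p in probabilities.items())

print(round(reputation_score(market_probabilities, value_weights), 3))
# -> -0.125  (0.20*-1.0 + 0.55*0.5 + 0.10*-2.0)
```

A supplier acting as a critical governance node could then apply a simple threshold to a score like this when deciding whom to sell to; the threshold itself would be that supplier's own policy choice.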
Jim: Yeah, I love that part of it. And I imagine that you could have open entry on creating reputation markets, and not have just a single one, right?
Kristian: Exactly.
Jim: So if there’s 200 of them out there, maybe they specialize: there’s one for the petroleum industry, here’s one for agribusiness, et cetera. Two problems I thought of, I will say, and I should let you know and let my audience know. I’ve been following prediction markets since 2003 or ’04 when they first came out. I went and visited Robin Hanson, who was more or less the inventor of the idea, and he and I have stayed in contact ever since. So I’ve thought a lot about prediction markets. And to make them work you need… So this is the first potential question: you need to be able to phrase the proposition in a crystal clear way so that you will have clear evidence at the end of the term on whether it was achieved or not. So let’s take for instance your example: the use of OpenAI leads to a terrorist attack.
How the hell are you ever going to prove that? One, what counts as a terrorist attack? Maybe you could say one that kills at least 50,000 people or something. I don’t know. How are you going to prove that OpenAI was significantly involved? Maybe they used Anthropic, or maybe OpenAI was only a teeny little piece. That part seems extremely difficult: bounding dangerous behavior in a way that is sufficiently clear to be able to definitively say yes or no on the payoff function at the end of the term of the bettable proposition.
Kristian: I agree. It is a challenge overall. For your particular example, a piece of evidence might be that the police confiscates the computer, looks at the chat log, and sees that the terrorists used prompt injection to get the model to tell them how to make bombs. But I think the broader point is, how do you prove causality overall? Who was responsible, was it them or was it someone else? How do you prove that causal link? And I think for the market to function, you need to develop best practices over time. And this is already a problem, not just with reputational markets. It’s a problem, as you point out, with prediction markets in general: that piece of attribution, and just making it specific enough that you can say a particular prediction is resolved or non-resolved. And so I think these are best practices that just need to evolve over time. I think a second problem that I discuss a little bit in the book in relation to that is that sometimes, in order to resolve a prediction, you need disclosures from the company itself, right?
Jim: Yeah, that was going to be my 1A. Okay, so let’s think of a simpler case that’s easier in some ways than the OpenAI question about attribution. Let’s say you’re going to just bet on how much carbon will result from the oil produced by BP in 2030. That’s actually something you could resolve if you knew how much oil they sold. And fortunately, oil is a transparent market, so you actually could resolve that one. But there are other ones where you have to have disclosure. So this is where, again, the universal solvent of radical transparency helps systems like this. And especially if they’re going to be bettable, you have to have a high level of confidence that, at a reasonably high level of precision, you can get the data from sources who may not have any incentive to give it to you.
Kristian: Yeah. And I think one way that I have played around with conceptually for how you could incentivize disclosures is to essentially say that, okay, until the resolution date of the prediction, we’re going to have a penalty function or something like that, where your score as an entity is reduced more and more over time if you don’t disclose a particular piece of evidence that is related to a prediction. So one way this could be handled by the market is that all the market participants that more or less bet on a certain outcome could also give a signal to the market saying, “Okay, what is one piece of evidence that would make the biggest difference in terms of my credence towards the particular outcome?” And that might be, for instance, a transparent disclosure report from BP of where they’re selling all of their oil and in what quantities. And then hopefully that would incentivize disclosure, if you are penalizing non-transparency by lowering the score for non-transparency.
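One hedged way to picture that penalty function is a score that loses a fixed amount for every period a requested disclosure stays outstanding. The linear-per-period form and the numbers below are illustrative assumptions; the conversation does not specify an exact formula.

```python
# Sketch of a non-disclosure penalty that grows with time; the linear form and
# the numbers are illustrative assumptions, not the book's specification.

def penalized_score(base_score, periods_undisclosed, penalty_per_period=0.05):
    """Reduce a reputation score for every period a requested disclosure is withheld."""
    return base_score - penalty_per_period * periods_undisclosed

# An entity starting at 0.6 that ignores a disclosure request for 8 quarters:
print(round(penalized_score(0.6, periods_undisclosed=8), 2))  # -> 0.2
```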
Jim: I just had an idea. I think this might work, it might be bullshit, but when you make things up on the fly it could be. Let’s say it’s the BP oil thing, and so you have proposition A, the amount of oil that BP will sell in 2030. And then you have a second proposition: will BP provide a transparent enough data point for the adjudicator to call a result on the 2030 oil? And people could bet on both. And the rule could be that if the answer is no on the second, it’s no bet on the first and people get their money back, basically. But the signal from the “will BP disclose” proposition is itself then actionable in the second level of aggregation.
Kristian: Yeah, no, exactly. I think that is a great idea, and that is very similar to what I described in my book: that you could have various sub-bets related to the main bet. And those sub-bets could be about causality and disclosures. So it could, for instance, be: okay, which suppliers does OpenAI have? Because that might be important to know, since if OpenAI gets a really bad reputation because they might cause a terrorism attack, then in some way the suppliers of OpenAI would be indirectly responsible for that. So it could be a sub-bet that creates a causal link between different players in the value chain. And the other type of sub-bet could be around, okay, will there be enough information to even disclose or resolve this particular bet in the first place?
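As a rough illustration of the two-level structure being discussed here, consider a main proposition that only pays out when its linked disclosure sub-bet resolves “yes”, and otherwise refunds the stake. The refund-on-no-disclosure rule, the odds, and the function name are assumptions about how such a market could be wired, not a quote from either speaker.

```python
# Sketch of a main bet conditioned on a disclosure sub-bet.
# The refund-on-no-disclosure rule and the fixed odds are illustrative assumptions.

def settle(main_stake, bet_on_yes, main_outcome, disclosure_made, odds=2.0):
    """Return the bettor's payout for a conditional proposition.

    If the disclosure sub-bet resolves 'no', the main bet is void and the
    stake is refunded. Otherwise the main bet pays at the given odds.
    """
    if not disclosure_made:
        return main_stake                     # void: money back
    won = (bet_on_yes == main_outcome)
    return main_stake * odds if won else 0.0

print(settle(100, bet_on_yes=True, main_outcome=True, disclosure_made=False))  # 100 (refund)
print(settle(100, bet_on_yes=True, main_outcome=True, disclosure_made=True))   # 200.0 (win)
print(settle(100, bet_on_yes=True, main_outcome=False, disclosure_made=True))  # 0.0 (loss)
```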
Jim: You could see some engineering around this,
Kristian: Yes.
Jim: But of course there are limits to what you can measure with prediction markets: they need to have a specific callable proposition and you have to have the data. So that’s class of problem number one. Now here’s the second problem. This might be bigger, or maybe it’s not a problem, which is, you conceive of the idea of an aggregate score. Now, lots of people are going to have different opinions on how to weight things differently. OpenAI is selling their face identification software to China. How many demerits is that versus a one-in-a-hundred chance each year of creating the paperclip maximizer? Well, how do you trade those two things off? And you can’t use a prediction market to trade off multiple prediction markets. Somebody has to say, “Here’s Jim’s aggregate score and here’s Kristian’s aggregate score.” It’s not obvious to me at all how you would use a market mechanism to get any kind of convergence on those aggregate scores for players.
Kristian: Yeah, so there are various ways of doing that. One way that I propose in the book is that you can do it through cardinal or ordinal utilities. So one way you could do it, for instance, is that you would ask market participants, what do you intrinsically value? And then say, “Okay, this is how much I value a happy life compared to carbon emissions or compared to something else.” And then you could extract more or less an aggregate utility function for the market as a whole from that. Obviously you’re going to run into a bunch of paradoxes in relation to that. It’s well known in social choice theory, everything from Arrow’s impossibility theorem to the liberal paradox. And you might have contradicting values and so on.
Jim: Oh, you will. I guarantee it.
Kristian: Oh, yeah, 100% you will.
Jim: I know people who research this; humans are not consistent in their values.
Kristian: Yeah, exactly. There is not going to be a perfect way of doing this. Another way could be, for instance, for market participants to say, “Do I value the outcome of this bet more than that bet?” So it would essentially be just a preference ordering of bets. And from that preference ordering, utility weights can be derived, and something similar is already done in health economics, for instance. So we have quality-adjusted life years and disability-adjusted life years. And those utility weights are derived by asking people, okay, how many years of full health would you give up in order to cure your blindness? Or something like that. And then you just have all of these comparisons of health outcomes and let people order them. And that way you can derive utility weights. So this would be something similar, but at a more massive scale.
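A minimal sketch of how ordinal comparisons could be turned into rough utility weights follows; the win-counting method (a simple Borda-style tally) is just one illustrative choice among many, and the outcomes and preference data are made up, not taken from the book.

```python
from collections import Counter

# Sketch: derive rough utility weights from pairwise preference judgments.
# Each tuple is (preferred outcome, less-preferred outcome); data is illustrative.
pairwise_preferences = [
    ("avoid_oil_spill", "higher_quarterly_profit"),
    ("avoid_oil_spill", "faster_model_release"),
    ("lower_emissions", "higher_quarterly_profit"),
    ("avoid_oil_spill", "lower_emissions"),
    ("lower_emissions", "faster_model_release"),
]

# Count how often each outcome wins a comparison, then normalize.
# Note: outcomes that never win get no weight at all under this simple scheme.
wins = Counter(preferred for preferred, _ in pairwise_preferences)
total = sum(wins.values())
weights = {outcome: count / total for outcome, count in wins.items()}

print(weights)
# -> {'avoid_oil_spill': 0.6, 'lower_emissions': 0.4}
```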
So mathematically you can do it, but as you point out, there will definitely be contradictions. So I think this is a scenario where it would be good to have multiple competing markets. And I think a good sign that the ecosystem is somewhat healthy is if there is convergence, actually, if the markets converge on what the reputational score for a particular entity is, and there is some level of agreement on that, to a similar extent that bookies within sports betting tend to agree on whether Arsenal or Manchester United is going to win a game. They tend to agree on the probabilities there.
Jim: I’m going to point out a fallacy there though, the reason they agree is because of the opportunity to arbitrage. You would not have the opportunity to arbitrage if people had different indices. In other words, if I blended the factors differently, there’s no way to arbitrage that. So the reason bookies work is that if these two bookies differ, I just do offsetting bets and get a free win.
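To make the arbitrage point concrete, here is a small worked sketch with made-up odds: if two bookies post decimal odds whose implied probabilities sum to less than 1, splitting a stake across them guarantees a profit whichever team wins, which is exactly the force that pushes bookies' probabilities together and which has no analogue when two indices simply weight factors differently.

```python
# Sketch of two-way arbitrage across bookies offering different decimal odds.
# The odds are illustrative; the point is that 1/2.30 + 1/2.10 < 1.

odds_bookie_a_team1 = 2.30   # bookie A's odds on team 1 winning
odds_bookie_b_team2 = 2.10   # bookie B's odds on team 2 winning

implied_total = 1 / odds_bookie_a_team1 + 1 / odds_bookie_b_team2
print(round(implied_total, 3))  # 0.911 < 1, so an arbitrage exists

bankroll = 100.0
stake_team1 = bankroll * (1 / odds_bookie_a_team1) / implied_total
stake_team2 = bankroll * (1 / odds_bookie_b_team2) / implied_total

# Either way the match goes, the payout exceeds the 100 staked:
print(round(stake_team1 * odds_bookie_a_team1, 2))  # ~109.77
print(round(stake_team2 * odds_bookie_b_team2, 2))  # ~109.77
```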
Kristian: No, that’s true. And I bring that up in the book as well, that empirical statements are different from normative and ethical statements. If I have the statement, “the Eiffel Tower is this tall,” I can resolve it by looking inside a book if I trust the encyclopedia, but if I don’t trust it, I can just go there and measure it through some easy trigonometry. But it’s not entirely evident how you can do the same thing when it comes to people’s overall values. So that’s where you would, to some degree, have to trust the wisdom of the crowds in a way.
Jim: Okay, let’s imagine we managed to solve all these engineering problems and we get some useful signals. Now, give us a relatively brief chat on how different players would use those systems in ways to produce an emergent effect towards the good.
Kristian: So I think the key question that we need to answer is, why would someone care about their reputational rating to begin with at all? So I’m telling a story here where, let’s take the example of OpenAI again, and maybe people think that I’m shitting too much on OpenAI, but I don’t think enough people are shitting on them. But that’s another conversation for another time. But the only way it can work is if, let’s say, Nvidia cares about the reputational rating of OpenAI. But why would they care about their reputational rating? Well, maybe their reputational rating gets a hit if they don’t sanction OpenAI. But then why would Nvidia care about… So it becomes this infinite nesting doll situation where OpenAI only cares if Nvidia cares, Nvidia only cares if TSMC cares, and TSMC only cares if ASML, who creates the lithography machines in their factories, cares, et cetera.
So we sort of have this network thing. And at some point there needs to be one of these governance nodes that genuinely cares, and they inject the caring into the system, if you get what I mean. How does that work?
Jim: Where does that come in? Is it the consumer? If the consumer cared at some part of the value chain, the end consumer, that could work. But that’s going to be a hard part, finding the anchor. Otherwise, there’s nothing to pull things through.
Kristian: Exactly. So it could be consumers, it could also be pension funds for instance. We have sort of seen a somewhat successful movement for pension funds to partially redefine their fiduciary duties, that if I hold all of your pension money for you to have your well-deserved pension in let’s say 30 years, then I better make sure there is a world there in 30 years, for instance. So pension funds are responsible for half of the investment flows in the entire world, like pension funds and sovereign wealth funds. So I think they could inject and become sort of this anchor in order for the system to work. And that’s what they have somewhat done already with ESG metrics. The reason why investors and companies care about optimizing their ESG metrics is because the pension funds care, and then they need to care and then their suppliers need to care and the suppliers of the suppliers.
So these ESG metrics have grown to become this multi-billion dollar industry. And it all started with the pension funds. But as I told you earlier, most of these ESG metrics are total bullshit. You can hack them so incredibly easily because you have this centralized authority. It’s like MSCI or Sustainalytics; these ESG data vendors have different names. But they’re all centralized, and they all make up these weightings somewhat arbitrarily. But nonetheless, I think you could leverage pension funds in the same way for reputational markets and hopefully make the thing spread. And at some point, hopefully, we would get to a stable equilibrium where it would more or less be the default that if I don’t care about my score, I know for a fact that I will be sanctioned. The same way people know by default that if you have killed someone or if you are a rapist or if you’re doing horrible things, people are not going to treat you very nicely. That’s just the default.
Jim: Yeah. If you can get that anchor, then the chain of causality can work. So let’s deem that that works. All right, so this is actually quite interesting and, to my mind, a somewhat original contribution. So GameB people out there, make sure you read his section about the distributed reputation markets, because it got me thinking, I’ve got to say. So now our last thing before we wrap: your last chapter in the book is what can actual people do tomorrow afternoon, or better still tomorrow morning?
Kristian: So I think everyone can do something. I think the main thing is what I call Darwinian demon literacy. I think we need to understand that a lot of the issues that we have are systemic in nature. So if we just blame individuals and have this bad apples narrative, we are not going to enact system change. So I would say learn this literacy, learn about Moloch and multipolar traps and GameB and all of these concepts, because it will really give you a new lens through which to look at the world. A lens where you think in terms of how do you actually change the game. So I think that’s one really important thing. And once you have that lens, you can do a number of different things. So you can, for instance, vote for candidates that don’t play Game A, which is to vilify your political opponent and be polarizing, and who instead talk about systems and how we change the systems and incentives and so on.
So vote with your vote, but you can also vote with your money at the end of the day. And that’s how you place your money in your pension funds. It is the goods and services that you buy. Obviously it doesn’t make the biggest of differences, but it makes a tiny difference on the margin. But then I think it’s about not just system change; it’s also about discovering the multipolar traps in your local environment. It might be in your workplace, where you have a totally messed up performance and growth management system where [inaudible 02:01:33] are getting promoted. So then try and change that system fundamentally. And I have this principle that I call the principle of value alignment, which is essentially that you need to define: what is it that we actually value here? And how do we measure the things that we actually value, and how do we incentivize people to take action in a way where those values are being fulfilled?
So it’s more or less what I describe as almost like a three step algorithm in your everyday life to sort of think about, okay, are there Darwinian demons here in my workplace or even in my family or even within myself that I can somehow slay? And obviously people have different capacity to enact change on the global level. I think everyone can make a difference locally in their workplace and in their communities. And on the global level if you have a lot of money, if you have political influence, there is obviously a ton more things that you can do. The money that you spend will sort of matter more in general. And if you have a lot of people that listen to you and a platform, then that will matter even more to some extent.
Jim: Yeah, like for instance, I regularly tell people, if you have kids, don’t let them get on TikTok, because it’s worse than cigarettes.
Kristian: Yes, exactly.
Jim: Alrighty. Well, I want to thank Kristian Rönn for a very interesting conversation. This has really been a lot of fun. And the book is The Darwinian Trap, and it has the Jim Rutt recommendation: if you’re interested in this stuff, the book is well worth reading. And it’s not too hard, surprisingly. Despite the difficulty of some of these ideas, he does a wonderful job of explicating them.
Kristian: Oh, thanks a lot for having me, Jim. This has been a treat.
Jim: It’s been a lot of fun.