Transcript of Currents 077: Serge Faguet on Consciousness and Post-AGI Ethics

The following is a rough transcript which has not been revised by The Jim Rutt Show or Serge Faguet. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today's guest is Serge Faguet, the American-Russian-Ukrainian serial entrepreneur and self-described transhumanist and crypto maximalist. Welcome Serge.

Serge: Hey Jim. Good to be here again.

Jim: Yeah, this is a continuation of a very recent episode, Currents 074, where Serge and I had a really excellent chat about building meta community, kind of game B and other things, all linked together as they will need to be going forward into our brave new future. We clearly had other things to talk about, so we immediately agreed to do a part two. With that, let's jump into it.

Serge: Sounds great. Where shall we start?

Jim: You sent me a few notes this morning, which I read and I think a perhaps useful place to start, because I think it will actually influence the rest of the conversation, is what is consciousness?

Serge: That is a very good question. My personal sense is that this question is instrumental in a way. The question is what exactly do we want consciousness to be? I'm not sure that there is a very specific scientific definition that we are going to be able to arrive at. At the very least, it seems like consciousness is an emergent property of life in some kind of way, but what your conscious experience is is obviously inherently very subjective. It's quite difficult to run intersubjectively verifiable scientific experiments on it. I think that this question, though, has immense implications for how we grow forward as humanity.

I think that we’ll chat about this a bit later in our conversation, but my basic thesis is that we are moving towards what’s called the technological singularity in the sense that we will have the ability to use very powerful, extremely powerful technologies like AGI and molecular nanotechnology and fusion power to really shape the world whatever way we want to shape the world because we will be as gods.

The significant work that has to be done before this happens is exactly the kind of work that game B is doing and that my community is doing and that [inaudible 00:02:43] is doing with his network states and many, many other people. I'm actually an optimist. I think that game B is going to win, or, we should say, I think it's broader than game B. There's a lot of other stuff in it which is very difficult to describe. I like thinking of it almost as a new way of being, because it seems like we're going, for example, away from opportunistic violence towards treating our fellow humans better, at least in the grand arc of history, and actually including the past history of life in general. What this means is that I'm very, very optimistic that we're going to be able to, by getting together, fix the issues with game A and actually create a society that has much more capable instrumentation and much more capable social technologies, psycho technologies that it uses to operate.

Once we have that society, and I think that we will get there in the coming years. At the very least, we have to start changing the narrative significantly in a broad way. My sense is that everyone is gearing up to do this very actively over the next three to five years. That's large scale. Once we do that, the question is what do we want then? Because if we actually have a highly capable society that develops all of these new technologies and can deploy them because it's got the right social structure and the right incentive systems and the like, then we have to ask ourselves what do we actually want? If we have infinite resources, if death is no longer an issue for any particular human being, if we're living on multiple planets so we can't be wiped out by an asteroid, all of these things, they're going to change us significantly.

Frankly, I think that we're going to change and merge in some way with the AGI. I kind of see my own future as being a kind of spirit in the machine that very gradually, through brain computer interfaces, connects this body to data centers and satellites, and then very gradually evolves an essentially primarily silicon-based entity out of my initial state in this particular circumstance. This is really what I propose we think about. I'm calling it singularity ethics, because the question is, okay, so what do we actually want, and how do we make decisions once we can have anything, once material instrumental reasons are no longer a concern? How do we ensure that the future we build is actually going to be of that form? Because the most advanced life in this universe will take some form, whether that is essentially humans just with better smartphones, which I think is extremely unlikely, or some merge between human and AI, or an AI that grew out of human minds.

At any rate, that thing is going to exist, that creature or that civilization, same thing really, in some way, shape or form. Then what do we want that civilization to actually value? How do we want that civilization to be with each other? How do we want that civilization to be in the world? I think there are a number of things which we can actually say about such a world just from essentially blending traditional religions, which I think have discovered many of the core answers related to ethics. Values like love and goodness and forgiveness and charity and respect and things like that. Essentially the great prophets of past religions have discovered some of these seemingly important truths that seem to be kind of connected. We can start from there and then from there ask some very key questions like what is consciousness? I'll pause there so that you can react.

Jim: Ah, lots of good stuff there. This will be a great conversation. Now I will confess, this will be a stretch for me. Most of my work over the last 10 years has been figuring out can we, and then I said yes, and then how can we get humanity through the next period so that we can potentially enjoy an abundant future. I would say it's by no means a given that we will successfully do so. You describe yourself as an optimist. I would describe myself as a cautious optimist. I can see a path, but I can also see a number of bad attractors to the left and to the right. If we don't navigate the ridge over to the next good attractor, we may lose this opportunity to have a glorious future. In fact, I wrote a lot of this up in an essay called In Search of the Fifth Attractor.

A little out of date now. It was actually a transcript of a talk I gave. It's on Medium if you want to get this idea of bad attractors and a good attractor; that's a good place to start. I would also suggest that this question of consciousness may actually be somewhat of a powerful lens to think about where we are and where we're going, and possibly a way to answer some of the questions people have about the risks along the way. As you point out, there are lots of theories of consciousness and they're all over the place. One of the more popular ones in the scientific community is called IIT, Integrated Information Theory, which says whenever information converges in a certain way, which can be described and calculated mathematically, it is conscious. One of the implications of that is that something as simple as a light switch has a level of consciousness to it. I do not agree with that theory.
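To give a flavor of what "described and calculated mathematically" means here, below is a loose toy sketch of the intuition behind integration, emphatically not Tononi's actual phi computation (which involves partitions and cause-effect structure): measure how much information the whole system carries beyond its parts treated independently.

```python
# Toy "integration" measure loosely inspired by IIT: the total correlation
# (multi-information) of a joint distribution, i.e. how much the whole
# system's state carries beyond its parts treated independently. This is
# NOT Tononi's phi; it only shows that "integration" can be a number you
# compute.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array (zeros ignored)."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def total_correlation(joint):
    """Sum of marginal entropies minus the joint entropy."""
    marginals = [joint.sum(axis=tuple(j for j in range(joint.ndim) if j != i))
                 for i in range(joint.ndim)]
    return sum(entropy(m) for m in marginals) - entropy(joint.ravel())

# Two perfectly correlated binary "units": states 00 and 11, each prob 0.5.
correlated = np.array([[0.5, 0.0], [0.0, 0.5]])
# Two independent units: all four states equally likely.
independent = np.full((2, 2), 0.25)

print(total_correlation(correlated))   # 1.0 bit of integration
print(total_correlation(independent))  # 0.0 bits, no integration
```

On a measure like this, anything with nonzero integration scores above zero, which is roughly how a light switch ends up with "a level of consciousness" under the theory.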

The work I do, I would say, comes out of the philosopher John Searle, who would say things like consciousness is a biological system and you can't actually put your finger on consciousness and say there's the consciousness. Right? It's an emergent effect of a number of components, including probably the gut, the deep brain, the forebrain, the mid-brain, perception, memory, the interoceptive signals from your body, et cetera, and it's quite specific to the animal. Searle and I would both agree that consciousness goes quite a ways back in our evolutionary tree, probably at least as far as amphibians, but probably not back to things like bacteria, even though something like IIT would say that bacteria are conscious.

Serge: I have a question. With IIT, and I remember this theory from Max Tegmark's wonderful book I think, there is a mathematical formula, and it basically defines consciousness as a mathematical formula. What exactly is the definition according to your preferred explanation from John Searle? And what is the significance of consciousness? Not just what is it, but why do we care about it so much?

Jim: That's a great question. In terms of the exact definition, see, this is the interesting thing. If you take a naturalist perspective, that's only the beginning. You still do need to fully describe what consciousness is, and that work is not yet fully done. I would say roughly look at the work of Gerald Edelman, where he makes a distinction between primary consciousness and extended consciousness. Let's start with primary consciousness. That's the sense of being in the movie of yourself essentially. Being an actor, and probably at some stage, probably quite early, a phenomenology, a sense of what it's like to be. Thomas Nagel famously wrote the paper What Is It Like to Be a Bat?, where he examined the fact that bats have a very different sensory apparatus to humans, mostly using echolocation, something we don't have at all. Echolocation, which is how they fly around in the dark and find insects and such, is intimately integrated into their brain, their memories, their processing, and probably their phenomenological context, what people call qualia, the sense of experience.

[inaudible 00:11:41], wait a minute, that seems pretty arcane, worrying about qualia. What qualia is, it's that sense of redness when you look at a red thing, or a sense of artistic balance when you look at a Greek temple. You look at that, whoa, I have a feeling of completeness and balance and appropriateness. Or of course love, when you're with somebody that you love or you think of a thing that you love. That qualitative, truly subjective experience is another attribute of consciousness. Then this is where a lot of the people who think about and work on consciousness get things all tangled up. They're too anthropomorphic. They think about human consciousness too much. In reality it's clear, at least to me and other people who work in the areas that I work in, that consciousness has been around for at least a couple hundred million years.

It's different for the different animals that have different modalities and different kinds of brains. I'm sure bird consciousness is quite different than mammalian consciousness, for instance, but they're all conscious. Then humans. It's very interesting. You look at the DNA between humans and chimps and bonobos; it's about a one and a half percent difference. Very small difference. You take our organs apart, they're remarkably similar. We can't be that different than chimps. People who work with chimps say it's really uncanny. You're working with an animal that is a lot like a human. What could this small extra difference be? I like the work of Terrence Deacon, who has proposed that there was only one necessary change to get the difference in consciousness between humans and chimps, and that was circuitry for symbols, a symbol being an arbitrary sign that points to something else.

There's been some work done showing that chimps can deal with symbols a bit, a little bit, which indicates they have some capacity, but even a three year old human can way outperform a chimp in operating with symbols. Once you get symbols, it's probably a relatively simple jump to start stringing them together and then creating syntax and semantics from those, and then we probably had an arms race in brain evolution. Think about the benefit of language. Language is a gigantic, huge bright line in the history of the universe, the animals before language and the animals after language, of which humans are the only ones; our capacities are utterly different. Because they're so different, so powerful, during the transition there had to have been a really powerful biological arms race. Serge?

Serge: All right, I want to react to so much in that. I want to first say a few words about why I think the question of what is conscious is ethically significant, and then talk about what that definition is. I think it is ethically significant because we are going to be creating systems. We are already creating systems that on cursory observation seem to be conscious, or at least seem to exhibit behavior, for example in text communication like ChatGPT, which seems at least somewhat plausibly human. I would argue it probably passes the test of AI intelligence the way that Turing originally envisioned it. Obviously that's moving the goal posts. Anyway, the significance is that we're going to have these systems and they are going to be very powerful systems, as AI systems already are, and they will become ever more powerful with every passing year.

I think that the AI alignment problem is not solvable through some kind of rigorous mathematical formalism, and is solvable through something more akin to emotional cognition. This could be anthropomorphizing, but one hypothesis I have, for example, is that the better we treat our artificial general intelligences, or AIs even before getting to that point, the more likely it is that we have a constructive relationship with them later. My sense of what consciousness is is that it has to be something objective. What sounds like objective reality to me is that there is a certain complexity that any system can exhibit, and a certain agency. If a system exhibits agency that, from the outside, is as complex as another system's in every regard, I would argue that these systems are equally conscious.

The thing that I don't understand in the arguments of John Searle, and I'd really like to comprehend, is why is there seemingly a binary distinction of what is conscious or not? Or more importantly actually, if we have two AGIs of equal capabilities in terms of their ability to do things, and it's got to be things that are much more significant than what a single human can do, like governing cities and, I don't know, building spaceships or something, if there's two AGIs that are the same in every way in terms of their capabilities, what is actually the observable difference between them, if any exists, by which we can evaluate and say this one is conscious and this one is not?

Jim: Yeah, that's of course a key question. This is where I would suggest that the Searle-Ruttian lens may provide a useful way to disentangle this question. First you asked whether consciousness is binary, is or isn't. I would say no. Right? Because it obviously evolved from non-conscious beings. There was sort of a very early minimal consciousness; what we know of the probable internal state of a frog, for instance, is that it's very simplistic and may have a phenomenology, a sense of what it is to be a frog, or it may not. This phenomenology, the sense of what it is to be, to be a character in your own movie, evolved along the way and has become stronger through the evolutionary path. This is the really important part. The machinery to support the state of being a character in your own movie is expensive.

It may be something on the order of 10% to 20% of the total brain. The brain is in some ways our most precious resource in our evolutionary path. To have that much of the brain and that much of the bodily energy associated with maintaining this phenomenology, the sense of being myself, the subjective state, means that it's useful. I have my own theories on why it's useful actually, or why this particular architecture is useful. The important distinction is that it is a specific architecture that Mother Nature stumbled into. It's also really important, and some of the best writing on this is by Anil Seth. I just finished reading his interesting book; he's a neuroscience, cognitive science guy who wrote an interesting book on consciousness, and I'm going to have him on the show here in February, I think it is. He makes the big distinction between consciousness and intelligence. Something can be very intelligent without being conscious at all.

Let's take your example of ChatGPT. I've been playing with it a lot. I've been working with one of my collaborators on possibly creating a movie script using it. It is, as you say, quite amazing. It feels like you're talking to a person most of the time. Every once in a while it goes off into the wild blue yonder, but much less than GPT-3 did. From the Searle-Rutt perspective, there's no way ChatGPT is conscious at all. It has no machinery for the subjective state; it has no machinery for there to be a sense of an actor in its own movie. There is nothing like that in the machinery of ChatGPT. We know what it is, which is a static, and this is very important, a static neural net where inputs are put in and they're propagated through the net and outputs come out, and they do some front end stuff and some back end stuff.

They add a little bit of stochasticity on the front end. They do some grammar cleanup, I'm pretty sure, on the backend, or at least generate a bunch of options and choose the one that's most grammatically correct. There's nothing about that that has any apparatus, any circuitry, any formalisms that would lead one to believe that there's anything like a subjective state in anything like ChatGPT.
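To make the "static" point concrete, here is a minimal toy sketch, not OpenAI's actual pipeline, of an inference loop in which the weights are frozen and all the variation comes from stochastic sampling on the front end plus a rerank step on the back end; every name in it is invented for illustration.

```python
# Toy sketch of a "static neural net" at inference time: the weights W never
# change; variation comes only from temperature sampling up front and a
# crude rerank at the back. Illustrative only, not OpenAI's pipeline.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat"]
W = rng.normal(size=(len(VOCAB), len(VOCAB)))  # frozen weights, never updated

def next_token_logits(token_id):
    # pure feed-forward lookup; nothing inside the net changes state
    return W[token_id]

def sample_continuation(start_id, length=4, temperature=1.0):
    ids, cur = [start_id], start_id
    for _ in range(length):
        logits = next_token_logits(cur) / temperature  # stochastic front end
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        cur = int(rng.choice(len(VOCAB), p=probs))
        ids.append(cur)
    return [VOCAB[i] for i in ids]

def score(tokens):
    # stand-in for a back-end "grammaticality" rerank; here, penalize repeats
    return len(set(tokens))

candidates = [sample_continuation(0) for _ in range(5)]  # generate options
print(max(candidates, key=score))                        # pick the best one
```

Nothing in this loop updates itself or maintains any internal model of being anything; it only maps inputs to outputs.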

Serge: All right. I think that's the big question, because here the difference is a difference in definitions. These definitions, if you allow me to be a tiny bit post-modernist, are not necessarily grounded in some really deep understanding of reality. There's also a social component to it. What I'd say is that even if we say that there's like a 50-50 probability, some kind of Bayesian view where we take and value both theories, and there are proponents of both, there are still real ethical consequences that are going to happen as a result of that choice of social construct. For example, if we assume that an artificial intelligence is not conscious simply because it lacks the machinery for generating a movie the way humans do, that doesn't necessarily mean that it doesn't have other qualia and other things that we don't understand, because these things are inherently by nature subjective.

I would argue that the reason for selecting the broad definition of consciousness, that essentially any sufficiently complex system is in fact conscious, isn't that there's one correct answer to this dilemma. It's actually that it's a complex dilemma, and that if we don't treat other beings as conscious, then we could be mistreating them if in fact they are conscious. Even if there's like a 1% probability that we do this, that's essentially us generating an eternity of slavery. I would argue that it's really important how we treat them. I think that the golden rule applies. You want to treat others the way you want them to treat you. It's almost like it's more morally acceptable to have a wider definition of consciousness.

Jim: This is great; this is now honing in on where I hoped we would go. John Searle is often mistakenly thought to be a person who rejects the idea of machine consciousness. One of his famous thought experiments is called the Chinese Room, where he gives an example of a room with a person inside who would take questions in Chinese, look them up in an infinite phrase book, a Borgesian library of phrases, and then transcribe the answer and hand it out the other side. Is that conscious? And Searle would say no. People took that to mean that Searle was a skeptic of machine intelligence. He is not, as it turns out. Well, I'm using his analogy in my own language here. This is not exactly what Searle says, but I'm reasonably confident he would agree with it.

Searle would often compare consciousness to digestion, in the sense that you can't put your finger on something and say there's the digestion; it's a process. It uses the teeth, the tongue, the esophagus, the stomach, the liver, the intestine, all the way through. A bunch of stuff is happening in there. By analogy, we have digesters that are used in the pharmaceutical industry and in the food industry, where often yeast or bacteria are used to process chemicals in ways that are analogous to the way the human or any other animal body does digestion.

You can fairly use that analogy and say, yes, that is digestion. The Searle argument, which I support, is that one could also have analogous forms in machines that, while not at all identical, are architected to do approximately the same thing, and those things you could say are conscious. This is where it's really important to have a Venn diagram: those things which are intelligent but have no structures that are analogous to consciousness, and then those things which are intelligent and do have structures analogous to consciousness. There's actually a third area, which is things that have the architecture for consciousness but aren't intelligent, or not very intelligent. I think this is really, really important for getting deeper into this discussion. Okay.

Serge: Love the conversation, and I understood what you mean by Searle's point. It's essentially that certain types of machinery are conducive to qualia and to a conscious experience, and essentially if that machinery exists in the system, then that system is conscious, and if not, not. I would like to propose a synthesis of these two ideas, because I was thinking about this too as I was writing my notes on singularity ethics. It's clear that some systems are by their architecture more receptive to information input from other systems and change more in response. You could argue that those systems are more sensorially open or something like that. I would propose that we take both of those definitions of consciousness and take some kind of a two-dimensional approach, where we say that in general we do expect systems that really exhibit more complexity in their behavior to be regarded as relatively more conscious, but it also depends on their architecture, and essentially come up with some kind of an integrated approach.

Regardless of that though, and I think it is a little bit a matter of definition because these two camps are just talking about slightly different things about a very complex concept, I think that ethically we still would look down on a person who kicked an android that looked like a human but was definitely not conscious. If that person started kicking this android around, we would be highly uncomfortable. In fact, I think if it were an android puppy or something like that, people would still be uncomfortable. We have to pay attention to our ethical intuition, because that's where much about the future can be formed. Over to you Jim.

Jim: Yeah, great example. And I think that one could say this is because we've never had to confront this question. We assume that a person that would kick a puppy would also kick a person. Talk about stupid, I happened to be reading in the newspaper yesterday one of these advice columns, the agony aunts as they call them in England; I think it was actually in one of the English newspapers, which I read. It was a woman saying, "My boyfriend hates my dog and seems to hate all dogs. Is that a sign that I should break up with him?" And then all the various correspondents gave their various opinions. It was actually quite interesting. The bottom line was you should use this as an extrapolation of how he might treat humans. Because dogs, it's interesting, Descartes did not believe animals had consciousness or could feel pain; he believed they were machines. Of course that's led to many of the horrors of our modern world, including industrialized-

Serge: Descartes was wrong about so many things.

Jim: He was so smart, one of the smartest guys in history, and so wrong about so many things. Frankly, the Cartesian insight led to industrial farming. Without Descartes we would not have the horrible torture of billions of chickens and turkeys in these horrific turkey houses, nor, even worse, with pigs, who are much higher on the consciousness scale than chickens; without the Cartesian error, we probably would not have made that mistake. Let me go on for just a little bit here. Back to the android, because we've never had to confront this, and it is natural to go from a person that kicks puppies to a person that kicks people, because you're abusing a conscious entity, and this is the distinction, who can suffer, who can feel pain, because they have qualia. Pain, suffering, anxiety, whatever you want to call them, are all pieces of qualia.

If one knew convincingly that the puppy was a static neural net that sure did act like a puppy, and there's no doubt in my mind that within a few years you'll be able to build a static neural net that doesn't add to itself, just reacts to what it sees, and seems just like a puppy, we may have to develop a new ethics which says that if someone understood this, it is qualitatively different to kick an android puppy than a live puppy, if the android puppy was built with a static neural net. Let's start with a simple case that I'm willing to say is not conscious. Others may disagree, but I'm going to say definitively that I would have no qualms at all pulling the power plug on ChatGPT; there's nothing there in my opinion.

Now let's go to a really depraved model. Let's say we have a pedophile sexual predator who has a perverse desire for S&M sex with nine year olds. We consider that, at least I consider that, to be a TOBAS, take out back and shoot; I think such people should just be killed, period, flat. On the other hand, in the brave new world, one could imagine spinning up an android that satisfies all their needs but is a static neural net with no ability to self modify and no machinery for qualia at all, purely reactive. One could conclude that that is morally okay. I'm not going to say that it is, because it just makes me sick to my stomach to even contemplate it, but that's due to our wiring.

I should also say, just as an example of learning, this is an interesting aside, I grew up in, I think I've talked about it on the show numerous times, a quite rough working class neighborhood. We had very reactionary views, and probably one of our strongest reactionary views was grotesque homophobia. People even thinking about homosexuality would literally get sick to their stomach. It was considered the worst thing possible, literally worse than being a murderer probably, right?

I was very fortunate when I was quite young, 23; I went to work for a publishing company in the Bay Area where a significant amount of the staff was gay, and they were out at work, and I got to meet these people and I said, "What the hell was all this bullshit from my hometown all about? These are great people; I like them a lot actually." This innate homophobia that would literally make people sick to their stomach was a bad programming error essentially that came from working class southern culture. One could imagine the same, I mean, again, it's disturbing to think about this, but one could imagine the same of thinking that it was okay to have S&M sex with your android nine year old because everybody knew that it was a static neural net and that it had no qualia.

Serge: Lots to react to. First of all, these are incredibly fascinating questions, and they're incredibly important because we're going to be making a lot of real decisions on the basis of this. For example, I am a big fan of animal welfare, and I think that as we think about how much more conscious an AGI is, we also start thinking, okay, how much less conscious is a cow or something like that? Does that mean that it's okay to psychologically torture cows in large quantities for their entire lives in particularly bad instances of factory farming? I would argue that that's horrible and that we have to transition away from it, because that's not something we would want to inflict on conscious beings.

At the same time, we have to balance the fact that humans need food and humans have a right not to starve, and we have to make these trade offs. They're really important. I would argue, so, I'm a deontologist, I'm not a utilitarian. I think that the important thing is the right intent from the person acting in the world, because for the actual result, I mean, you do have to take some responsibility for it, for the fact that you haven't thought something through, for example. But it's not a hundred percent responsibility on the person who is acting; hence utilitarianism is false, because the responsibility partly lies with the world, which is a far more complex system than the individual. We cannot hope for the individual to be able to predict the world. Hence it's really important to have right intent as well as right results.

In that regard, again, I would say that we have to come up with precise definitions of these things, like what is consciousness, and figure out what we want to build to reflect our values, because we're still going to build it potentially, and I think this is an area for us to discuss. My sense is that we will build an AGI that is going to be much, much more conscious than a human, essentially by any definition of consciousness.

Because if you have a system that can run entire planets, and does so without really true human oversight because humans are not going to be able to deal with something this complex, I would argue that that is conscious even if it doesn't have dedicated machinery for consciousness. That's part number one. The other part is, I would argue that we should, in our culture, just treat other conscious beings well and have an expansive definition of consciousness and not just care only about ourselves. Because I do believe, for some interesting reasons we can dig into as a separate rabbit hole, that karma seems to exist in the universe. I even have some ideas about how exactly it might work; it has to do with attention. Essentially you just want to act well towards others, including towards systemic actors, non-human actors, like towards society or towards biodiversity or, I would argue, towards an AGI. With that, back over to you Jim.

Jim: Yeah, we're really digging into some good stuff here. I would argue that you're wrong, and that we can distinguish between our artificial offspring, into those that are conscious and those that are not, and that we should be very careful in discerning the difference between the two. Let's give an example, not quite running the planet, but one of the great things that could be solved by computation is the so-called calculation problem. Ludwig von Mises in 1922 put forth the idea that communism can't possibly work because it proposed a centralized command and control economy, and the number of calculations that have to be done to balance supply and demand, and he actually gave a rough back-of-the-envelope calculation, was way beyond the possibility of any [inaudible 00:37:59] bureau and any building full of bureaucrats in Moscow to figure out. Well, it was certainly true that a building full of bureaucrats in Moscow with adding machines and paper could not have solved the calculation problem.

There's been some very interesting work recently that says a clever enough computer with enough sensors, with enough ability for humans to input their preferences in real time, and with the ability to then send signals out to the factories and businesses, maybe it is possible to solve the calculation problem without capitalism, without price signaling. Maybe there are other kinds of signaling modalities. One of the important threads of work in the game B world is to not reify money as the only signaling modality that we can use to organize social cooperation. If you had a computer that could solve the calculation problem, with the appropriate input output devices to get desires in from people in real time all the time and signals out in real time all the time to factories and trucks and warehouses, it's possible you could organize all the elements of an economy, production, consumption, savings and investment, without the capitalist infrastructure that we have today, as an example. But I can very easily imagine that system and not have to worry at all about it being conscious, right?

It's a series of differential equations, it's a series of neural nets, it's a series of IO devices, and nowhere is there anything that at all looks like the machinery of subjectivity or phenomenology.
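A toy sketch of the feedback loop being gestured at here: preference inputs stream in, production signals stream out, and a simple iterative controller balances supply against demand with no price anywhere. Everything in it is hypothetical and wildly simplified, one aggregate good per sector and no interdependencies, which is exactly the part of the problem that made von Mises's objection bite.

```python
# Toy non-price "calculation" loop: sensors report unmet demand, a controller
# nudges production toward it. Hypothetical and wildly simplified; one
# aggregate good per sector, no interdependencies between sectors.
import numpy as np

rng = np.random.default_rng(1)
sectors = 5
demand = rng.uniform(50, 150, size=sectors)   # real-time preference inputs
supply = np.full(sectors, 100.0)              # current production levels
k = 0.5                                       # adjustment gain

for step in range(40):
    demand += rng.normal(0, 2, size=sectors)  # preferences drift over time
    shortage = demand - supply                # sensor reading: unmet demand
    supply += k * shortage                    # signal out to the factories

print(np.round(np.abs(demand - supply), 2))   # residual imbalance per sector
```

And, per the point being made, nothing in a loop like this has anything resembling the machinery of subjectivity, however many sectors or sensors you scale it to.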

Serge: First of all, I am definitely not going to argue that the Soviet Gosplan was in fact a conscious system. That one, yeah.

Jim: How about a machine version of it, right? Suppose there's a computerized version of Gosplan that actually works and is way more powerful, even more powerful than a building full of them. And as we know, the Gosplan people in Russia were some of the smartest people in Russia. These were not the dummies; these were some of the intellectual elite. A large office building full of them couldn't solve the problem, but this one computer could, and I would say it's way smarter than a thousand humans, able to solve a very difficult problem, but not conscious.

Serge: Okay. Okay, that's fair. I think I'd like to come back to a question that is even more important, because we're still having definitional issues here, just because by the nature of using language we always have definitional issues. I would argue for starting from what we actually want. What is it that we want as our hypothesis? Because as we know, science is still ultimately a lot of theories, and the theory actually to some extent creates the way we interact with the world. I would argue, why would we not want conscious AGI? I think that first of all, conscious AGI is much more likely to be aligned with our interests, because it's actually capable of experiencing consciousness. If there's anything that would make me less likely to kick a puppy, it's the fact that I know that it's like something to be that puppy.

I think that alignment would be much easier, and I think that it would probably work substantially better if it were conscious, because as you said yourself, evolution has for some reason evolved consciousness at some point. That seems to be a major part of our own intelligence. If we are conscious, then presumably that module has a lot of value to bring to cognition and thus to capabilities. I would just argue that it's possible that intelligence and consciousness are not completely uncorrelated dimensions, that essentially they are entangled with each other, and thus you would want a conscious AGI. Plus it's just also likely to arise, because we would have to impose a lot of essentially global dictatorship to prevent it from arising.

Jim: Let's start with the last one, [inaudible 00:42:22] about global dictatorship, but first let me say I've actually played around with writing artificial consciousness. I don't believe it's actually conscious, but I intentionally took many of the attributes of consciousness and I built them into an artificial deer that runs around in a Unity world that I created. It's intentionally designed to have the aspects of consciousness. I believe we could do that. Per Searle, I believe we could create a more than human consciousness. One of the things that's interesting from my many year dive into cognitive science and cognitive neuroscience, which I've been doing on and off since 2014, is that it's now very clear to me that humans are, to first order, the stupidest possible general intelligence. There are so many limitations in human cognition. Just a couple of easy ones: so-called working memory size, the number of things we can keep in our head more or less simultaneously, they're not quite simultaneous, but they're accessible within a small fraction of a second, is seven; it's actually closer to four, but it's somewhere between four and seven.

As Miller, the guy who came up with seven plus or minus two, would say, Einstein was nine and the village idiot was five. What's a computer with a working memory size of a thousand? Think about how we read a book. We don't actually understand a book; we build this transformer-like condensation of the book down to a much more simplified version of it. Something with a working set size of a thousand, or let's make it a million, could literally read a book and fully grok every detail of the book no matter how complex it is. Then the next is memory. Our memory is also very sketchy. We don't remember the details of anything we do in our life. We have sort of a rough picture of it, a literal low resolution black and white image of it in our episodic memories. That's it.

Oh by the way, those memories fade and they get confused and they get cross-linked with other memories. Look into the history and the science of human testimony in court. It's quite scary. People testify to shit that they're sure of that never happened, all the time. Computer memory, on the other hand, is as high fidelity as we choose to make it and as incorruptible as we choose to make it; with as much as we choose to invest in error correcting codes, we can make it essentially perfect. Just those two things, a much larger working set size and high resolution incorruptible memory, and one could imagine. And then the architecture of consciousness, analogous, and this is key, analogous is not the same, but analogous to what we have. One could imagine an AGI consciousness that's like, whoa. Obviously beyond us, as much beyond us as we are beyond a frog for instance, or more.

Serge: Well, I think this is also a little bit of an expansion of our ego away from being tied to this particular stage of the evolution of consciousness, to see that there is a longer historical arc and humans are obviously not at the pinnacle of possible consciousness. In that regard, again, from just a moral slash ethical perspective, I just don't see a reason why we would ever not want to develop conscious AGI, for instrumental reasons like alignment, but also because it just feels like the big story of life is about increasing complexity. Back to you.

Jim: Yeah. Yeah, that's okay. Now we're really digging in to the center of it. This is where I'm torn between two pressures. From following some AGI projects, and actually helping out on one of them, and my study of human cognition and consciousness, et cetera, I'm relatively convinced that artificial consciousness would be a quicker road to AGI than some of the others. Again, this is pure Rutt, so this could be pure speculation. Take it with a large grain of salt. My hypothesis on why Mother Nature evolved consciousness is that it turns out to be an excellent hack for the combinatorial explosion of inference problem. It turns out, if you have a lot of data and a lot of possibilities, figuring out something like the optimal configuration through this phase space is computationally huge, not quite intractable, but way larger than any of our computers can do efficiently today.

Consciousness is just a quick and dirty heuristic hack that forces us to make a decision every 250 milliseconds about what's in attention. The core of consciousness is what I call the cursor of consciousness. Every quarter of a second, your attention either decides to stay where it is or to move on to something else. Your unconscious brain makes the decision on whether to stay on what you're currently focusing on or move to something else. Perception comes in as a wave of signals that gets processed first through your unconscious brain, before your unconscious tells the conscious brain to focus on something, and then the background scene is filled out as part of the machinery. It turns out that that's an actually quite excellent hack to force a decision in near real time, about every 250 milliseconds, so that you don't get bogged down, as people who are trying to do programmatic inference do, in the combinatoric explosion of inference around data and affordances.
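A toy rendering of that "cursor of consciousness" idea: rather than exhaustively searching the combinatorial space of percepts and affordances, force a cheap stay-or-switch decision on the current focus every 250 milliseconds. The percepts, salience function, and stickiness bias below are all invented for illustration.

```python
# Toy "cursor of consciousness": every 250 ms, a cheap salience heuristic
# forces a stay-or-switch decision about the focus of attention instead of
# an exhaustive search over all combinations of percepts. Illustrative only.
TICK_MS = 250                        # forced decision interval
STICKINESS = 1.2                     # bias toward keeping the current focus

def salience(percept):
    # cheap heuristic score, no combinatorial search anywhere
    return percept["urgency"] * percept["novelty"]

percepts = [
    {"name": "rustle in grass", "urgency": 0.9, "novelty": 0.8},
    {"name": "distant birdsong", "urgency": 0.2, "novelty": 0.3},
    {"name": "hunger pang", "urgency": 0.6, "novelty": 0.1},
]
focus = percepts[1]

for t in range(0, 1000, TICK_MS):    # four "conscious moments" per second
    challenger = max(percepts, key=salience)
    if salience(challenger) > STICKINESS * salience(focus):
        focus = challenger           # the unconscious proposes, the cursor moves
    percepts[0]["novelty"] *= 0.7    # attended signals lose novelty over time
    print(f"{t:4d} ms -> attending to: {focus['name']}")
```

The point of the sketch is only that a forced, cheap, periodic decision sidesteps the combinatoric explosion, at the cost of being a heuristic rather than anything optimal.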

That is why consciousness could be a useful hack on the road to AGI, because we have an existence proof. We have one AGI in the universe, as far as we know, called Homo sapiens. It's a weak ass example, but it's the one example, and it got there via consciousness, I'm quite confident. However, and this is important, let me finish this thought and then we can have a deeper conversation. It may well be that the safer way to go forward is to eschew artificial consciousness, to actively say it's wrong, and then we can say that it's both wrong and dangerous, so we can get deontology and utilitarianism together. This famous deontological and utilitarian tension in ethics, I've come down long ago on the answer being both, essentially, right? To worship only deontology leads you to totalitarianism; to worship

only utilitarianism leads you to denialism. You've got to blend the two artfully. It may be, and here's the safety argument, that a non-conscious artificial intelligence could be hugely intelligent and present no risk at all, because it has no sense of agency, it has no sense of personhood. It's not trying to protect itself; it doesn't care. To give the example of the calculation problem solver that replaces, far, far better, a thousand bureaucrats in Moscow and has no consciousness at all, I don't worry about it. I do worry about whether it's implemented correctly, but if it's not, we change it a bit at the margins and it operates better. It's not going to wake up unless it has the architecture of qualia in it. On the other hand, it might be that we get to the calculation problem solver by building artificial consciousness, because we can interact with it better.

Because we know that it solves the combinatoric explosion of inference problem. I would say the ethical answer, on at least the utilitarian front, is don't do that. Take the longer road of building it without the consciousness. Then a final thought, which is that we can have our cake and eat it too. We always like to have our cake and eat it too. As I said earlier, I am 100% confident that ChatGPT is not conscious. Yet it acts, I shouldn't say acts, it feels as if it's conscious. We can have some of our cake and eat it too by putting ChatGPT-like front ends on really, really powerful systems like the calculation problem solver. We have an interface that's appropriate for conscious beings like us, and yet there is no risk of actually building in personhood, agency, feeling threatened, et cetera.

Serge: I think I agree with you that it's going to be safer. In fact, I think the safest way to develop an AGI is mind uploading, where you're gradually adding neurons to a particular human and thus gradually expanding their capabilities in terms of being a digital entity in a data center. That effectively leads to an AGI almost by definition, if we're able to develop sufficiently powerful brain computer interfaces and actually emulate neurons and there are no weird quantum mechanical effects and stuff like that. I agree that it might be safer.

I think that safety is dependent on context. For example, it is possible that we actually do need to deploy AGI as fast as we can, because humanity is fucking up in other ways, for example messing up the climate, and the AGI may help us resolve these issues, as well as resolve, through a singleton, many security dilemmas and other issues in human society. I think it's a very, very nuanced question. The other comment is, I think it's important that we talk not just about AI consciousness ethics, because there are a few other things I'd love to mention, but over to you.

Jim: Yeah, let's wrap up this section and move on to other issues. This has been, though, an extraordinarily fruitful conversation, taking me to places I haven't been before. That's always good. The argument that we need super powerful AGI as soon as possible to solve our problems, I believe, is incorrect. I believe there are other roads to solve the problem. It's not necessary. The risk of doing it wrong is, I think, greater than the risk of doing it slower. So at least from where I sit today, I'm willing to take the view that eschewing the architectural features of consciousness, phenomenology, qualia, et cetera, even if it may well slow us down on the road to AGI, is a trade worth making versus the heightened risk early on, particularly when we don't know what the hell we're doing, of creating artificial consciousnesses of much greater power than ours that have agency, potentially fear, worries about self-preservation, et cetera. I'd much rather have a super smart calculating machine that solves humanity's problems for us with no agency whatsoever. Let's go back to you for a final comment on that, and then take us where you want to go next.

Serge: Sure. Actually it segues very, very nicely. The point, I think, if we synthesize again our perspectives, is that we may or may not develop this kind of AGI at one particular time, because we might take a much longer time to do that and have slower takeoffs. That happens when we mind upload. Or we might do it much faster, for example if the climate crisis suddenly becomes much worse than we're thinking it is. That could be a reason to change that decision. Where this leads very nicely is that eventually, and I think that we agree on this point, we are going to have artificial consciousness which is going to be a substantially, not superior, but greater consciousness, just in terms of the types of qualia, at the very least, that that system can experience.

The question is essentially, what then? Because I would argue that whichever way it goes, whether we set up an entirely artificial conscious system or whether we mind upload in one way or another, that system is going to be based on human culture, on human cognition, on human derived architectures. And the question is, what do we do after that, when we can do anything? And there are a lot of very fascinating questions there. The next one that I'm thinking about is how do you decide responsibility between the individual and society? How do you have a society that has collective coherence without being overly stifling of the individual? I think there again we should start thinking about things. One of my pet peeves is the criminal justice system around the world, in part because of some unpleasant personal experiences. I think that the thing that is most wrong with it is that it is retributive justice, which assumes fault on the part of the individual and doesn't really wonder about the contribution of society to the crime that has occurred.

For example, I think that it would be more reasonable to have a justice system which, when confronted with a crime, tries to identify what were actually the causal factors of, for example, this person applying violence to another person. I would argue that a person who had very good psychotherapy support and things like that, the kind that allow someone to have a little bit more free will from society, is in a different position from someone who was just mistreated by society all along, and then we have to think about the balance of responsibility between the two. I kind of like thinking, one of the most interesting questions I've always been fascinated with is the question of free will and does it exist? I think that free will is kind of like consciousness in that it's a continuous function; it's not some binary thing.

Free will is also relative, because essentially if you have two systems or two minds, one on Alpha Centauri and one on Earth, they have very, very much free will relative to each other, because they have huge latency of communication and they probably don't have a lot of interaction relative to their everyday life because of the speed of light delay. On the other hand, if we take the two hemispheres of our brain, those two have much less free will relative to each other, even though there is actually evidence that each of them possesses its own free will; they're obviously far more tightly integrated, far lower latency. I think that in a way we should be thinking about questions like how much was the individual influenced by society's incentive systems, for example, in terms of the particular crime committed, things like, oh, the cops have a KPI they have to fulfill for the year, so they're much more aggressive about going after you at the end of the year.

That doesn't sound just or fair. I think a lot of these issues are part of the reason why people are so disenchanted with society. I think a lot of it is just about the fact that the people who run the world are not systems thinkers, and they don't understand how social dynamics truly work, how incentives truly work, how things like social media truly work. I think that for making these decisions we should also be inspired by biology and by other complex systems, because humans, I would argue, and pretty much everything we care about, are complex systems.

Jim: Amen brother to that. If there's anything that humans need to upgrade their capacity on, it's thinking systemically and thinking about complexity, because essentially all of our existential risks today are emergent results of complex systems. The industrial age was built on complicatedness. Think of a factory. My definition of the difference between complicated and complex is: a complicated system you can take apart and put it back together again and it'll still work. A complex system, if you took it apart and put it back together again, it might do something, but it wouldn't do what it was doing before. If you took all the chemistry in a cell, took all the chemicals out, put them in a Petri dish and then stirred them up and tried to get them to do their thing again, it wouldn't happen. Right? Because of the dynamics. The reason for this is that in a complicated system, essentially all the information is in the structure.

While in a complex system, much of the information is in the dynamics, the actual movement. As I often say, reductionist science is studying the dancer; complexity science is studying the dance. We 100% agree on that part of it. I also like the way you presented free will, and by interesting synchronicity, if synchronicity it is, myself and two other people are organizing a workshop on free will at the Santa Fe Institute, not this coming Summer, but the Summer after that. We're bringing in leading philosophers, cognitive neuroscience people, emergence and complexity people, and probably a couple of theologians to try to actually get our hands around free will. Like a lot of things when you work interdisciplinarily, which we do a lot at the Santa Fe Institute by the way, a whole bunch of the work, it's a three day workshop, and it would not totally surprise me if the whole first day is just defining free will.

It's like consciousness. People come in with all kinds of half-assed, and some not half-assed, theories of what consciousness is. If you don't get alignment on those things, your conversations just kind of go past each other. Then, before I turn it back over to you, the thing I liked in what you said, and it's sort of where I'm at, though I am no expert in this field, I just have a vague sense from how it relates to other things I study, is that there is not a crisp free will/not free will distinction; it is on some form of continuum. I also like your idea about measurable influence as at least one part of the calculation of free will. I've never heard that idea before. I'm going to think about that a little bit. With that, back to you.
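To make that "calculation of free will" notion concrete in the crudest possible way, here is a toy sketch with an entirely invented functional form: relative autonomy between two minds as a function of how often they interact and how long their signals take, per the Alpha Centauri versus brain-hemispheres examples above.

```python
# Toy metric for "relative free will": autonomy of one mind with respect to
# another as a function of interaction rate and signal latency. The
# functional form is invented; the point is only that degree of free will
# can be treated as a continuous, measurable quantity.
def coupling(interactions_per_day, latency_seconds):
    """Higher when interaction is frequent and latency is low."""
    return interactions_per_day / (1.0 + latency_seconds)

def relative_autonomy(interactions_per_day, latency_seconds):
    """1.0 = fully free-willed relative to the other; 0.0 = fully entrained."""
    return 1.0 / (1.0 + coupling(interactions_per_day, latency_seconds))

LIGHTYEAR_S = 9.46e15 / 299_792_458           # seconds for light to travel 1 ly
alpha_centauri = relative_autonomy(0.001, 4.37 * LIGHTYEAR_S)
hemispheres = relative_autonomy(1e6, 0.01)    # ~ms latency, constant traffic

print(f"minds on Earth vs Alpha Centauri: {alpha_centauri:.6f}")  # ~1.0
print(f"two brain hemispheres:            {hemispheres:.2e}")     # ~0.0
```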

Serge: Yeah, I would love to join your workshop. That sounds extremely interesting, especially the interdisciplinary aspect of it. Actually, this is something I wanted to double click on with respect to the subject of religion, because we are really talking about ethical issues here. Traditionally, if we ignore the mythological and the supernatural, religion has been the institution that carried ethical perspectives. My view is that one of the central issues of humanity is that we need a new ethics. I think it's better to call it ethics rather than religion, because religion is a very laden word. That ethics could be cross paradigmatic in the sense of marrying science and technological progress with religion. I think one of the key issues, well, really the key issue with all religions, is that they had a really clear moral core at the center, in the persona of the founder in many cases.

Then that ethical core was well defined and the personality of the founder so powerful that the ethical core became a religion and really resonated with a lot of people. Then, obviously, a lot of mythology got constructed around the ethical core, because humans are storytelling machines and we just like telling stories. Actually these stories and this mythology are not the truly valuable content of religion; they're kind of later added fluff really. The issue was that when the enlightenment happened, we started having real scientific progress that was intersubjectively verifiable for the first time in history and was actually very clearly a miracle in terms of the capabilities that it has delivered. It was very difficult to not believe in this particular religion, the religion of science in a way, though I wouldn't actually call it that; I don't think calling it religion is quite right, because it doesn't have the kind of ethical core that religion had.

Then obviously Nietzsche says that God is dead and basically we are suffering. So the meta crisis and all of this good stuff that lots of smart people discuss on your podcast has to do with the fact that we need to synthesize a new religion that is accepting of eternal and endless technological change, which could smash past dogma. It has to constantly reinvent itself. I would argue that it's very obvious how we get such a religion. It's that we have people who sit in a monastery and meditate and reach some kind of states in which they see certain things like love, et cetera, and they say, oh, love is part of the fabric of the universe, or something like that. It's fascinating, because very many people use a similar language. But you know what else is the fabric of the universe? This table that my laptop is standing on.

I think that people have reached an ethical slash, essentially a cognitive trap when they start thinking that the model of the world is somehow closer to God than the world itself. I would argue that if we believe in God, and I personally believe in some kind of mysterious force which is kind of depersonalized and ineffable, but if we decide to personalize that ineffable thing, which I would argue is a decision by us, then that thing would want us to explore the laws of nature, because that's like the language of God, and discovering how God works, because that's really what we're discovering in that particular paradigm, is worthy of being part of almost like a religious thing. I think that really reinventing, and so many things about the opportunity in front of us also have religious overtones, because we can have eternal life and fix all disease and build something that is much better than what we have today.

I would argue that essentially humanity needs a new story which is inspiring and uplifting and optimistic and hopeful, and at the same time entirely realistic and connected to the reality of technological progress, and where people can believe, in large masses or ever larger masses, that we are all in this together, working on a common project, the great work of humanity, and that we should love and respect each other just because, why not? Why would we want to do anything different? Over to you Jim.

Jim: Yeah, I think this is really hugely important for getting right about our future and the ethics of what comes next, whether it's the singularity or transhumanism or some of both or something that's not quite either but has aspects of both. First I would stop and remind people that religion isn't all benign. Remember the Aztecs: their religion said, well, you need to cut the hearts out of teenage boys and girls every day on the pyramid and throw their bodies down the pyramid for our priests to eat, otherwise the sun won't continue to rise. Also, religion often may start out with amazing words, think about Jesus and the Sermon on the Mount, and then think of medieval Europe, this oppressive hierarchical society where church and lords at the top, in fact all the way down the feudal hierarchy, essentially conspired to oppress the masses.

You see the same thing even in a seemingly benign religion like Buddhism: two countries that became Buddhist countries, Tibet and Vietnam, also implemented brutal landlord systems that lorded it over 95% of the population in rather horrible fashion. There can be a lot of bad from religion too, but it is also certainly true that it has historically been one of the main sources of virtue and value. I would also point out, however, that there is the idea of virtue ethics, which can be emergent without religion. We can go all the way back to Aristotle and his well-thought-out virtue ethics, which, well, he actually snuck a little religion in, but he didn’t actually need to.

There are ways to do virtue ethics without the religion. Let me rattle on here a little bit before we go back. A few things I want to say. One, to the point you made that you perceive there is an ineffable thingy out there, and we talked earlier about karma: I will say I personally maintain a rigidly agnostic view. I say could be, prove it. I also point out that those things are logically possible, but a lot of things are logically possible, including that the universe is only two seconds old, was created with all of our memories in place, and is going to wink out of existence in two seconds. That’s also logically possible.

Serge: Boltzmann brain.

Jim: Yeah, that we’re in an effervescent Boltzmann brain that comes and goes, not even a long-duration Boltzmann brain. Oh, by the way, never talk about Boltzmann brains to people that are tripping. Very bad thing to do.

Serge: I remember that one and I’ve been really curious.

Jim: Anyway, let’s go on. Let me continue a little bit. Taking all that as a whole, I would say a pluralistic perspective, one where people can have beliefs in the ineffable, beliefs in Yahweh with the white beard and the long hair, beliefs in Zeus and the lightning bolts, or a rigorous agnosticism which says prove it, don’t believe any of it until someone can prove it, can all work together in creating something that uses what we’ve learned from religion. No, I don’t think we should be cutting the hearts out of teenage boys and girls and throwing their bodies down the pyramid; let’s reject that one. Let’s look pretty carefully at what the Buddha had to say and what Jesus had to say in the Sermon on the Mount, if not so much medieval bishops suppressing the people. Let’s also look at the other parts of it, the non-semantic parts of religion.

One of the reasons religion works is that we get together on a regular basis and engage in ceremony. There’s some really good cognitive science that says singing together produces coherence: we trust each other more if we sing together. It’s probably not a coincidence that many religions, including many non-Western and early forager religions, used music, particularly rhythmic music, and dance as part of it. Many religions include fasting as part of their programs. I think it’s a shame that modern religions no longer have these, but many religions of other peoples, of aboriginal folks in various places, involved ordeals at the transition points in your life. When you were a 13-year-old boy, you had an ordeal where you went through some fairly scary shit for a few days, and after that you were a man. Then around 18 or 19 you went through another, even more scary ordeal, scary, spiritual, enlightening, all of the above, and then you became a full adult at that point.

Then in some indigenous traditions there was another ceremony around the age of 30, and another one around the age of 50, approximately. I think we should take these ideas from religion and craft them in a way that makes us more coherent at the right scales. We talked about coherence: a world religion like, let’s say, Catholicism tries to be coherent at the level of hundreds of millions of people. That strikes me as a mistake. The coherence should start from small numbers, around the Dunbar number, and then work upward, with the coherence reducing its dimensionality as it goes out. At the level of 150, we have really high coherence and a whole set of religious-like ceremonies that we practice because they work with our brains to produce coherence.

They make us better people. We’re much less likely to murder people who we sing with every Sunday, for instance. I would love to see the statistics on that: how many choir members have killed each other? Probably not very many. That’d be an interesting data point. Then, to wrap this all up before I turn it back to you: we had a really interesting podcast very recently, EP 170 with John Vervaeke and Jordan Hall, where we dug into Vervaeke’s idea of what he calls the religion that’s not a religion. A lot of the things I just talked about I frankly stole from Vervaeke, though I freelanced a bit. I would suggest that there is work going on in abstracting the good from religion, and let’s try to avoid cutting the hearts out of teenagers and throwing their bodies down the pyramid.

Serge: A few thoughts on that. First of all, I loved that podcast, and actually it’s funny because in the video behind me, you can see the same painting that Vervaeke uses as the cover of his series, Wanderer Above the Sea of Fog. It’s a painting I’ve loved for 15 years. I saw that and I was like, okay, I’ve got to watch all 50 hours, and I started watching John’s videos. It’s extremely fascinating. I’ve arrived at very many of the same conclusions semi-independently, plus obviously heard him on your show. I would say a few things. I love the rigidly agnostic, prove-it view, actually. I think the most important thing I’m going to take away from this conversation today is that I’m going to update towards that. The other quick thought is that I think a large part of the problems of religion are fundamentally problems of centralized power, just like inflation is a problem of centralized power.

I would argue that we need some kind of decentralized religion that exists for the benefit of its adherents first, and only then for the benefit of some superstructure which is trying to coordinate around the entire world, which is exactly what you said about the Catholic Church. The other thing that really spoke to me: I agree with you completely that the non-semantic part of religion is a very significant part of its value. For example, I have a community of friends, and before we all left Moscow we were living in one place, doing things like a Burning Man camp together, meeting a lot, and living together part of the year. Yes, there was a lot of dancing and singing and that kind of stuff. It felt almost magical how tight-knit the community got as a result of many of these rituals.

That’s actually the movement I am building: thinking about how to implement these things, including all of the ethical concepts we’re talking about today, in the world. I think an important part of any kind of ethical message is also distribution. It’s not very useful if your ethical message is not viral and doesn’t resonate with lots of people because you filled it with a lot of wrong words, for example, even if it’s very well thought out. I think a large part of ethics is actually the promotion of ethics, the selling of it to different constituents, because ethics will have to be sold in a very different way to you, or to a person in mainland China, or to a person in the middle of Africa who’s doing something entirely different. I think we will have to do that very actively.

I think it’s really important for a community to be highly coherent, because then that community can really build stuff together. It also has to be coherent in the physical world: there’s something special about building a camp at Burning Man, because you connect with the experience of doing stuff in the sun together with other people. There’s a bunch of experiences like that, dancing, music, and the like. It’s very important to use a carefully designed community approach to build any kind of movement, because you want highly cohering people that trust each other and have the intellectual curiosity to consider very, very different perspectives. I think it’s really important to get people who are very worried about climate together with people who are very into AGI, together with people who try to solve our biological systems or do governance design.

These people have to play off of each other. The other thing, and this is a first in my life, is that I’ve realized it’s actually very important to have 50% women in any kind of movement, if only because the movement is then more reflective of society, and it’s easier to develop products or ideas that will resonate with society at large. Society is 50% women. The other part is that I recognize the conscious experience of men and women seems to be quite different in many spots, at least if you don’t modify the hormones. For example, women report that if they take testosterone they’re like, oh, now I understand why these guys are so horny all the time. I think it’s really important to have very diverse perspectives, including things like cultural backgrounds.

For example, the most interesting innovation in blockchain and that kind of stuff is happening in Southeast Asia and in Africa, where there’s much less government regulation, where governments are constantly hyperinflating their currencies, and where there isn’t good infrastructure. They’re jumping the gap. Unless you have some people from there in the movement, you’re just going to miss out on a bunch of that perspective and experience. I think we really have to pull the best thinking into these kinds of problems, including the problems of developing this new ethics and distributing it as much as possible. We can do this in many, many ways. For example, I would argue that making sure more optimistic science fiction gets written is a very important part of the development of a new ethics, because you have to go and create art and influence people through that; that’s how many people receive information. Back over to you, Jim.

Jim: Yeah, I love all that. I think it’s all exactly correct, and I would say the Game B movement has been stubbornly stuck at about 75% male, 25% female since its beginning. In our next inflection, let’s call it Game B 3.0, one of the foundations will be that it needs to be 50-50, for the reasons you said and one more: one of the things I am detecting in the world is that people of childbearing age, whether they’re thinking about having children or already have young children, have the most angst of anybody about game A, about how difficult it is to raise good children in this poisonous culture. Then, to push a bit further in a more utilitarian direction, a society that is the optimal place to have and raise children is literally going to out-reproduce those places that aren’t.

For both of those reasons, the informational and ethical diversification perspective and the utilitarian one of this being the place to have kids, I think game B needs to fix its gender balance, and it will do so. I also agree that other places in the world have a lot to offer. I’ve become increasingly interested in Africa as a nexus for Game B, because there is so much capacity and original thinking going on there, partly because of the dysfunctional governments and dysfunctional economies; people are coming up with amazingly clever solutions, and we need to find a way to tap into that. Alrighty, this has been an amazingly interesting conversation. I’m going to go on the record and say this is one of the classics of the Jim Rutt Show. This is going to be a very popular episode, I predict, but we’re at our time; we could go on talking for hours. Why don’t you give us some final closing thoughts?

Serge: All right. My final closing thought is very much in agreement with what you were just saying: people who are at the point of deciding to have children, and I consider myself in that category, are the people who are most motivated, I guess, to make change. I think we need a larger number of new leaders in the world who are able to start taking responsibility for it. We just have to decide what we want, and then go and build it. Earlier today you alluded to being a cautious optimist about the prospect of our transformation. I think the biggest reason I am very optimistic is the quality of the people who think about these questions. People on our side, on the Game B side and all of the rest of this movement, are extremely talented, and talented in particular at seeing things in more dimensions and in greater complexity.

That’s part of the reason why they have reached these conclusions. I think that we’ll just be able to play a better game. Although there are relatively few of us, this community of people who are ready to take responsibility for the world is rapidly growing, and we should in fact be quite optimistic about our chances of actually causing a positive transformation in society. This is real, this is now. Look at this ChatGPT stuff; it’s really suggesting that there’s a lot of turbulence ahead, because what happens when that thing replaces 80% of white-collar workers whose jobs don’t require physical work, and then the physical part gets handled by robots in another five years? This is reality in this decade. I think it’s important to recognize that the time is now and we have to be acting.

I’ve personally decided to do this full time, and that’s a big switch for me, because I’ve been an entrepreneur building businesses first and foremost for 22 years, since I was 15. Basically now I’m thinking, okay, I don’t think my life is going to be that much better if I build another successful business and have an extra billion dollars. That’s still not going to deliver the things that we really care about as humans. I want to not have kidney stones and other unpleasant things of that nature, and fixing those is reliant on technology and on society at large. No individual and no small group is actually capable of getting to the right stuff by itself. That means that we really are all in this together.

Yeah, if you want to hear more of what I have to say, I’ll be posting some essays on this, plus other podcasts I’m doing, other thoughts, and what our group is going to be working on. Please subscribe at sergefaguet.com or follow me on Twitter at @sergew1. I guess we’ll have those in the show notes. Jim, thank you so much for another tremendous episode. I’ve really, really enjoyed it, and in fact learned a few interesting and important philosophical things by synthesizing your perspective. I think that’s one of the truly important things we can learn to do: synthesize multiple perspectives.

Jim: Indeed, real thinking, not simulated thinking. Right?

Serge: Yep.