Transcript of EP 306 – Anders Indset on The Singularity Paradox

The following is a rough transcript which has not been revised by The Jim Rutt Show or Anders Indset. Please check with us before using any quotations from this transcript. Thank you.

Jim: Before we get into today’s episode, I have an announcement. As regular listeners know, I am fairly obsessed with the topic of the science of consciousness and what is this thing we call consciousness? I have just accepted a new position as the chairman of the California Institute for Machine Consciousness, led by the great thinker and researcher and at least two-time Jim Rutt Show guest, Joscha Bach. The Institute will integrate insights from philosophy, psychology, neuroscience, the arts, mathematics, and AI into what we hope to be a unified framework breaking down the conceptual barriers that have hindered progress in consciousness research to date. We imagine this institute as a hub where academics, AI industry experts, and independent scholars can conduct practical research in-house.

One of the ways I describe why I’m so interested in the institute and its work around machine consciousness is Richard Feynman’s famous statement: “What I cannot create, I do not understand.” And as regular listeners know, I kind of rave regularly about a lot of the ungrounded horseshit in the domain of consciousness. And to my mind, if we can build one, instrument the hell out of it, see what it does step by step by step at the microsecond level, we’ll actually be able to say something meaningful about what this beast is. So anyway, I’m looking forward to this new adventure in my life. And as it turns out, it’s extremely relevant to today’s podcast.

So let me introduce our guest Anders Indset. Anders is a Norwegian-born philosopher, author, and tech investor. He’s the author of multiple German bestsellers, including “Wild Knowledge,” “The Quantum Economy,” “The Viking Code,” and “Ex Machina.” His philosophy centers on anticipated future leadership and the coming final narcissistic injury. That’s a scary topic – final narcissistic injury in an AI-driven world and the emergence of a quantum economy emphasizing that tomorrow’s leaders must combine philosophical depth with technological savvy to navigate an exponentially changing landscape. Welcome, Anders.

Anders: Jim, it’s a pleasure to be with you. Thank you for having me on your show.

Jim: Yeah, I really enjoyed your book. I mean, it is definitely a different take on this whole thing, but extraordinarily interesting. I was quite engrossed in it. My wife goes, “What the hell are you reading?” You know? I was so very intent, and I was giving her a little running commentary on it. She goes, “Shit, that sounds pretty interesting.” And so today, we’re gonna jump into his newest book, “The Singularity Paradox: Bridging the Gap Between Humanity and AI,” which he coauthored with Florian Neukart. So welcome, Anders.

Anders: Thank you so much, Jim.

Jim: So let’s start out with that final narcissistic injury of humankind. What the hell is that?

Anders: Yeah. So many years ago when I started to play with the intuition of the implications of quantum, even going into the whole microtubule in the brain and the different challenges on building quantum minds and all the various views that you have out there, I was looking into the challenges that we play with something at the fundamental level that we don’t really grasp.

So we could, like Joscha Bach talks about, detach from the metaphysical level to build, as you say, a machine consciousness – you add the word “machine” to it. Right? And when I started to play with this, I looked at Freud’s three narcissistic injuries of mankind.

The first one was basically when we looked at Copernicus and the discovery that we are not some central point of the universe where everything revolves around, but we are an insignificant small planet at the outskirts of an infinite expanding universe. When we discovered that, that was the first narcissistic blow, and it led to a lot of progress in science and in general societal change.

The second one was, as Freud describes, Darwin. Not saying we are part of God’s creation, but we are part of some kind of evolutionary chain stemming from animals, and we are not the God-created creature, but we are a part of that Darwinistic chain of evolution. That was the second part that, again, led to a lot of progress.

The third one is what Freud described as a psychological blow, where we discovered that we are not the owners of our own thoughts. We are a part of that entity and the “I,” and then there is the whole essence around free will and the will that wants something. Discovering that subjective part of it – that was the psychological blow, which led to a lot of progress in neuroscience and eventually to moving from psychology into the technological use of neuroscience.

And that now to me comes to a point where we try to build the God from the machine, the Deus Ex Machina, where we strive for bliss, immortality, and divinity out of the machine. And we as a species, regardless of if we hold onto that physicalistic belief or any other views for that matter, we are trying to build something superior to ourselves.

So the whole essence of that final narcissistic injury was that we would move from that endeavor of divine creationism to human creationism, so that we could create anything we can imagine – there are no borders to creation, but we would do that by creating something superior to ourselves. And in that process, at least on a philosophical level, there is a potential that we could overwrite the most essential function of what it means to be a human being. A mensch, as I call it, the active agent that acts into a perceived reality.

Take as an example that we would rebuild the brain – build a technologically created, externalized brain where we understand the functions of each and every neuron. So we start with one neuron, we take 10 neurons, 100,000 neurons, all the way up to 85 billion neurons or whatever. Then, with this conscious experience or the essence of a soul or a mind or whatever spiritual terminology you would like to apply to it – is there a point where we risk overwriting that?

So basically my reflection is here that you would create an entity externalized that is capable of doing anything that we can, only better and more efficient, obviously, and to simulate any feeling and conscious experience so that we cannot distinguish between the two entities. So we could hack biology and chemistry and create a body, a physical entity. We create a brain. But do we have a human being?

So on this externalized endeavor, there are multiple dimensions on how that might occur. But I look at how to avoid a kind of zombie apocalypse. You can envision from Greek mythology, Narcissus looking into that wonderful water. And you have that aesthetic, that beauty, that wonder, that mirrors perfectly in the water, and the lights are on, but there is no perceiver to perceive that reflection.

So we would have a kind of zombie reality where we had a lot of agents walking around having the same conversations, but not experiencing the wonders of life itself or nature or our surroundings – built on the assumption that this reality is not a simulation, but a reality we can perceive, and that there is such a thing as a perception of the things that we have.

So the final narcissistic injury is that journey that after we have created that, there is no room for progress. It’s a unification of enlightenment and methodology, and the story that anything you can dream up and talk about is already created or we know how to create it. And that kind of world could lead to a lot of zombies.

Jim: And I go, it’s interesting. You have this idea of narcissism. I think about it very differently, and maybe in a way that’s more empowering, which is: historically, humans have developed stories that went all the way down. Right? The Judeo-Christian one is “let there be light, and there was light.” And six days later, there was all kinds of shit. And seven days later, there was financial derivatives. Right? But in reality, I think what you described, or Freud described, as these narcissistic injuries was actually the realization that we don’t have any foundations. At least we’re not even close to understanding them.

People like Aristotle thought they had it all figured out. Why are orbits perfectly circular? Because a circle is a more perfect form than anything else. Well, turns out he was driven by metaphysics rather than physics, and he was just wrong. Even as brilliant a guy as Newton came very close to getting it right, but not quite right, because he had this framework of solidity and explaining it all. Kant made the same mistake, sort of, but he saw just a little bit of the horizon beyond Newton, which was finally then broken open in the late nineteenth, early twentieth century with the work that led to general relativity and the quantum perspective on reality. And all those things keep telling us that we understand less and less than we thought of what was below us, and that we are in this mesoscale between the universe and whatever is happening at the Planck level, which we may never understand. And whose responsibility is it to manage this? Ours. Right? So I don’t consider it an injury. I consider it a liberation, but I know that’s a somewhat minority view.

Anders: I don’t think it is. I think there is a very clear argument for that. That’s kind of like also the arguments we get into in the singularity paradox – that you’re playing between unlimited predictability and complete uncertainty. You are playing with human empowerment and what I call the homo obsolescence. So the injury part is here as long as there is agency. Right? So the realization, and I think this is fundamental to what it means to be a human being, or mensch as I referred to – it is not that we can experience, it is that “now-ism” of that subjective experience.

One of the most fundamental things that I believe is that it is not so much about finding the answers. It is the actual path of the realization that you have learned something, that you have experienced progress. This is where you give a purpose to life, as compared to arriving at the answer. So the homeostasis, the static, the understanding of the fundamental reality that we perceive – that is an end game that to me seems like a creationism game from a human perspective, where we could create everything. So we take away that story. We take away that experience of progress, because everything is given. And I think it’s essential to see that as a narcissistic injury, because here – and David Deutsch wrote a perfect book about this with “The Beginning of Infinity,” as you probably have read – I think this is our human capability: to come up with infinitely many better explanations. It’s not to solve problems, but to come up with better problems, to experience progress. I think that’s a view that is very plausible, and I think the view that we should have. And it’s not about dancing between dystopian negativism and utopian optimism. It’s about being a society of possibilism, where we look at understanding as the fundamental thing and not knowledge. I’m a possibilist to that extent.

Jim: As we often say in Santa Fe Institute language, what we’re constantly doing is exploring the adjacent possible. Right? And just from a complexity perspective, the possibilities are almost endless. They’re unenumerable in any reasonable period of time. So from any perspective, at the tiny little human scale, we have an infinite universe of possibilities to explore. I personally find that empowering, not injurious.

Anders: As long as we can experience that journey. That’s my only argument. So even an unconscious entity could create as much progress. But what would that mean if there is no one there to perceive them? You know?

Jim: So this is also very important. We talk about this on the show relatively often, which is that intelligence and consciousness are not the same thing at all. Right? A bacterium has intelligence. It will steer away from acid and towards sugar, for instance. Right? But by many of the theories of consciousness, a bacterium is probably not conscious. Right? A thermostat is intelligent. You know, when the temperature goes up, it turns down the heat. When the temperature goes down, it turns up the heat, but it’s not conscious. Well, the IIT boys might say it has one bit of consciousness. And the consciousness in the form you’re discussing – I call it the tetrapodal consciousness, the one that we’ve evolved from the time the fish came out of the sea, probably. It certainly seems to have been present in the early reptiles and then the early mammals, and because those are two evolutionarily distinct branches, it pretty strongly implies that either consciousness or a strong precursor also existed in amphibians – something that produces a conscious scene that the animal is part of. And at some point, probably when the conscious scene reaches sufficient precision and time stability, then we get the phenomenology, the Thomas Nagel idea of there being something that it is like to be. So I took your argument to be that preserving that sense of something that it is like to be, to experience this unfolding of the adjacent possible, is actually what your motivation is. Is that fair?

Anders: Yeah, that’s a fair point. I mean, I am agnostic to many things. I’m open to various views here, but yes, that very qualia, that phenomenology that basically the “what does it feel like to be a bat,” you know, Thomas Nagel. So this is something that it is very hard to argue against that subjective experience. So, I mean, for the sake of the argument, I don’t know if you’re conscious, and I don’t know if any other entity. So if you play with the simulation hypothesis, I would be the only conscious entity and agent in my simulation, but at a completely different way of looking at it. But I think, you know, if we assume that there are n consciousnesses, and then there is a complexity level of that that gives rise to qualia, then that is a theory that I can very much relate to. I don’t see any evidence for it. I see strong arguments pro this way of looking at it. And therefore, I am very open to learning about the various views. You have Donald Hoffman’s views that says consciousness is fundamental and reality emerges out of that. Are many arguments against that as well. You have the whole panpsychist movement and so on and so forth. So there are views that I sympathize with just from the argument standpoint. But if I take reality as something to perceive and I have that conscious experience, it’s very hard to argue that nowism and that subjectivity out of it that we cannot put in words. And that is a very, I would say, relatable frame of thoughts that I hold.

Jim: Yeah. You know, I would go a step further and say that the consciousness we should care about is this animal one, this mammalian one, and the bird one too. Right? Probably birds have consciousness. They have the same tetrapodal consciousness. I worry a little bit about things like IIT and panpsychism, because if you do actually buy them as morally equivalent to mammalian consciousness, then you could say, alright, a really smart thermostat, well, it’s conscious, so we’re honoring the idea of consciousness by moving on to smarter and smarter thermostats. Right? And I took your argument to say there’s something about human consciousness in particular, which is, I would argue at least, continuous with this animal tetrapodal consciousness that emerged when the fish came out of the sea and we had to navigate in a different kind of way. And I’m thinking about where that fits in this unfolding of these new superpowers that we seem to be on the verge of creating. We might be wrong about that, by the way, but we seem to be. Maybe react to that a little bit – human consciousness versus other models of consciousness, with respect to your concerns about the trajectory of the future.

Anders: I think to your point that we have built a broad, like I said, broad scope of arguments towards externalizing something that we should do from a human standpoint. You know? We have to save the world. So that’s save the planet and tackle climate change or whatever. But isn’t it so that the agency that we have – if we are thrown into this world, Hannah Arendt, the natality of being born into life – we are positive agents that want to create and build. We are not born psychopaths, most of us, and want to kill each other and cause harm. So it has a lot to do with understanding and consciousness and the relatability before we are trained onto ideologies and categories. And therefore, I think taking back that essence that it is fundamentally our task to extend human life or consciousness, whatever that might be. And our challenge is to have future generations and other human beings experience the same wonders that we seem to be privileged enough to experience. So from that standpoint, it’s romanticizing the idea of a human being. It’s like life is a wonderful journey to nowhere, but it is all about that journey of exploration and creation that gives a meaning to the whole. So it’s also blending in some of these Eastern philosophical reflections that I think are very important when it comes to getting along and extending organized human life on this planet is to take back that, that we are the ones that can articulate through words. We are the ones that were gifted with opposable thumbs and to build tools and to make it better. And that seems to be our whole thing – to create progress. And if we try to externalize that, we risk all kinds of stuff happening.

Jim: Alright. Let’s now jump back into the book, into the outline of the book, and you’ve basically contrasted your vision with your version of the AGI story. So why don’t we start with you putting out your counterpoint, the alternative path, which was the one we seem to be on, by the way, of course. To your mind, what is AGI? And you kind of blend the concepts of AGI and ASI together. ASI is artificial superintelligence. That’s where not only is the artificial general intelligence – in short form, where the AI can do everything a human can do – and then ASI is where the artificial thing can do everything a human can do and can be better in almost every category, at least any category that has any real traction in the world. So why don’t you describe your version of where we’re headed in this AGI, ASI world, and then perhaps, what your concerns are about it?

Anders: Yeah. So if we start with the latter, I would say that there are very many definitions of AGI, and we also have increasingly many people holding on to ASI as terminology. And then we blend in the technological singularity, the hypothetical future where it kind of gets out of hand and just skyrockets based on the speed of things.

One could argue that we today have some kind of AGI, because some of these models – large language models at the core, for the most part right now – that we are getting bored of because they have their flaws and limitations, and now we’re looking at world models and trying to figure out other types of architecture. You could argue that these machines already, with their increase of speed and precision, are capable of very many cognitive tasks at a level way superior to human beings. So the threshold seems to be moving a lot closer rather than moving away if you look at the timeline of things.

If you go back to the GPT moment and see the progress since then, I mean, Joscha Bach and many others would argue that on a technical level, there are many challenging hurdles. But as an average human being, if you have figured out how to tweak and prompt and use these things and build agents today, you have some crazy capabilities that one could argue are close to what the conventional ideas of AGI were.

For me, it’s very hard to define a point where that is given. And if we define that point, it is very hard to see how that would not lead into rapid progress. If you look at the timeline of things right now, we are making progress in all kinds of engineering when it comes to computation. We are making progress in quantum computing, where the error correction is having breakthroughs. We are making rapid theoretical progress on how to get an abundance of energy even. This is not like we’re talking about twenty, thirty years. We are talking about an exponentially increasing speed of progress.

If we take that and project it one, two years into the future and add some kind of humanoid robotics to this, where they are out observing everything in their reality – so you could have a training dataset from 100,000,000 or 10,000,000 robots filming and capturing data from their perceived reality all the time – the magnitude of progress would certainly come very close to these terminologies.

My concern is that unless that gives birth to a conscious experience or unless we have an ethical foundation that we think is applicable, which seems to be a very difficult task because we have not defined humanism in our own little humble world, adding that to a superintelligence seems like a challenging task for us. These machines, these humanoid robots that become indistinguishable from us – and the same applies also for any virtual world for that matter – they are perceived as real entities in our interaction, but we as individuals or human beings are then offloading any authority to these algorithms.

One outcome of this would probably be that we could end up in a world where we had nothing to do. We’ll lose that sense of agency. And that to me is the concern about creating those entities just externalized. As long as we don’t know what consciousness is, there is always a risk of overwriting it. So coming back to my previous argument: is there a point, at 50,000,000 or 50,000,000,000 neurons, where the lights go off as we reprogram the brain? And reverse engineering this – does that come close to the emergence of life or the emergence of thought, to what sparks the magic of the arising of a thought in a human being?

So as long as we don’t know what that is, just externalizing it is a challenging task. This is why Flora Neukert and I said we think the more promising view is to start with a biological entity, with a human being, and then evolve from there. So we hack chemistry and biology and take evolution into our own hands. We start with agency, and we build and replace on top of that. We take away and we build biological substrates or add components to it, building starting from the human being. And that is what we call artificial human intelligence. That is the argument in the book.

As long as we don’t know it, we should start and make sure that we keep whatever that humane aspect is. Because it seems in the history of time that we have been on a good path, or at least the direction is pretty okay when it comes to extending organized human life. So that’s like the simplified argument of the book to move from an AGI, ASI, AI externalized to an artificial human intelligence, where we start from the human being as our entrance point and then take over evolution from there.

Jim: I had an interesting thought when I was reading that idea. That’s a very cool idea. What about animals? Should we create artificially intelligent cows and pigs as well?

Anders: No. But that’s the argument, because we could do that, but they would always be externalized. Because if we assume for the argument that any human being has a human-like consciousness, whatever that is, right? We can assume that this consciousness acts similarly, collectively, for all of us. And therefore, when we start with a human being, we know there is agency. This is something we cannot relate to in animals. So we would have to know the exact functionality of that. We would not just have to find a way to rebuild it in the machine; we would actually have to build it from scratch. So we would have to create life and create these animals from scratch. We would need a proof, you know, a physicalistic, materialistic, machine-given proof that we are creating an exact entity that gives birth to these conscious experiences or has that functionality off the bat, from scratch. And I think reverse engineering it – that’s the challenge.

Since we cannot relate to that conscious experience – we can measure, but we cannot get into that subjective experience of the bat – it’s very hard to copy it to that extent and then to reverse engineer it. So that’s the argument: we know – or I know – that Anders has a sense of experience, what it feels like to be talking to Jim. I have no idea how your consciousness works, but I know mine. So if I’m going to enhance that, I can have a sensory experience of that enhancement. And that’s the theoretical, philosophical implication. There are obviously many technical things that we get into about how to do that by applying various technologies, but that’s the argument of the book.

Jim: You do go into a kind of surprising depth into the various technologies that are used or have been used in AGI research, but I think I’m gonna skip over that. We’ve talked about that on the show lots of times, and it is a good introduction, though you do violate Hawking’s law a lot. Stephen Hawking famously said, every equation you add to a book will cut your readership by 50%. So if I actually apply that math to this book, you probably only have one reader, which is me. But fortunately, the equations are not necessary. I’m not quite sure why they’re there, tell you the truth.

Anders: I’ll give you the quick answer. So basically, we talked about this, and this was my background coming to this field does not carry a scientific formal education to that extent. And to add that substance to our arguments, Florian’s precise math on it is something that gives us a substance that we wanna get attacked on the scientific level and have this challenge because we are proposing ideas and thoughts that we wanna discuss. So, in the next books that we will create together and the things that we are working on, we will leave those papers up with the depth. And the books public, we will give access to in a reader’s mind and not to that extent. So I totally resonate with what you’re saying. We’ve done two books, too much math in it. The next one will be content.

Jim: And by the way readers, you know, the math is well compartmentalized. You can just skip over it. I did go into it three or four times just to see if the guys knew what the fuck they were talking about, and the answer is yes. They do.

Anders: We have published two books. We have proven that we also have some math to back it up, so it’s not just Anders, the philosopher from Norway, who went at it; it’s something that we can also support, and therefore we have the credibility to do so. So that’s just a very simple reason for that. I have written other books that are much more simplified, and also, when it comes to sources, much more about making statements that I could go into depth on, but I don’t do it in the book. So I agree with you.

Jim: Although I will also put in, as my regular listeners know I do sometimes, that I’m quite suspicious of the theory that AGI is near at hand. Now clearly, in some domains, LLMs have made gigantic breakthroughs that have pushed them well beyond humans. But then on the other hand, a $1.99 calculator can do multiplication faster than I can, and I’m pretty good at mental math. So we’ve been exceeded for a long time by machines in certain domains, and LLMs add a new one. But here’s the example I often give for why I think AGI may not be quite as close as people think. You can take an IQ-80 seventeen-year-old and have their father yell at them for two weeks, and they can drive a car. Drive a car anywhere. Compare that to artificial robotic cars. What is it? $50 billion or something has been spent on this, and they can only operate fully safely in ring-fenced areas that have been tightly mapped with, you know, unbelievable brute force, billions of simulated miles of driving. There’s something qualitatively different that we so far have not figured out in how the IQ-90 seventeen-year-old learns how to drive in two weeks, while it takes $50 billion of R&D to teach a car to drive. So I suspect that there’s a significant part of true general intelligence that LLMs in particular are not yet that close to. Now, of course, LLMs are not the only approach, and in fact, many of the leading thinkers believe that while LLMs are a huge breakthrough in both the input and output of language, they are not what we actually need to get all the way there.

Anders: I agree totally.

Jim: And you guys talk about that.

Anders: Yeah. No. I think we’re getting bored of LLMs. I mean, that’s just – I think there are clear signs of that, and there are promising new models and thoughts from experts that are obviously much more competent to speak about it than I am, but this is also my view, and I share that view. We could always argue about that. We could say, well, we’ve been beaten by computers when it comes to playing chess for many, many years, and we will never be able to beat a computer, and we don’t want to see Stockfish and AlphaZero play chess against each other. Right? But still today, there are more people watching chess than ever before. So Magnus Carlsen, my countryman, he has been number one for the past ten, twelve years, and he has built a fascination for chess unlike anything we have seen in history, even though computers are much better. So there are like many appealing arguments. But what I see as a challenge when you also take that example of self-driving cars, once they are there, they are infinitely better than human beings. So they would avoid so many accidents, save so many lives. And therefore, the argument is here that if there is a 0% tolerance, it will take a lot of time. But once these tasks are done, then we’re obsolete.

Jim: Yeah. We don’t need zero tolerance. We just need clearly better than humans, and Waymo is already there. You know? Look at the statistics.

Anders: Yeah, that’s what I’m saying. So we are already there, you know, reducing accidents by 80% with autonomous cars makes an argument. But Waymo in San Francisco, because you have wide streets where you could park on the side, is very different than having narrow streets in some European cities where it would be chaos to have Waymos driving around.

But again, these are limitations to our way of thinking. Why should we have those stupid streets when we can move it up to the air and have autonomous travel in the air? Like, if you have these models and they are linked – and this is also the argument – why have different cars when it’s much more efficient to have the same structure? Because if they have the same algorithms and database and physical structure, it’s much easier to navigate traffic than having a lot of other things. But then you would kill off all the brands, and you would kill off all the industries, and so on.

This is a very good argument for how technology evolves. Once it is there and can do it, it in theory makes everything else obsolete. And this is what I argue – I’m not taking a stance on what we define as AGI and when it will be there. But I’m saying that if we want to have agency in our own future, we have to figure these things out in advance, not if it’s doable. In previous paradigms, we messed up and error corrected, but we don’t want to get to a point where we cannot error correct.

It’s kind of like that Matrix scenario where the last one turns the lights on. You would have 8 billion agents walking around with no conscious experience – we had figured it all out, but there is no one there to experience that figuring out. I’m only placing the argument of the necessity for security and also for keeping that agency. Because when we lose many jobs to automation, and humanoid robotics will be a big topic here, it’s not given that we will have anything else to do. It’s a very simple argument to see it’s empowerment. Now human beings can take care of what it means to be human. But then the question is, what is that? And as long as we don’t define that, then we are at risk of being replaced. This is my whole argument here – that it becomes much more important to define what kind of future is worth striving for and not necessarily just going after the next breakthrough, because that comes with a risk, be it security or even at that conscious level, I think.

Jim: Yeah. And so there it sounds like there’s actually two risks here that you’re laying out. One is that the machines could replace us entirely and continue the exploration of the adjacent possible, but do so in a way without any phenomenology. Then risk two is we may still exist with our phenomenology, but we don’t really drive the show anymore. We’re kind of like ants at the picnic, and the machines are the people that are actually doing the picnic. And hence, that while it wouldn’t eliminate phenomenology, would make phenomenology essentially irrelevant to the unfolding of this exploration of time and space. Are those the two main arguments?

Anders: Yeah. I mean, we could always argue here that we could sit in awe and take psychedelics and be wrapped in some other dimensions or whatever, or merge with the technology and experience some higher state of consciousness. And these arguments are all valid to the point that if you talk about a higher state of consciousness, you’re talking about a higher state of something that you cannot grasp. And that is an impossibility. I said, okay, what do you mean by a higher state? Right? This is something that I think is just very important for us to be aware of.

And if you look at how it has impacted our society – for the past decades, I would say for the past five decades even, we have built a reactive society that is packed into absolutism, a binary way of thinking. We are all about right and wrong, zeros and ones. We created reward mechanisms in our social behavior through social media that is just about that. You know, thumbs up, thumbs down society. It doesn’t matter if you do thumbs up or thumbs down, you just divide. And as long as there is a reaction, you will have impact. You will have economy. You will have followers. You will have scale.

This is where we start to communicate for reaction and not for reflection. Everything becomes a zero and a one, and it’s your opinion against my opinion. And in this society, the reactive society, it is very easy to see a future where we are wired up to those dopamine structures – some kind of Aldous Huxley, Brave New World type of scenario, or Neil Postman’s “Amusing Ourselves to Death,” but we’re not enjoying the show. We’re not enjoying the ride. There is no agency.

I’ve been pondering a kind of new existentialism here, a state that I call undead. Too alive to be dead, but too dead to experience life itself. And we have a lot of these young people today that are very frustrated, getting worn out, and being reactive, and reacting to externalized impulses in the outside world take away that inner intrinsic motivation to do something. And that is a very dangerous path, I believe, for a lot of problems on a societal level when we take away that agency.

Here, I see technology as a driver of that undead state, that’s also part of that zombie apocalypse that I’ve been pondering. I think just to be aware of it and to describe the problems and to understand the problems puts us in a position to improve them. But being just unconscious about it and just driving progress, I think that is the risk here. And that’s what I worry about, and that’s what I talk about in my work to influence that the future we create is a future worth striving for. I’m all about progress. I wanna understand this shit. So, I mean, I’m all in on that.

Jim: Yeah. The reactive stuff, very top of mind for me. I often talk about tech hygiene, right? How do we work with our technology so it doesn’t dominate us? And a conjecture I put forth is that a significant amount of the unhappiness and zombiness and mental health crisis that we’re seeing is less about the content than just the number of interrupts that are coming in. Humans evolved to deal with a relatively small number of key decisions per day and then a fractal list of decisions at every scale, but they weren’t all at the same level.

Watch kids today, goddamn it, with their phones. And if they’re not practicing good technical hygiene and they have these stupid notifications on, turn your fucking notifications off, people. Right? I have my phone – nothing comes through except an actual phone call from a list of five people. Right? And that’s the only thing that can interrupt me in real time. But if you allow your idiot boss to send you something on Slack at 2:00 in the morning and you get up to look at it, fuck that. Right? Your brain was not designed for that. And if you allow yourself to live in this sea of interrupts, you’re probably getting hundreds or thousands even of interrupts a day, and that alone is probably depleting your attentional subsystem. And you are your attention, essentially. What you pay attention to is who you are. If you pay attention to banal interrupts all day long, then you are a banal person, basically, who is the summation of these interrupts.

Anders: The answer to this or how we are approaching this right now is not to tackle that at a fundamental level. And there is probably no right and wrong here. There are many good implications of this. But it seems that our path here is to increase the capacity of disruption, to increase our stories, to wire up our brain, to be more efficient, to enhance our supplements, to be hyper human beings that evolve into superhuman beings that don’t need sleep and don’t need that. We are more about optimization.

And the more we optimize towards that, the more we put ourselves in an absolute competition with computers, right? So we try to become machines with our monkey brains. And that is obviously a very, very dystopian path to be on, once we start comparing our capacities to AI agents that build our marketing campaigns and do our sports and do anything we have done in the past. And right now, that seems to be the path we are on.

And is the answer then to detach and go to nature and breathe and do whatever? I don’t know. But it seems as though we need to take these topics seriously and talk about them both from an engineering and technological standpoint as well as from a societal and philosophical level. And this is why, you know, Florian and I call it sci-fi with a phi – science philosophy – because I think that’s what we need more of: this conversation with philosophical contemplation from people that are practical in terms of understanding the implications of technology, not on that old traditional, philosophical, theoretical, academic level, but more on the practical implications. What does that mean? What are the implications? So yeah, I totally resonate with that, and I think there is a lot to do here.

Jim: Alright. Well, let’s get back onto the main channel of the book. We’ve had some great little asides here, which is always part of what we do on the Jim Rutt Show, which is why people listen, I hope. But we also want to talk about your theories more. And so, clearly, you’re about preserving human consciousness as central to this journey into the future. But before we go with that next step into your AHI hypothesis, let’s explore what you guys think this human consciousness thing is. You have a couple chapters on this. Right? And it’s, again, a pretty good overview, maybe a little overkill for a book of this sort, but probably useful for people to understand what you mean when you say this human consciousness that’s worth preserving. So why don’t you talk a little bit about what you guys say about understanding consciousness, theory of mind, etcetera?

Anders: Yeah. So, I mean, as you said, I just wanted to avoid not touching on the variety of views. And you talked a little bit about phi and integrated information theory and various views on consciousness. So to be honest, machine consciousness and human consciousness to me is adding a category to something that I think is much more important to look at. Let’s just strive for: what do we mean by consciousness? And here, with all the theories, I don’t want to be diving into one theory. So I’m very agnostic to the various views, and they have their plausibilities and impossibilities.

I could also easily argue from a simulation standpoint that there is something sympathetic about having consciousness at the foundational level, and we have an emergent reality that is a simulation that evolves out of a conscious experience of playing the game or being in a simulation. So I can easily resonate with that. So even though we go into various views in the book, I don’t have a strong candidate or a strong structure. What I do say – and we say that in the book also – is that the way to figure it out is to start with having it. Right? Having measuring devices and approaching it externally, the way we are approaching it now, does lead to a lot of progress on the firing of neurons and reactions and triggering various conscious experiences that we can measure. But does it answer the foundation?

Therefore, I would not argue for one or the other. And I also like what you said about machine consciousness – that that’s an approach to create a machine consciousness. That is clearly defining a category of something that can be simulated from an architectural level. Even though it can be completely different, it could lead to the same experience that a human consciousness would have. And from an architectural standpoint, that’s a great path to be on. But on the other hand, I also sympathize with starting with the consciousness that we can relate to, namely the human consciousness or whatever – to me, it’s just consciousness – and starting with that and then building on top of that. So although we get into these various views in the book, I wouldn’t lean towards one or the other. And I welcome any views on this topic and any plausible argumentation based on whatever assumption lies at the ground. Therefore, I’m also very interested to see new approaches and new thoughts in this field.

Jim: Yeah. One of the things you do stress repeatedly in the book is because you are aiming to preserve human consciousness, we have to be quite careful that these enhancements that we’re gonna talk about here in a minute don’t in some way overwrite or corrupt it even while we’re trying to preserve it. So at some point, we have to come to some understanding of what this human consciousness is if we’re going to actually carefully preserve it as we enhance it. Talk about that a little bit.

Anders: Yeah. I mean, that was the argument that I made earlier also. Right? So I would start to replace some of your biological substance, sending in some nanobots and rebuilding some structures. We figure out what Stephen Hawking called the “chemical scum” of our body. So we want to create a body. We have figured out how to create brains, and we rebuild that with artificial components. And, you know, we keep asking the question, just for the sake of the argument – obviously, on a technical level, it would go a little differently than this – but we would just keep asking you, “Hey Jim, are you still conscious?” “Yeah, sure, I’m conscious.” And at that point, you would continue to ask and answer, “Yes, I’m still conscious.” But now you’re just a perfect AI.

And these type of things, we are so bad at distinguishing that from a human level. And unless we can have a very, very clear technological understanding of it, we will not be able to distinguish on a human level.

I’ll give you a personal argument here that I have often had, having spoken to hundreds of thousands of people on stage and having a long day and I’m tired. And I’ve given a keynote and I go off stage and the people come up to me and want to have a conversation. I meet a lot of people that have had traumas or come through some very difficult times in life, and they come up to me and they start to talk to me about energy and feeling me and sensing my empathy and these type of things.

So from a spiritual standpoint, as an argument, I could resonate with the idea that there is something beyond, something on an energetic level, that they experience and that I cannot. But the funny thing here is that these people who make that argument, they say to me, “I can feel you’re so empathetic.” And I say, okay, that’s nice to hear. But what is going on in my brain, in my body, is that I’m just exhausted. I’m just saying the right things. So the recipient is persuaded to believe that I am feeling a lot of stuff that I’m not feeling.

And even if I could do it – even though my initials are AI – you can just sense that there is something about this argument that doesn’t hold. And there’s a beautiful book about this from Paul Bloom, “Against Empathy,” where he argues for rational compassion. He basically says that something like empathy is not a soft skill, it’s a hard skill. It’s a computable skill. And this is something that I find very fascinating. Even at that very simple level, it’s very easy to interpret and to feel stuff that is not felt.

Therefore, I think it’s very hard for us to build a conscious entity without having access to that perception. And therefore, having that and finding ways how to keep it, I think it’s a very fascinating challenge when it comes to technology. And I’m sure Josh Tenenbaum and all these great thinkers and doers and practitioners have some ideas at a fundamental level about how you can do this. But this is something that I’m very fascinated about, and I think it’s very interesting. But I think also that this is what it boils down to – how can we assure that we remain in that state of what we value and perceive as a conscious entity, whatever that might be.

Jim: And that’s gonna require, as you point out, that’s gonna require some serious thinking and serious work, which is not at all complete today. You know? For instance, in the machine consciousness world, we do not know the answer to the obvious question: Well, how do you know you created a machine consciousness? I’m hoping at least that we’re able to get to the point where we can offer a prize, a named prize. So any of you billionaires out there that want to endow a named prize that can detect machine consciousness and also into Anders’ project could detect that yes, it’s still human, still arguably human, is actually a huge contribution to human thinking. It will require us to think even more sharply about what we actually mean by consciousness.

Anders: Let me give you this as a thought, because I’ve written this a couple of times as a description of what I foresee as fundamentally important for our species. I’ve written that as long as there is a conscious entity that can ask a question, and there is a perception of the answer that leads to progress, then we’re in a good space. What I mean by this is that there must always be room for a final question, and there must always be room for someone or something to perceive that progress. That to me is probably the most fundamental description of what it means to be alive. So you’re not in that reactive state; you are in an active state. This is the terminology that I use for the mensch, which is a person, a human being of integrity that experiences vitality. In German, you would say Lebendigkeit, the vitality of life itself. And this is kind of romantic because it romanticizes the human species, but it’s also a very positive view of the world – that these wonders that we experience are probably the most fundamental things that we have. So, as long as we have room for one more question, and someone to perceive it, and the progress in that question, then we’re in a good spot.

Jim: Okay. Now let’s get to really the heart of the book. You guys have really done some careful thinking here on at least a possible trajectory far, far out into the future. So let’s start from the beginning. You know, just assume that everything up to this point in the book was building up to this, which is how I took it. Talk to us about AHI, artificial human intelligence, and what you imagine as the very first baby steps and then lay out some more steps. Stop after five minutes and we can interact a little bit, but then let’s continue on all the way through your vision of this thing.

Anders: Yeah. So the first thing that we talk about is basically that we need to build replacement components and figure out ways that we could measure its totality. We talked about the whole replacing of neurons that Neuralink is also working on to figure out basically how the functionality of each component works.

I think the prerequisite here is to have a very deep understanding of chemistry, physics, and biology. That will not come from our research, but will come alongside research done through AI and potentially with quantum computers. So when we have error correction and stability, and we can run some of the complex things on quantum computers, then we’ll probably have progress on those patterns so that when we have a deeper understanding of the components, we can start to build them.

The first step obviously is that we have to have some kind of hypothesis defining qualia and the phenomenology. Then, as we start to replace these components gradually, we have definitions and a very clear understanding of how they function. So we can replace those with a synthetic component without thinking we would replace life or conscious experience or what it means to be human. This obviously goes hand in hand with a lot of things that we have to ponder about enhancing human beings – the identity of the person, the morality, and the very essence of what we want to become.

We could take a function such as seeing. If you’re blind today, and you figure out how to get your sight back by triggering your brain, you would have your glasses on and say, “Okay, now I can see again.” That’s wonderful, and the brain says, “All right, we’re up to 100% of sight as we had before.” Then the obvious question becomes: why stop here? Why not enhance that with infrared and zooming and storage of external impulses? Everything you see is recorded, like with a camera, so you have a complete overview, kind of like the self-driving cars. You have a robotic view of things. You could have access to that memory later on – everything you see throughout your life is stored and recorded.

We look at the biology, obviously, where we start to enhance our neurobiology and we look at how we can get a better understanding of consciousness by replacing components in our body, in our brain, to get a deeper understanding of that conscious experience. We have to figure out ways to measure that – that is the biological awareness that we talk about.

Then we talk about the artificial components that we introduce, that we add to human biology – the neural implants I mentioned, but also things in the body. So if there are relationships to the heart or the soul or whatever thing we want to figure out, we want to measure that, the way things are measured now, to look for consciousness and relatable things outside of the brain.

You can envision having access to all data, direct access to the superintelligence – and the question is how much access you can give without just having the responses come out of the machine, while still leaving room for that conscious experience and reflection. So there’s room for the interpretation and reflection of the data that you process. You could envision having a thought coming from the cloud and one from wherever it emerges today, and you have to be able to distinguish that this was downloaded and this was human-created.

When we talk about thinking, what we do is think about thoughts. We have a model that we think about. We’re not talking about the actual thinking in itself, in the Hegelian sense of the emergence of the thought. That is the creational part – to go from the bit to the it, from the zeros and ones to the simplicity, the complexity, and the creation of life. That is what it all boils down to: that transcendence to move beyond that biological limitation to a potential immortality or post-biological state or some uploaded state of mind, where we have hacked biology and chemistry, taken evolution into our own hands, but we have kept what is most precious to us, namely that experience of what it feels like.

Jim: Yeah. That’s a huge journey, an alternative trajectory rather than building the separate-from-us AGI, ASI. Man, does it beg a lot of questions. Right? You know, I’ve actually done some thinking about the human limitations – and not for this reason, but to understand how much room is there above AGI. Right? That’s one of the early questions. I first got engaged in this conversation around 2008 maybe. And one of the key questions is the so-called FOOM question: How fast will AGI take off?

One group says, well, humans are pretty close to as smart as you can get, and so it’ll be hard work to move above that. And the other one says, well, if AGI appears at 11:00 in the morning, by 5 PM that day, the universe will be turned to paperclips. I’ve thought about that a fair bit, and I’ve come to the conclusion it’s something in between. But one of the key takeaways is that human cognition is really quite limited. And of all the Ruttian quotes, of which there are many, the one I hope survives the longest is that humans are, to the first order, the stupidest possible general intelligence.

And, for instance, a key aspect of our cognitive architecture is our working memory. Right? This is absolutely essential, probably, to what causes consciousness – how consciousness and cognition are brought together. It’s fundamental to the structure of human languages, etc. And basically it means somewhere around five to seven items can be held at high-speed access in the brain at the same time. Just think about the simple question of language. You know, there’s a reason that sentences aren’t more complex than they are and why there’s punctuation every five to seven words typically, because we can’t keep more than that in our head at one time.

If you envision what would be the appropriate language for a being, artificial or otherwise, that had a working memory size of 100 – I mean, whoa. You know, it’s qualitatively different. Our lame-ass natural languages would not be optimal for such a character. Now let’s imagine a working memory size of a million tokens, i.e. the size of the most advanced language models today. That basically means you could actually have a whole book in your head. You know, we read books all the time, you know, people like you and I. And if we think carefully about it, we do not have that whole book in our head. We have built ourselves a skeleton of understanding while we’re reading so that the part we’re reading right now makes some sort of sense with respect to what we’ve read in the past, but it is not a full model of the book. And so when you change something as, you know, just one piece, working memory size and capacity, the emergent consciousness is likely to be radically different. So how do you think about doing that in a slow enough fashion that you can assert that you haven’t done something you don’t want to do?

Anders: Well, yeah. I think that’s a very complex question. And to be honest, you know, the question is, do we really want to figure it out? Right? Because you can say that you’re in between, right? But what does that even mean to be in between? So it’s a timescale where if there is a potentiality for rapid progress, I’m not in the field of paperclip optimization. I don’t think that’s a likely scenario. But there are other things that, you know, it’s not about all these crazy scenarios, but just take a very simple thing.

So for years and years, I’ve been writing in my own style in the German language. And I used quotation marks that I liked because of the aesthetics of them. And I’ve written sentences where I’ve always used the long dash. That is my way of expressing myself, my identity or my way of thinking. Because to me, writing is thinking. Now all of a sudden, GPT and other language models have come to the conclusion that if they want to do creative and intellectual technological writing, they introduce long dash and they introduce these quotation marks.

So how do I now react to that? I just wrote a simple article last week that said AI killed AI. Basically, my agency, my identity, my way of expression is such that I have to be inauthentic in order to be authentic. Right? I have to add a machine-like component to myself and change the way I express myself, in a way that doesn't feel like how I would express myself, in order to be perceived as real. And that's very interesting – that's the micro level of where we are today. But you could take that argument and scale it, and think about completely different ways in which you lose that agency.

So I always question, I want to understand what consciousness is, and I want to be a part of the group that figures out how to build it, and I want to be involved with that. But do I want to achieve it? That is always an interesting question. We will always strive for it. And it could be that in this wonderful journey to nowhere, it is all about that journey and that agency. And if we risk losing that, we might risk losing something precious to what it means to be a human being or be humane or be a mensch. And that is something that I like to think about a lot. Obviously, we will push limits and do everything that is possible. But again, do we want to arrive, so to speak? And that is a very interesting question to ponder.

Jim: And now let me throw out another complication that makes it even more difficult, and that is game theory. What we often talk about in the Game B world is the multipolar trap, where, if one player in some competitive game does something that is, say, morally wrong but tactically advantageous, the competitive dynamic can force all the other players to do the same or be undercut in the marketplace. And one can easily see how this could work here. If there's still work for us humans to do on your road, which I think is one of the reasons to potentially take this road, there will inevitably be competition for status and resources and mates and stimulation and positional goods, etcetera. Have you guys given any thought to how the dynamics of game-theoretic competition could easily incent people to take the wrong road on this journey to human augmentation?

Anders: Yeah, obviously. And it doesn't even have to come from bad intent.

Jim: Exactly. And that’s the thing about multipolar traps. People generally do not even realize that they are doing something bad. They are reacting to local signals.

Anders: So a very clear example – we talked about chess earlier. Magnus Carlsen – I was watching the world championship in chess, and he plays a pawn move on the h-file. In the middle of the world championship, Stockfish – the most powerful chess engine humans have created, with all the theories of how to play chess, all the games, all the comparisons, deep analysis of the theory and games of chess – everything that has been done on chess is in that engine. And it tells you with very precise analytics where you are in the game, who's on top, who will win, and so on. If you play correctly after that move, there is a predictability, a very clear likelihood of who will win the game.

So here you are watching this game, and all of a sudden Magnus Carlsen plays this move, and it sends Stockfish into madness, saying, "Oh my god, what a big mistake." And all the commentators are racking their brains, saying, "Did Magnus…? A player at that level, how can he make such a blunder?" And now he's lost. And then you see, seven or eight moves later, he is back on top of things, and he's winning the game.

After the game, he was asked, “What happened? We didn’t see where your opponent blundered after you made that blunder on h.” And he said, “No, well, that wasn’t a blunder.” They asked, “What do you mean? What did you play?” And he said, “I played an AlphaZero move.” And then he was asked what that meant. He said, “I don’t know. I just know that it wins.” And here you have AlphaZero that has played 4 million chess games against itself, playing chess in a different way.

So the optimization game here is such that the engineers don't get it, the best chess player in the world doesn't get it. It just wins. And these are the scenarios we have to be cautious about in that optimization game and in that game theory: that we would base our theories on conventional thinking and our accessible knowledge, and draw completely wrong assumptions and conclusions from that.

This is the risk of that technology. It is not so much about maximizing paperclips; there are many other implications, and they could be on many scales. So I think there is a lot of thinking to be done here, and probably also a lot to be done in the field of regulation, global governance, and so on, because in many markets and segments it's winner-takes-all.

I just came from a conversation where we were thinking about the democratization of quantum computing access, so that a billion minds can do science from home and have access to quantum computers. That's a beautiful thing. But then, coming back to your argument, you could also do that for capitalism: if there is an incentive for winning the game, one idea, once applied, is so hyper-efficient and so rapid that it's always a winner-takes-all game. That's not an inefficient game. That's a hyper-efficient game with 100% access to knowledge and ultimate predictability. You will obviously scale that. But then, if something is missed, you have complete unpredictability, complete uncertainty.

Therefore, that's a paradox or challenge that we have to ponder. And we could look at it as the balancing field that we talk about in the book, between the clear advantages of empowerment and the clear risk of obsolescence. So you have the homo absolutis on one side, and you have that new emerging species that could take care of the quote-unquote "humane" part and live a wonderful life – those are both plausible scenarios. And we want to be sitting on the decisive side of that, taking action on what we want to do, and not just trying to see if it's doable, because I think that's a high risk.

Jim: And, of course, that's discernment in advance, right, which humans have shown themselves to be terrible at. Look at the history of how Europe stumbled into World War One. Nobody wanted World War One, but this horrific thing changed history for the worse in a very major way: it led to communism, led to Nazism, down the list. We stumbled into it one little step at a time, and this is, at least so far, what humans do. So now let's kick the problem up one level higher.

Anders: But just to comment on that – that's the birthplace of Silicon Valley, right? So you have war on one hand and progress on the other. You've seen the same thing now on the battlefield in Ukraine. Theoretically, if the war were to end tomorrow, you would have an insane number of great engineers who build the most efficient drones in the world. You have a technically skilled workforce doing rapid prototyping every single day. You send up drones, you get a reaction, you fix them, you build on. That's the mentality they are in. And when the war is done, they will apply that to completely different fields and have a scientific and technological, quote-unquote, revolution or progress or something like that. And that is the result of a terrible situation.

So I totally resonate with you. We have been able to error-correct, and this is also the final narcissistic injury argument: as long as we have space for error correction, things will be terrible, but at least we'll have progress, because we will be able to error-correct. And the situation we are in now is that if we come to that unification of knowledge, or of methodology and enlightenment, where everything is given, then we are no longer able to error-correct. Every move we then make seems to be a very, very decisive one. And that's why we have to change the game that we are in. I'm not saying this is tomorrow, but from my viewpoint we are certainly getting closer to such a pivotal point, where there is no room for error correction.

Jim: Yeah. Robin Hanson, who's been on the show a few times, talks about the idea of cultural drift as one of the risks that we don't understand and underestimate. And I'd suggest that if our rate of change is exponential, as it appears to be, and particularly if we were to take this road that you're suggesting, then unless we have built powerful meta-methods of discernment about where we want to go, very likely we're just gonna drift off someplace. And then, under the gun of the multipolar trap and game theory dynamics, we'll drift to multiple places, and those that are better at defeating the others will be the ones that prevail. So, anyway, now let's take this game theory trap and move it up one level. So far, we've talked about game theory within the artificial human intelligence vector that you lay out as the key theory in the book. But now let's consider the game theory of AHI, artificial human intelligence, versus AGI/ASI. If AHI is slower, considerably slower, than AGI/ASI, isn't the most likely scenario that AGI/ASI just blows it away?

Anders: So that's a very interesting question. What would the counterargument be? I mean, first, let's start with the scenario where they're not up against each other. What you're saying is that if we create AGI, then from a game theory standpoint it would just blow humans away. So basically, even if AHI is the slower approach, we are just putting ourselves at less favorable odds against the AGI.

Then there is the ontological argument here, which basically looks at keeping that humane part of it, the agency part, where we would have an AGI that is almighty in terms of responding to knowledge, but we still have some agency and control to intervene. So we would be a species that has created something way superior to itself, but we would be smart enough to understand how it might impact the human species and still have access to human agency.

I'm thinking out loud here – I have to think more about it because I like the question. But I think that if we could create an enhancement to ourselves to understand the implications of what AGI would do… AGI would strive for equilibrium. It would look for answers and balance and optimization. Whereas human beings have a drive towards dynamic equilibrium. It's never in balance; it's always a drive towards something. We are still imperfect, iterative, and ambiguous, and we have our reflections, but we are enhanced to the extent that we understand the possible implications of handing over authority to algorithms.

Another argument would be to look at infinity here – the infinite game that was, also from the theological standpoint, an inspiration for my quantum economy and later on for Simon Sinek's take on the infinite game. Here, the goal is not to win; the goal is to continue playing. If AHI has that fundamental evolutionary aspect that we build for society through mutual benefit, and the game is not to win but to stay in the game as long as we can, then we can distinguish between the finite game of AGI – a zero-sum game – and the infinite game of AHI.

We would have some kind of legacy or higher purpose or meaning, while AGI would be based solely on metrics and outcomes and outputs. I think if we could enhance ourselves to that extent, then AHI could put us in a position to understand that. There are aspects here around evolutionary game theory, where AGI would use reinforcement learning to optimize for survival and infinity, and AHI would have to look at ethical introspection on the totality of humanity, not just feed into the reward mechanism.

So yeah, it's like having a value-based evolution that we take on technologically by having AHI, as compared to a finite-game optimization. Not just survival of the fittest, Darwinian style, but more that aspect of literally taking those ethical groundings, looking at the totality at large, and asking how we can improve humanity and ensure extended, organized human life with all its imperfection. Those are some reflections, but I'm happy to ponder this very good question, Jim, in depth.

Jim: Yeah. This interaction has now caused me to have another thought, which I'm gonna lay on you. So let's take your vision, which I like a lot, by the way, and let's imagine a coevolutionary dynamic between AGI, ASI, and AHI. Now this would, I think, certainly require solving a collective action problem. Because let's take the scenario where ASI or AGI, at least for some epoch, could evolve more rapidly and become a dominator, what some people call the AGI singleton or the ASI singleton. But we rule that out through a solution to the collective action problem. And let's say the AHI side keeps the agency and says, we may not give machines agency beyond x. Right? And oh, by the way, we'll kill anybody that does it. How you actually do this, of course, we'll leave as an exercise for the student. It'd be fucking hard. Eliezer Yudkowsky, you know, has produced a lot of scary results which say, yeah, you may think you can control AGI, but you are probably wrong. But let's assume we solve those traps. And we somehow build a dynamic where AHI is able to maintain some form of collective action supremacy over the AGI/ASI, but it decides to use the exploration of the unfolding space by AGI/ASI, which is in some ways faster than AHI, to coevolve something better than either by itself. Respond to that crazy vision.

Anders: No, it's not a crazy vision, and I really like what you did with that and how you played with it. A coexistential design to that extent is, I think, a very good path to be on. I think that is the path where we are forced to think about that fundamental underlying structure: What do we mean by values and ethics and structures?

And here's a thought: we were striving for a knowledge society, and that's the optimization game. The AGI plays that game and optimizes within it. And we look at a society of understanding, where we introduce the unknown unknowns, the paradoxes. We are the questioners, and we hold the agency of that part, while the AGI is the rational optimizer. Right? So AGI is the executor of a task to a much broader extent than the AHI could be. It's the executor, and it's a perfection optimizer of that game within those boundaries. And we define those boundaries, because we are the questioners, and we are enhanced by AGI to think more about this. So we are enhanced with a co-design to have access to more reflection points and more knowledge.

So whereas the AGI plays to win the game, we play to become. There is a case for becoming, and that is life. That is the mensch, the human being. This is the Lebendigkeit, the aliveness, right? This is the infinite potential for better explanation. So I think there's a very strong argument for that coexistential design: AGI as the executor, optimized to the utmost, and AHI as the questioner. AGI in the finite game will always win. And in the infinite game, we have to ensure at the fundamental level that we expand the boundaries of the game itself, that we play to become. And our sole purpose is to extend the game, not to win the game. I think that would be my response to that.

Jim: I love that. That’s a great way to think about that indeed. And I think pretty hopeful. I mean, there’s obviously a whole lot of difficulties getting from here to there, but a fairly hopeful possible destination if we’re smart enough as a species to navigate it.

Anders: I think, you know, you talked a little bit about business and organizational design and building entities. I think this is one of the most challenging tasks that we have, and therefore also a question of how we build these organizations and access to capital. One thing that I'm working on right now is how organizations need to change in order to adapt to this way of thinking.

I mean, what is the most lacking skill today that we see in politics, in education, and in business? I think it is what I refer to as anticipatory leadership: the capability to understand the potential implications of exponential technology together with human behavior. Those two components need to be triggered and built so that we can have entities where people can work and generate friction and thoughts and reflections to tap into these very complex problems.

I think that has a lot to do with the structure of organizations. Tech companies today are better at this because they create that dynamism, that space of becoming. Therefore I also call this the becoming organization. This is an organization that never “is there” and is always in the making. It’s a dynamism.

How do you do that? How do you create these places? I envision three different pillars within this triangular alchemy. One is that today in business, in order to figure things out, you cannot ask people in the boardroom “what do you think people want?” You have to be there and see how human beings behave. Your business purpose is to forge your clients, to incubate businesses and to uplift others because when everyone else around you is doing better, and you have an aspiration to grow, then you have a better playing field.

The same applies for business. If you’re in the beverages industry, you want your bars and restaurants to be running really well because you don’t sell them stuff, they come and buy stuff from you. Google doesn’t sell licenses – they want companies to be successful so that they come back and buy licenses. So you move from that outbound part to a very dynamic way of looking at forging your clients and your ecosystem. And thereby you grow because you learn.

The second part, I think, is important in sciences and education and business alike. We need to have a VC investment view of things. When you have that, you look for opportunities, you look for new breakthrough technologies, you invest in assets that you’ll have later on or something that can replace your current assets. So you need to have one vehicle operating in the investment space, one vehicle operating in the forge space.

And the third one is efficiency. We live in a world where we strive towards hyper-efficiency. What we used to have with sustainability, for instance, was an ideology, right? You have to take care of the planet or save the world or whatever. At the end of the day, an efficient usage of resources is what we could refer to as sustainability. So only if you have an efficient organization will you have profit. Efficiency, operational excellence: the whole management part then moves into technology.

So you have investment, forging, and efficiency as a triangle, three pillars that fight for the same resources. If you then have anticipatory leadership, so that you build a trusted environment where people can grow and think about these things, you generate friction. And from friction and trust, you will have progress. So I think this is also something for your new initiative: build a place where there is high trust and high anticipatory leadership, and you will generate friction out of that. If you have high trust and friction, you will have progress. And fuel that with enough resources, enough money, so you can play around, and then you will probably make a great deal of progress in creating machine consciousness.

Jim: That's, again, a good, hopeful thought. I really want to thank you, Anders Indset, with a d, who wrote the very interesting book, along with his coauthor Florian Neukart, "The Singularity Paradox: Bridging the Gap Between Humanity and AI." This was a huge discussion. It's stimulated some thought in my mind. I hope it'll stimulate some thought in our audience's minds. I want to thank you very much for, one, writing the book, and two, having this great conversation.

Anders: Thank you so much, Jim, and thank you for posing all these new thoughts in my mind and the questions that you raised. And I’m going to go down to my drawing board and think so much more about this and hope to see you in the near future. Thank you so much for having me on.