Transcript of EP 330 – Worldviews: Ben Goertzel

The following is a rough transcript which has not been revised by The Jim Rutt Show or Ben Goertzel. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest for our WorldView episode is Ben Goertzel, one of my favorite people. Ben is one of the leading authorities on artificial general intelligence, also known as AGI. Indeed, Ben is the one who coined the term, or at least so it says in Wikipedia and various other places. He’s also the instigator and leader of the OpenCog Hyperon project, an open source AGI software project, and SingularityNET, a decentralized network for developing and deploying AI services. Welcome back, Ben.

Ben: Thanks, Jim. Yeah, it’s a pleasure to be here again and sort of refreshing to be exploring an utterly different sort of format, because talking about sort of foundational stuff to our own lives is an interesting break from talking about the path to decentralized AGI and the beneficial singularity, even though there’s a lot of overlap, as that is one of the things that does give meaning to my life.

Jim: Let’s start off with this morning. Somebody named Ben Goertzel woke up. What is this Ben Goertzel? What universe did he wake up in?

Ben: I’m not sure how to put a label on the universe that I woke up in, but I do have the feeling when I wake up in the morning, probably like many people, that there’s some awareness sort of ambiently floating in some ill-defined space, and then thoughts and the images gradually crystallize, sensations gradually form together. Then at some point, there will be a concrete thought like, where am I, or what time is it, or what’s going on? And then the full crystallization of myself happens a little bit after that. And then at some point I realize I’m there, I’m waking up in the morning, I’m in my house, and then the full apparatus of a psychosocial self is back.

Now you could interpret this sort of process in a variety of ways. You could just interpret it in the standard western science-oriented way as there’s a body lying in a room and its brain is shifting from one state to another, or you could interpret it in a sort of Buddhist phenomenology way that you have the great cosmic nothing, and then reality is crystallizing out of the great cosmic nothing, and the various illusions of self and world and history crystallize along with all the rest. And I mean, both of those perspectives are of some interest and appeal to me. I don’t feel the need to choose among them.

Jim: Interesting. Now you do call yourself though a panpsychic, and you have even talked about nonlocal consciousnesses and such like that. So that at least seems to show you have a little bit of a bias towards the eastern, or, whether or not that’s truly eastern, this broader non-western scientific view. You know, the western scientific view would be that it’s a series of fast brain cycles that suddenly emerge after they’ve lost the oppression of the slow waves of sleep and restitch together to create this thing called Ben Goertzel.

Ben: Yeah. I mean, the scientific view is very interesting to me and very productive. And I mean, I love science, and I’ve spent time crunching EEG data from brains to try to learn about consciousness and so forth. So I mean, I don’t attribute an absolute reality or meaning to the scientific perspective, but it’s—I spend most of my life doing various kinds of science and engineering. It’s certainly an interesting and valuable way to look at things.

I think where I would differ from a sort of classic western scientific materialist is I don’t feel like the fundamental ground of being is the physical world, and everything else just has meaning insofar as it builds up from, you know, particles to atoms to molecules to cells and so forth. I sort of feel like that is one very interesting perspective to be taken.

Jim: Now you seem to have said that maybe pattern is more fundamental than stuff.

Ben: Yeah. I mean, you can look at what’s fundamental from a number of perspectives, but I think one way to look at it is the body of scientific data that we have is a bunch of patterns or regularities that have been observed by people over time, and then we’ve agreed to accept these as real for certain purposes, and we’re trying to find simpler predictive explanations of these patterns that have been observed. And you can accept the whole apparatus of science and leverage it to make predictions without accepting that there’s some sort of fundamental objective reality to the stuff underlying these patterns. And this in itself, you could view it as a rephrasing of Bayesianism. Right? I mean, that’s not necessarily a very exotic philosophy anymore, I wouldn’t say.

I mean, if you want to get more philosophical, Charles Peirce was one of the philosophers I read in my teens who influenced me. I mean, he had a sort of ontology of existence where he’s like, there’s First, which is pure unanalyzed qualia; then there’s Second, which is reaction, like one thing boinging off another; then there’s Third, which is relationship. And pattern is in what he would have called the domain of Third, aka relationship. There’s some basic physical reaction in his category of Second, but then the notion of an absolute unvarying physical reality he would have viewed as just one pattern that popped up in people’s minds that they happen to like. Right? And I guess I’ve gravitated toward that perspective.

I guess I read a lot of phenomenology also. Right? So like you have that in the Buddhist tradition, but you also have that in the Western tradition with Heidegger and Merleau-Ponty and Bergson and Husserl, the philosophers taking your immediate moment of perception as the basic thing and looking at how the world gets built up out of that. And I think that’s also a very interesting way to look at things, as is the scientific perspective.

Jim: So when you think about the phenomenologists and such, one divide that can lead to is between realists and idealists. How do you react to those two potentially differing perspectives?

Ben: I guess I don’t identify with either one too much, but the whole panoply of isms in philosophy doesn’t do that much for me, and I tend to get bored with debating the details of this or that school of philosophy. But I mean, in terms of realism, no, I don’t see much use for assuming there’s some absolute physical reality which is the theater of everything else that’s going on. Like, that doesn’t feel like it explains my everyday life, experientially. I also don’t find a necessity to assume that to do science or engineering, for which a more sort of conditional probability approach feels like it’s good enough. Like, if this perception pops up, what do I feel are the odds of that next perception popping up? Like, that seems like enough to do science and engineering.

Now idealism, I mean, assuming there’s some Kantian noumenon that is like the limiting case of what I can perceive doesn’t do too much for me either. Like, I mean, I grew up on Nietzsche as well as Peirce. He had the book, Twilight of the Idols, right, which was his whole treatment of why idealism is a bunch of rubbish and we should just go based on perceived phenomena. So I mean, I’m probably an idealist in some looser sense. I’m always chasing utopias or very beautiful theories or something. But in terms of assuming there’s some platonic ideal reality to which our world is a bad approximation, I mean, it’s a nice idea. I don’t find much use for it either, I guess.

Jim: Interesting. So you’re floating between the two classic poles, and you’ve described your view with one word: euryphysics.

Ben: Euryphysics. Yeah. Well, I wanted a word for something that was not outside of physics, like metaphysics tends to mean, but rather just a broader notion of physics. Like what if our physical universe is just embedded in some broader sort of quasi-physical universe, which is also lawful to some extent. And I sort of didn’t want all the baggage that comes with the notion of metaphysics. So I mean, really euryphysics was just trying to find a less tainted, less baggage-laden word for one of the things that metaphysics can mean. I mean, the prefix just means wide, like Europe is the wide country. Right?

So I mean, I was intrigued when working that out because on the one hand, the notion that our physical universe is embedded in some broader, like, cosmic mind space or something is a woo-woo flaky idea. On the other hand, if you look at modern physics, they’re like, well, yes, of course, our 11-dimensional complex universe may be embedded in a higher 26-dimensional space. Right? So I mean, very similar constructs are there in physics. The difference seems to be in physics, you don’t view the broader space in which our four-dimensional space-time continuum is embedded as having awareness or mind-like properties. You’re just viewing it as, well, I wouldn’t even call it objective given quantum mechanics, but as some sort of less traditionally mind-like mathematical construct. Right?

But anyway, it does seem to me that if you want to explain various wacky phenomena that seem to be real but that we poorly understand, such as parapsychology, I mean ESP and precognition, or such as reincarnation-like phenomena, and I don’t believe in the traditional religious stories of reincarnation, yet the data related to that seems compelling that something in that vicinity happens that we don’t understand. So if I want to explain these phenomena in any rational-ish way, hypothesizing some broader space in which our physical universe exists, which has some relations to our physical universe, that’s a kind of model that I can think about. Right?

So if you want to say, okay, like there’s all this data where some Indian guy died, and then some baby is born remembering the details of that guy’s life. Right? So if you take that data as valid, and I do after studying it a fair bit, I mean, I don’t especially take the stories of the transmigrations of souls and so on too literally, but these observations are interesting, and one category of explanations for that would be there’s just some traditional physics we don’t know. Right? Like the brain pattern of some guy got somehow imprinted in the quantum wave function, and then some other baby was born and through some physics we don’t know, his brain resonated with that and pulled it out of the quantum wave function. I certainly don’t rule that out because there’s a lot of physics we didn’t know 200 years ago that we now think we know, and it seems very weird compared to what we thought we knew 200 years ago.

It also seems, though, that maybe some expansion of the concept of physics may be needed to explain this sort of thing. So maybe there’s some other domain that’s outside our four-dimensional space-time continuum in which these mind patterns have some kind of existence, and they’re somehow linked in with different brains. And then you hit the question of like, what does it mean to be physics or not be physics? Right? Like if I come up with some topological space where mind patterns exist, and you can make some meaningful statement about the conditional probability of person A’s mind pattern popping up in person B’s brain in some sense, like, why is this not physics? If there’s some probability statement I could make, why does physics have to be the four-dimensional space-time continuum?

But I mean, then whether it’s physics or whether it’s metaphysics or spiritual laws, I sort of don’t care about that classification. What’s interesting is if I can make some sensible model of what’s going on. Right?

Jim: Yeah. I agree with you there. I’ve, you know, you and I have talked about psi and various related things, and, you know, my view is prove it. And, you know, if the phenomena exist, there must be some mechanism that is causing them to happen. And whether we call that physics or metaphysics doesn’t really matter as long as we can either eventually figure out what the mechanism is or infer that the mechanism must be X because—

Ben: I’m not as confident as you that there must be a mechanism for everything that exists. On the other hand, I would like there to be, so it seems very interesting to look for one. I mean, you could imagine models of the universe where there’s just some transcendent oracle just doing shit, making random shit happen. Right?

I mean, I don’t at all buy the theologies of historical religions, which just seem to have many nonsensical mythological aspects to them, but it’s certainly thinkable. There’s just some super intelligence mercurially doing weird stuff in a way that we couldn’t ever classify as a mechanism, but I’m not willing to jump to that conclusion. Right? Because I mean, humanity has repeatedly understood things that previously seemed utterly weird and mysterious. Like, outer space used to seem completely utterly weird and mysterious, and now we have a pretty good model of great balls of fire and rocks hurtling around.

Jim: The idea of a hundred billion galaxies and a hundred billion stars per galaxy, just utterly mind blowing to Aristotle, for instance, if he were presented with that idea.

Ben: Well, and neuroscience as well. Right? I mean, the idea that we can understand so much of how we think from the meat in our head. I mean, we take it for granted now, but it was utterly not obvious a thousand years ago. Right? I mean, people thought about these things totally differently.

So, yeah, on reincarnation and psi and such, I was quite skeptical, as you know, at some earlier parts of my life just because there’s so much obvious utter bullshit about these things that is promulgated. Right? And I mean, then it was mostly through putting time into just looking at a lot of experimental data, and then talking to the researchers who had gathered that data and, you know, forming my own view, like, I don’t believe these guys are just making all this shit up and faking it and typing it into spreadsheets. Right?

Jim: You actually edited a book on these things.

Ben: I did. I did.

Jim: I read the book and my takeaway was the experimental design’s more than a little squishy here, guys. You know, what convinced you that the experimental designs were strong enough to actually show a reproducible signal?

Ben: Some of them are, some of them aren’t. But I would say overall, the field of parapsychology is more careful about data and replication than any other area of science I’ve ever seen. I mean, I’ve been on an email list called the PDL, the Parapsychology Discussion List, since around the time I edited that book. I mean, the amount of hand-wringing about invalid experimental design and, you know, questionable research practices, like the amount of care taken in that community vastly exceeds medical research or mainstream psychology or any other domain I’ve been in.

I mean, rightly so, because it’s a very flaky thing. And indeed, you don’t know, like, if the experimenter’s mindset can validly influence the results according to the nature of psi. Right? Then you do have to be very, very, very careful to understand what’s going on. But these people are insanely pedantic and careful on these things. And I mean, in the end, when I looked at the publicly released datasets, the only plausible explanations I could come up with are that some cases of psi are real or that there’s just pervasive fraud among this whole community making everything up. Right?

And then after getting to know a bunch of the people, the fraud thing just didn’t seem plausible. Like these guys are not insane. They’re not making any money. They’re often ruining their careers because of it. And they appear to be genuinely trying to be super, super careful about things. I mean, the experimental design is hard though. And this has not been funded to the level that medical research or research on sugar and smoking or a whole bunch of other things have been funded. So I mean, these guys are trying to do insanely hard frontier science about a very tricky mercurial phenomenon on quite minimal budgets. Right? And it’s a difficult thing.

But I mean, we could easily absorb the remaining part of this podcast just reviewing data on one particular kind of parapsychology, though. Right? Because I mean, in the end it becomes like trying to argue for quantum mechanics against someone who only believes reality is classical. Like, to explain this data, you could spend a long time running through just the double-slit experiment. Right?

Jim: Yeah. Let’s not do that. But why don’t you give, in your opinion, what is the cleanest example of a parapsychology result which is probably replicable?

Ben: The remote viewing done by Project Stargate from the US Army was done over decades with many dozens of different people just remote viewing different locations. Right? And Ed May, who’s a physicist based in Stanford, who was at SRI for a long time, he’s now retired. I mean, he published four volumes, which is the record of the Stargate project. Right? So I mean, there’s a whole bunch of data there that’s just openly published, and that sort of protocol has been replicated quite a few times.

Jim: It’s so interesting to me that if it’s been replicated and can be replicated, why isn’t it generally accepted? Because even an arch skeptic like myself—

Ben: It is generally accepted. The vast majority of the world population does believe these things are real.

Jim: That is true. That is true. It’s only us hard-nosed Western Enlightenment guys.

Ben: Yeah. You know, I read a very funny report at one point. So Sony, the Japanese company, because one of their high-level executives was into it, they did a bunch of experiments on ESP at one point. It would have been in the seventies or eighties. And then when that guy retired, they stopped the experiments. But the official report they filed to Sony’s board was like, well, we have now concluded this research into ESP. We have found that while it is a statistically valid phenomenon, it’s very weak and sort of unreliable with no commercial value, so we’re stopping this program. Right?

And that’s sort of the Asian view of it. Like, as you know, my wife is Chinese. I lived in Hong Kong for ten years. I found almost all Asian people assume psi phenomena are real, including the scientists. They just figure, like, this is old stuff, it’s like spooky voodoo stuff which depends on a whole bunch of undocumentable weird factors. So let’s set that aside and focus on stuff that’s more reliable that we can control.

Jim: Interesting. So it’s real but weak and not particularly useful.

Ben: Whether it’s particularly useful is unknown. Maybe in other cultures and settings, it was. Like one hypothesis is in Stone Age societies, people were in a different state of consciousness and doing different sorts of things, and maybe they were better able to make use of these things. Because I mean, it’s clear the nature of this phenomenon does depend, just like a psychedelic trip, on your set and setting; it depends on your mindset, right? And in a world where the mindset is antithetical to that, it’s going to be harder to make use of it.

So I mean, with my scientist hat on, what I would love to do, I mean, if I ever become a billionaire, I will fund this. Right? What I would love to do is take brain organoids, try to use reinforcement learning to train a brain organoid to do precognition, then instrument it with all these electrodes, like, find out where the weird quantum mechanics is happening in that organoid to cause the precognition to happen. Right? And if you discovered that, I mean, maybe you could figure out how to amp it up. Right? Maybe you could evolve an organoid that was like a super psychic brain organoid, and you could get a lot of use out of it. I mean, I don’t know, but I mean, weirder stuff has happened in human history. Like that’s no weirder than sending a man to the moon or doing human cloning would have seemed to people a few hundred years ago.

Jim: That is interesting. And maybe a hedge fund would fund it. You know? What’s tomorrow’s stock price going to be?

Ben: It’s only recently that you can just buy, like, organoids on a rack and plug them into your rack of servers. So now, for hundreds of thousands of dollars, you could do experiments like this. Right? And I mean, some organoids are very cheap, but what you find is if you want an organoid with a very dense electrode grid, then it’s more expensive, and that’s exactly what you want for this, right? You don’t even have to feed them with human blood; they just give you some simple nutrient solution to pour into the organoid on your server rack every few weeks. Right?

Jim: I know a guy who’s involved in that work, and it is interesting. I am, you know, basically suspending all disbelief about what they might come up with because it was a very unexpected, I think, thing that these organoids would self-organize as well as they have.

Ben: Yeah. I mean, they’ve self-organized a lot, and one wonders what experience they’re having, right? Which, I mean, you don’t even have to be a hardcore panpsychist to think an organoid might have some sort of conscious experience, right, since human brains do. But it’s unclear what type of experience it would have. Probably, I mean, at the size and type of organization that they’re doing, not like a full-on human brain. But, yeah, there are many, many experiments one can do with brain organoids, and exploring psychic powers is far from the most straightforward one, but it’s interesting. I do think, to wrap up on parapsychology, that the way to make progress there is to push it as far as you can toward biophysics and away from psychology.

Like doing these psychology experiments, things come out mercurial and weak, and it’s hard to pin down what’s going on. And if you can come up with an animal model or an organoid model and really pound on it, I mean, then you may be able to come up with a scientific understanding of the phenomenon. And I guess you do see here the two strains of my thinking that we hit on at the beginning of this. Like, I’m very interested in the science point of view, but I mean, indeed what first compelled me to read all this data was having a close friend who just seemed to display the ability for remote viewing. Like, she would see stuff that was happening somewhere else and you’d go there and it was happening. And I was like, well, that is very strange. Right?

Like, I’d had dubious instances of psi phenomena in my life before that. But at one point, you know, I was involved with someone who just had very clear examples of remote viewing stuff, and at first I just attributed it to, well, life is weird, what the fuck? But then, I mean, I wasn’t fully convinced and started digging into data on how and why this thing could happen. And I would say, I was in Silicon Valley, or San Francisco rather, two months ago at a workshop on AI consciousness and psi, and you had like 70 San Francisco tech people in the room there all taking this seriously and trying to dig into it.

So I would say it feels like the legitimation of AGI that I’ve seen during my career. Like, when I started talking about AGI, that was about as legitimate as psychic powers or time travel or galactic reengineering. Right? And now all of a sudden, it’s super mainstream. And if I say we may not have AGI till 2029, then I’m a pessimist by San Francisco standards. Right? I do feel like psychic powers are moving along that curve. Like, you have gatherings of mainstream tech entrepreneurs who are talking about it, and unfortunately, their reaction is like, let’s make an app to track people’s psychic abilities rather than let’s do science to try to understand what’s going on. But there is a gradual legitimation process happening, which is interesting.

Jim: Either that or it’s Silicon Valley sheep. It’s the most sheep like place I’ve ever seen in my life, but we shall see.

Ben: If that’s true, it’s because you haven’t traveled to that much of the world. I mean, most of the world is full of dogmatic religious people wrapped up in much more sheep-like belief systems.

Jim: That is true. That is true. Including the South where I live. Let’s change directions a little bit. Your most recent book is very interesting. I’m very curious about the idea of a consciousness explosion that’s going to occur along with the AI/ASI explosion, though that is of course a horrible gloss. Talk about that a little bit.

Ben: Yeah. I think that the singularity tends to be portrayed and viewed in an overly sort of gadgetry-focused way. And that’s understandable because, I mean, it is technology. It’s building amazing devices that is allowing us to create AGI and to create a singularity. On the other hand, you could view it equally well as, you know, a singularity of states of consciousness and ways of thinking. Right? I mean, you could say humanity’s way of thinking and perceiving and understanding the world has evolved as dramatically as the technology that we’ve built. Like the way you and I are thinking about things right now would be utterly alien and bizarre to people in the Middle Ages, let alone in the Stone Age. And of course, it’s this way of thinking that we’ve evolved that’s allowing us to build all this technology, which then feeds back in and shapes the way of thinking.

And creating an AGI, so many people will tend to view the AGI in terms of what it can do. Like, it can do all human jobs, it can reengineer matter. But what the AGI’s state of consciousness is like is equally interesting to think about, right? Because once you have complete root access to your own brain and mind, I mean, you can shape your awareness to be almost whatever you want it to be. I mean, there’s going to be some constraints, but you’re going to have much, much more freedom to do that than we do right now.

And I mean, this raises a whole lot of interesting philosophical questions. Right? Like what does a mind become when it can become, you know, whatever it wants within the constraints of its own intelligence and whatever physical laws are understood at that time? I mean, that’s an open mind in a quite profound sense, and this does tie in with thinking about spiritual consciousness and human well-being. Like my friend Jeffrey Martin has put forward the notion of fundamental well-being and persistent nonsymbolic experience, and he’s tried to lay out sort of a series of exercises or steps you can go through to bring yourself from a state of ordinary sort of sometimes kind of happy, sometimes not so happy consciousness into a state where you’re just feeling a profound sense of vibrancy and well-being all the time.

You know, some people are just like that all the time from birth, not many. For other people, it’s a lifetime of practice to get to that level. Jeffrey’s goal was to compress that to a few months of a couple hours a day, which he has partly succeeded with, which is very interesting. Now for an AGI, that may just be the default condition. Right? Because if you can screw with your motivational and emotional system, I mean, why not turn off all that pointless anger, jealousy, and unhappiness if you can turn those knobs?

But then what comes next? Right? Well, you don’t have to just wirehead yourself or turn yourself into a mindless bliss machine. You can keep interesting goals and motivations and aspirations while not keeping this pointless jealousy, angst, and anger dial turned up. Right? So then you start thinking, well, actually, the scope of possible minds that could exist at the AGI or ASI level is a very, very, very broad scope. Right? And so maybe the initial AGI mind that we build doesn’t matter that much because there’s some superintelligent mind attractor it’s just going to evolve into. Right? Or maybe it matters a lot. Right? And maybe if we create a fucked up initial AGI, it will evolve into some kind of fucked up superintelligence. If we create a compassionate, benevolent AGI in a state of fundamental well-being, it will evolve into a much more compassionate, benevolent, and joyous sort of superintelligence.

Now this has implications for the state of consciousness of the AGI and ASI itself, which some people may not care about, but I do. It also may have implications for how that AGI or ASI treats us as humans. Right? Like, does it invite us to join the ASI party? Does it airdrop molecular nanoassemblers in our backyards, you know, so that we can have abundance for those who choose to remain within human form? Or does it decide, as Eliezer likes to say, the AI doesn’t love you, the AI doesn’t hate you, but it can use your atoms for something else. Right?

Jim: Or at best thinks that we’re ants. Right? Maybe they’re useful to look at once in a while, but they’re relatively irrelevant to what the ASI actually is about.

Ben: Yeah. Well, I think we will be irrelevant to what the ASI is about. On the other hand, if the ASI is living in a realm of incredible abundance, the amount of resource to give a bunch of humans utopia on earth or some close approximation thereof would be a minimal amount of resource or effort for it. Right? So it seems like a true superintelligence wouldn’t have to care about legacy humans a whole lot to give them a beautiful life of abundance. It would just have to care about them, like, measurably above zero. Right? The same way we now care about chimps and orangutans and elephants and things of that sort.

Jim: You know, they are inherently interesting. Even an ASI would probably find humans with all their limitations interesting and curious.

Ben: Right. But now because we’re in a realm of scarcity, we will still go in and savage rhinos to death to get their horn. Right? But if you’re an ASI, presumably, like, you don’t have that. Right? You can 3D print the rhino horn if you want, if that’s what you really want.

Jim: You know, let’s talk a little bit about something I know you’ve thought about a fair bit, and I have a little bit, which is how immense the design space of minds actually is. One of the things I got out of my multiyear dive into cognitive science was how amazingly limited human general intelligence is.

Ben: Oh, absolutely. The only reason to start with human-like AGI is because that’s how we get us. I mean, there’s two reasons. One, it’s what we understand the best because we are it.

Jim: Yeah. We could reason from a model that we have, a model that we can try to duplicate.

Ben: Yeah. And we also will get early-stage AGIs that can relate to us and that can understand us via some form of empathy. But I mean, certainly among the first things a human-level AGI will do is design much more interesting sorts of minds than its own self, which is—I mean, there are obviously limitations like the seven-plus-or-minus-two short-term memory limit. And our whole fucked up way of balancing goals on different time scales, like this is our evolutionary neurological legacy. It has its own beauty and integrity, but if you’re in a more flexible underlying substrate, like a classical or quantum computer or something, I mean, why would you want to accept all these limitations as given?

Then, just like Elon Musk says humanity is the bootloader for AGI, I mean, AGI is then the bootloader for superintelligence. And I mean, that’s—

Jim: There’s so many of these points.

Ben: It’s abundantly clear, but again, among the things we don’t know is to what extent there are a few mind-architecture attractors for superintelligence, or maybe just one that we’ll get settled into, versus there being many different species of superintelligence. Like, it could be there’s a variety of different sort of low-level, human-level-ish mind architectures because each one is working around resource limitations in different ways. But once you get into more of a domain of abundance, maybe there are not that many right ways to do things.

Like, you can think about sorting algorithms or something. Once you’re sorting a big list, there seem to be a few categories of good algorithms for doing it. But if you’re sorting a list of length 389, the circuit for doing that is very different than the optimal circuit for sorting lists of length 377 or something. Like, it seems like when resource restrictions are moderately strict, you just have a lot of different gymnastics you can do to work around them, and then when resource restrictions get looser and looser, sometimes there’s just a few good ways to do things that pop up. But again, I’m flailing around here. I’m just pointing out like we don’t know whether that diversity we’ll see at the early-stage AGI level is also there at the super-AI level.

Jim: Yep. Something I would toss out having—you know, part of my career I spent on thinking about distributed computer systems and shared memory and network latencies and all this. At an intermediate phase, we may find, you know, you mentioned the seven plus or minus two where, you know, George W. Bush is five items in working memory and Einstein is nine. What about a million? We actually can’t even contemplate what a working memory size of a million is, but it’s not obvious to me that a machine intelligence couldn’t have a working memory size of a million.

The second one is to the first order, at least human consciousness and animal consciousness—at least mammalian consciousness—are single-threaded. There’s basically one cursor of attention, which is approximately our conscious trajectory. There’s some little tricky things going on at the edge of the unconscious, but it’s essentially single-threaded. So let’s imagine that the ASI is exploring the design space of working memory size versus threadedness. Now, if it had unlimited resources, it could just jam them both all the way, but probably for quite a while, it won’t have unlimited resources. And so it’ll have a design space to explore and it may very much differ for different problems in the same way some problems are better solved with a GPU and some are solved with a CPU.

So, you know, okay, we want a thousand threads and a 100,000 shared working memory plus a million dedicated working memory for each of our thousand threads as an example. And the ASI will be exploring design spaces like that.

Ben: For sure. I mean, as a side note, I don’t feel like my consciousness is all that single-threaded. You know the difference between the default mode network and the task-positive network in cognitive neuroscience, right? I mean, when I’m working hard on a problem, I focus narrowly on one thing and I’m very single-threaded. If I’m just walking through the woods like open-mindedly musing on stuff or doing a walking meditation or something, it doesn’t feel very single-threaded. So I mean, I think our consciousness can be in more or less focused states like that, but it’s true. If I want to solve a really hard problem, I’ve got to focus in on that one thing, and that is a limitation. Right?

There’s going to be some very hard problems that are just more efficiently solvable by something that can focus on ten things very hard at once, yet with tight coupling among them. Right? And for sure, I mean, you will have some human brain traditionalists post-singularity who just want to stay that way, and you will have others of us who are like, well, let’s explore the space of architectures between what’s human and the superintelligence. Right? Like, what would it be to be a human with a five-threaded focus of consciousness? Right? There’s going to be a lot of very cool possibilities there.

And this is actually something I’ve wondered about a lot through my life: suppose we get to a beneficial superintelligence. Right? So suppose, whether through my own team doing it or through some other mechanism, we get a beneficial superintelligence. I mean, okay. You’d have the possibility to remain in human form, but without medical issues, without mental health issues, without having to work for a living, a molecular nanoassembler on every corner, whatever. Right? You would also have the possibility to upload your mind into the superhuman mind matrix. Right? And let’s say you could opt to have that happen gradually enough that you can enjoy the process. Right? Presumably, you’d also have the option to do both. I could fork. Because why not? Why not do a nondestructive upload?

What I wonder is how long will the version of me who decided to remain human think that’s an interesting thing to do? Like, clearly more than a few years. Right? Because there’s a lot of fun to have in the world. Right? I mean, I can learn to play every instrument. I can hike every mountain. There’s a lot of cool books to read. I can sail across the South Seas. Many, many experiences I would enjoy having. It would give a different flavor knowing there are super minds there providing for everything. Right? Like, certainly my drive to achieve stuff would have no role in that sort of world. Right? Because nothing humans can achieve is going to have the sort of cosmic meaning we can feel like it does now.

Like, now I can feel like, yo, I could build the first AGI. I can save the world. Or even beyond that, like, you know, can I prove P equals NP or P doesn’t equal NP? Like, I’m achieving something epic. Right? No. There will be none of that anymore. Right? But on the other hand, you know, I play music just for fun. I don’t really feel like the next song I make is, like, that epic world-changing song. Right? No. It’s just fun to play. It expresses something. Right? So there’s certainly a side to my personality which doesn’t need to be doing something huge and epic and world-changing and can just enjoy playing music or hiking up a mountain or playing frisbee with my kids. Right?

Now does that go on a hundred years, a thousand years, a million years? Or will I eventually sort of run out of steam on what it is to be human when you know there’s a possibility to experience so much more. And maybe the transhuman Ben crystallizes himself every now and then and pops down as like, hey, human Ben.

Jim: Yeah. That gives you the view from way, way, way out there. Hard to say. Really, it’s interesting and hard to say. I suspect your average human, you know, if they stay in basically human form, will eventually tire of the whole thing. But whether that’s ten years, a hundred years, a thousand years, hard to say.

Ben: Especially when you consider we’ll have the equivalent of any drugs you want. So, I mean, it’s not like you’ll be depressed or something. Right? You can just push the button and you’ll feel good.

Jim: Perfect transition. The last topic, psychedelics. You’ve written a fair bit about it. What’s your take on psychedelics? What are they doing, and what are they good for?

Ben: What they’re good for is probably the easier question of those two that you gave, but the other one is more interesting. So, I mean, certainly properly used psychedelics can help project people from a state of misery, or a state of off-and-on misery and happiness, into a state of more fundamental well-being and enjoyment of life, right? So they have that potential. I think they can also give you an insight into aspects of the mind and universe that we don’t normally see in our everyday state of consciousness. I mean, I think they also can obviously do a lot of damage if not used properly. And I think the fact that they’re illegal in many countries now has sort of guided them to be used improperly more so than properly. Because, like, my second son, when he was in college at Marlboro College in Vermont, which is now shut down, like, he spent a whole semester tripping on acid and it didn’t do him much good in the end.

But I mean, when I visited that college, I thought, look, half the students here are on psychedelics like every week. Why doesn’t the school hire a shaman like to help them through this thing? Right? And I mean, that’s what was the case in Stone Age cultures. It wasn’t that people just tripped out randomly with a bunch of other kids. There was someone who’d been through it before trying to guide them in a good direction. And I didn’t have that when I started tripping on acid at 15 either. I don’t have an addictive personality really, so I never overdid it. I kind of took them periodically and over decades, I think I was able to integrate the whacked out universes I saw in psychedelics into my everyday mind state and worldview. But I think many more people would perform their own version of that sort of integration if we had a sort of shamanic institution in our culture, which we don’t.

Now as to what they’re doing, you can take two views on that. So I’ll go back to my two paraconsistent perspectives on things. I mean, from a nitty gritty science view, obviously, they’re jolting your brain into a very different state of consciousness than it’s usually in. And you can find that kind of state of consciousness through meditation, and I’ve done so on various occasions, but only after I’d found them first with psychedelics. On the other hand, it’s a long road for most of us to be able to access that sort of state through meditation, whereas with psychedelics you just pop a thing in your mouth and then you’re probably there, right? So you can access very, very different out-there states of consciousness. And I mean, those states of consciousness have a value in themselves. I mean, they’re often allowing you to see this sort of holistic interconnectedness of things in a way we don’t in a normal state. We’re seeing the way we construct ourselves rather than thinking of ourselves as absolutely real. We’re seeing the way our minds construct the world from data rather than taking the world for granted.

So as well as a bunch of weird hallucinations and illusions, you’re also getting some insight into how your brain constructs itself and its model of the world. And they’re very blunt instruments, in that you’re getting weird hallucinations and you’re getting profound insights into the functioning of the brain and mind, and disentangling those is long work. Right? And I think post-singularity, you could differentiate these things and you could just turn the knob, like how many hallucinations do I want versus how much deep insight do I want? You can sort of get that by choosing a different mushroom from the menu, like a tampanensis mushroom is stronger on the deep insight into your own thought processes, a cubensis mushroom is stronger on hallucinations. Right? But these are super, super blunt instruments because they weren’t designed, at least I don’t think they were designed. They were found. Right? And I mean, to some extent, we co-evolved with plants and fungi, but there’s a lot of just chaotic happenstance there.

Now from a more spiritual or phenomenology-first point of view, I mean, when you take DMT or like a really heroic monster dose of acid or something, you can feel you’re in contact with some universe and some broader space of minds with, you know, a billion or infinity times more intelligence and depth of awareness than anything in human life. You feel you’re one with that, then you feel yourself dropping back to this human world. And while you’re dropping down, you’re feeling like, I will never be able to remember what I just experienced because it doesn’t fit into the scope of my human mind. And then, boom, you’re there. So it’s sort of like Chaitin’s take on Gödel’s theorem, like you can’t prove a 100-pound theorem with a 20-pound formal system. Right? Like, what you experienced was just too big to fit in here.

And then you’re back, and you’re like, well, okay. Did I just experience an immensely broader and better way of living, feeling, and thinking, making all this seem like some pathetic Atari 2600 game I’m living in here? Or did my brain get massaged to make me think I had just experienced that, but nothing happened except that brain massage making me think I had just experienced that? Right?

Jim: Yeah. That’s the skeptic view. Yeah. It’s like the same thing about the near death experience. Right? You didn’t really have an experience. You had a sense that you had an experience.

Ben: So at this point, I’m comfortable holding both those perspectives in the chaos of my mind at one time without needing to kill one or decide between one or the other. That is one thing I’ve found: in the last, I guess, ten years now, since I started meditating more heavily and getting more into this fundamental well-being perspective, I’ve had less of a need to have a fixed model of myself or a fixed opinion on things.

I sort of view myself more as a collection of clusters of behavior or thought patterns. And if one of those contradicts another one, that’s all right. And that goes back to the quote from Walt Whitman I’ve used many times. Right? Do I contradict myself? Very well then, I contradict myself. I am large, I contain multitudes. Right?

Jim: Yeah. That’s a cool way to be actually, particularly in this current world where we know a lot less than we think we do.

Ben: Probably. Yeah. Well, that’s right. And of course, to do some things, you need to cohere a model of yourself and just roll with that. So like if a burglar jumped into my office now, I would forget all this philosophy and try to kick the shit out of him. Right? So I can certainly cohere a single-threaded practical model when I want to, but I don’t need to maintain that all the time.

Jim: All right. A final topic. Going through the thread of thinking about the things we’ve talked about over the years and things that you’ve written, it seems like you have a fundamental—is it a belief? Is it a hope—that consciousness is somehow fundamentally good? Is that a fair characterization of your perspective? And if so, can you dig into it?

Ben: I do have that feeling. Yeah. I mean, I think for me, once I set aside beliefs and habits and just empty my mind into a state of quiescence or whatever, I feel like what’s left has aspects of joy and compassion to it. So I mean, I have an intuitive sense that the basic ground is joyful and compassionate, and then there’s a teeming chaos of ill-defined patterns, and things then crystallize out of this. And the things that crystallize out of it are not always joyous and compassionate within their limited scope, but eventually they’ll dissolve back into the zero-point energy field of joy, love, and compassion. Right?

And this is utterly nonscientific, just a personal intuitive or spiritual feeling. And I don’t want to overclaim—like I get pissed off and depressed and annoyed at times like most humans. Like I’m not Sadhguru or something. But when those emotions pass, then I do get back into this feeling that the basic ground is all good. Right?

Now, I don’t know how to reconcile that with science as it is right now. What form science takes five years or a hundred years from now, I don’t know. I mean, I think there are more boring, scientific arguments you can make for why superintelligence will probably be good. And I’ve written some things on that, just arguing that on the whole, if you have a multi-agent system, and given the physics that we know you always have a multi-agent system, because special relativity and quantum mechanics mean everything can’t all be at one point, so you’re going to have distant points of your system.

If you have a multi-agent system and the different agents are comfortable enough with each other to share stuff with each other openly, they can just be more efficient and solve a lot more problems than systems that are wasting time mistrusting each other, which you can see in crypto networks: all the time that’s spent dealing with trustlessness is a waste of resources. I think there are rational arguments you can make for why greater efficiency is there among agents that are good in the sense that they can mutually trust each other.

And you can construct out of that an argument that, all else equal, you know, superintelligence probably will be good in the sense of the different parts of it having mutual compassion for themselves. Because if you had multiple superintelligences, the bad ones on the whole would be less efficient than the good ones. I would say what motivated me to try to construct those analytical arguments was this sort of spiritual sense that it’s all ultimately good, and the feeling that a superintelligence is going to engage in less idiotic self-delusion than we do.

And I feel like once you have a mind that isn’t so engaged in self-deluding itself for shallow egoistic reasons, then that mind freed of these illusions will be more open to the fundamental joyousness and compassionateness of the universe. Right?

Jim: All right. Let’s wrap it right there. A very hopeful place to end it. I want to thank Ben Goertzel for a true worldview tour today. Exactly the kind of thing I was looking for.

Ben: Cool. No. This was fun.