The following is a rough transcript which has not been revised by The Jim Rutt Show or Jordan Hall. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Jordan Hall. Jordan’s a longtime friend and collaborator, and he’s a successful entrepreneur, thinker, essayist, and talker on many YouTube videos and podcasts. Welcome back, Jordan.
Jordan: Thank you, Jim. It’s nice to be here.
Jim: Yeah, Jordan’s been on the show lots of times. As always, links to those previous episodes, and anything else we talk about, will be on the episode page at JimRuttShow.com. So check it out. The trigger of this convo was a tweet Jordan made recently about, I guess at the largest level, you could say, how does the encounter between humanity and advanced AI play out? Is that a fair way to frame it at the highest level of abstraction?
Jordan: Maybe. I can see two or three other ways of framing it that might be different, but I can’t tell whether the difference matters just yet.
Jim: Okay. So with that framing as okay, good enough, but maybe not quite exact, why don’t you briefly summarize what you were getting at with your tweet? Put as much substance in as you feel like.
Jordan: Well, there were two points. One was I was noticing that there is, as is commonly the case in our current culture, a false dichotomy about what kinds of institutional structures should be responsible for managing humanity’s relationship with AI, or humanity in the context of its relationship with AI. That’s a difference that makes a difference. The false dichotomy, of course, is should this be a market-driven event or a state-driven event, or what are the relationships between state and market? Assuming that that exhausts the potential of all the different kinds of cards we have to play.
The point actually goes through two stages. The first stage is, hey, there’s a third fundamental mode, and it actually turns out to be more fundamental than either the state or the market, which in many languages is known as the commons. So move number one is to say, hey, the commons is available, and, as an assertion, the commons is in fact the proper location for endeavoring to figure out how to govern AI and/or govern humanity’s relationship with AI.
And then the second move was to say, and when we say the commons, we need to understand that we also mean the church. That third category, the languaging of it as the commons, is a rump or a leftover as the church had been evaporating over a long time and, by the way, becoming sclerotic. And so we began to notice some of the remnants of it as the commons, as those themselves were being destroyed and converted into the state or the market dichotomy.
And then Roman numeral two was an argument, an assertion, that the concept of alignment, when we say AI alignment, you cannot align, I’ll just say it and then we’ll have to unpack it, you cannot align AI with humanity, because alignment can only be done vis-a-vis something that has a soul, and humanity, which is an abstraction, has no soul. Therefore, you cannot align AI with humanity. But since given individuals do have souls, you can align AIs with individual humans. By the way, this is also an argument for personal or highly decentralized AI. And these are related. These two major points are related in lots of complex ways, but they’re also independent.
Jim: Before you jump into those complex ways, let me ask a couple of clarifying questions. The commons is a concept that’s been around since the dawn of humanity, as far as we know, right? At the forager stage, the natural world was the commons. Typically, the campfire was a commons. We managed lots of things together as a commons, and only later with settled agriculture, and then later still with the state, and later still with things like the enclosures in Western Europe in the very early modern and pre-modern epoch, did the commons shrink as a category of life. So, you know, the idea of the commons goes way back, and humans clearly have a lot of ability in managing a commons.
Jordan: I want to make a distinction because I think the distinction matters. The larger category, that third thing of which the commons and the church are names, would never have been called the commons by an indigenous aborigine.
Jim: That’s just the way it is, right?
Jordan: Exactly. And that’s the key, right? So we receive it at the point at which we’ve already lost contact with that earlier way of being in relationship, where it becomes, this is why I was saying it’s like the remainder. So in England, in the 13th to 15th century, as the category of just nature, just the way it is, has begun to move into the background, we notice, and I think of it like pools of water: it’s drying up, but there’s still a pool. There’s a remaining pool that is, oh, farmland that we share in common, or land in the middle of Boston that isn’t owned by anybody in particular, which is a commons, right?
So as we say that word, the commons, we’re already importing something which is the mirror image of the thing that it used to be. It’s already living as a remainder in a context where private property is the dominant theme, and nature is no longer the base state that you’re just in, the commanding set of relationships that human beings interact with. Civilization is the base state that you’re in, the commanding context that human beings are in relationship with. And that shift matters a lot in terms of the thought process.
Jim: Yeah. Let me put a little bit of my own language on that. Initially, nature was its own thing and was forever. And humanity, to the degree it was organized, say at the level of the forager band, was a little membrane inside of the open natural realm. By the time you were talking about, the high medieval period, we’d already reached the point where humanity was the controlling membrane, at least in certain parts of the world, certainly Western Europe, and nature, to the degree that it existed, lived in membranes that were inside. So kind of like the capture of the mitochondria by the eukaryotic cell, we had kind of reversed the nature of things, with nature now being inside human structures rather than humans being inside natural structures.
Jordan: Or so we sort of had it psychologically. In terms of the adaptive landscape, technically, it is always a pyramid, meaning that nature is the substrate and civilization, humanity, sits on top of it. But we had reached a particular point where the relevant aspects of our lives, the things that we needed to do to achieve our ends and survive and thrive, were happening at the civilization layer. And so we were no longer thinking of nature as the base layer. We were beginning to think of nature as a special case inside the civilization layer.
Jim: Yeah, I think we’re saying about the same thing, as usual, you having provided more and useful nuance. Now, the second one, this issue of alignment with the soul. I’m going to take Aristotle’s definition of the soul and say that maybe you’re not right about this, right? Aristotle thinks of the soul as essentially the organizing principle of the entity. You know, your and my soul is that set of homeostatic loops that make us who we are. And couldn’t one argue that society also has those sets of homeostatic loops that make a society a standing wave, and there are some so far indiscernible principles that hold society together? We just have greater knowledge of the homeostatic loops that keep us together as a biological organism, so we know more about the human soul, in the Aristotelian sense, than we do about the cultural soul. But there’s no real reason to say, at least if you use Aristotle’s lens on the soul, that a society doesn’t have a soul.
Jordan: You could say that, but you’d be wrong. However, let me do the nuance thing again. Make a distinction between community and society. And so community is a group of human beings who have come together in a fashion that has a soul. And society is a group of human beings that have come together in a fashion that doesn’t. We typically live in society. And society is strictly parasitic on community, by the way. So society is a degenerate, parasitic collapse of community, largely because that community has lost its soul. So the language actually all fits together quite nicely.
In principle, one could imagine a circumstance where one could align AI with a community. But that requires that you be very careful about the distinction between community and society. Whereas you don’t have to be as careful with the distinction between human and society because humans are a little bit more obvious.
Jim: Oh, I love that. That’s perfect. Exactly right. Fits perfectly with the Game B synthesis also, by the way, that what we have done is lost our culture to society. So, all right. So let’s go on from there. Tell me about what you mean when you use the C word in this context, not the commons, but church.
Jordan: We can actually just combine these two concepts. So take a look at the word church. Church is an English word, maybe German too, I don’t know, okay, both Anglo-Saxon. The Greek is ecclesia. So what we’re talking about is a group of people who have come together and are entering into communion, which is the process whereby a soul is brought into a group of people, which enables them to be a community. So to use our distinction, and it becomes a tristinction: society, community, communion. So communion is the generator that is oriented around the ensoulment of a community. And the church is that. The church is nothing more or less than the body of the soul of a community.
And in the context of what we might say like capital T, capital C Church, you would then say the body of Christ. So the body of Christ would be the soul, in this case, the Holy Spirit, of all those who are coming into communion under the Holy Spirit, and therefore, of course, would also be in community. And so when Christians come to church, broadly speaking, and I’m not going to quibble with theological distinctions of high church and low church right now, what they are doing is they are reentering into the set of cultural practices and spiritual practices that inform the communion that grounds the community.
Jim: Okay, that’s a specific flavor of church. They call it the Abrahamic church, right? But your original definition is that it is the coming together or the organizing principle of a community. There could be lots of different kinds of churches.
Jordan: Well, no, they would all have to basically do some variation on this theme. The theme is they’re engaging in a set of cultural practices and spiritual practices that are what gives rise to the ability for the group to become, to have a soul, right? To come under a particular organizing principle that is real and binds them in a fashion that makes them a whole.
Jim: Yeah. But it could be, you know, belief in Zeus or something, right?
Jordan: Yeah. Yeah. Not just in principle, in practice, the actual Athenians, right? The church in Athens was that set of cultural practices and spiritual practices that bound the individuals in Athens as Athenians under the spirit of Athena, like very specifically.
Jim: And then, of course, the same thing would apply, say, to a Tibetan village that’s operating in support of a monastery, right? In that case, the organizing principle would be Tibetan Buddhism.
Jordan: Probably. I mean, I don’t know enough about that to say in particular. I’m guessing the organizing principle might be actually even more specific, but sitting as a sort of a white guy far away from Tibet with a broad brush, maybe, close enough.
Jim: Yeah, interesting.
Jordan: But the point is the particulars matter, right? A big mistake that happened in the era of modernity was to assume that the particulars don’t matter, and the particulars do matter.
Jim: So why don’t you unpack that a little bit with respect to, let’s say, the Abrahamic set of churches in one sense, in one area, that are organized around the principle of Yahweh and all that stuff, versus the abstract concept of the church, which could apply to Athens under the Olympians or a Tibetan village under a non-theistic organizing principle such as Buddhism.
Jordan: Well, what I was just trying to do is make the distinction between modernity and church. To make the distinction between, say, Athens and Athena and Christ, that’s very specific. At the most general level, it is in fact the case that if a community comes into communion under Athena, then they will be a community in that form, and therefore a church in the sense that I’m talking about. And they will then also, by the way, have a commons, right? The polis will be a commons, a proper commons.
I think we may even have talked about this at some point in the past. Like there’s historically a huge inversion that has taken place. And this is exactly this move from community to society. So first order, the Athenians recognize the polis as belonging to Athena, which is to say belonging to the soul of Athens, not to the individuals of Athens or even the collective group of Athenians taken as a group, as a society.
If and when that soul is lost, it can be difficult to notice that fact. And so it can end up happening that, as you would say, no, no, no, Athens belongs to some kind of formal organizing governance structure that brings together a collection of individuals, no longer with a soul, using our language, but now as a society or as a collection. And that transition is a very fundamental transition that occurs all the time and is extremely characteristic of what happened with modernity.
Jim: Well, let’s drill into this because I think it’s really important. So we have two examples that you acknowledge as communities with souls, one with the Abrahamic symbol and the other with the Athena symbol, which they organize around. You’ll probably agree that the Athena symbol is a fiction, right? You may or may not agree that the Yahweh symbol is a fiction, but we now know that one could have a fictional symbol as the operant around which a community forms.
Jordan: I don’t agree that the Athena symbol is a fiction. No more than I think that the Jim Rutt symbol is a fiction. That’s a weird thing to say.
Jim: Okay. Well, I actually exist. Athena doesn’t, right? Athena is a meme space entity rather than a physical space entity. I hope I’m meat. I am definitely meat, lots of meat on these bones.
Jordan: There is some portion of you that is physical space. And there’s some portion of Athena that is physical space, known as the Athenians. And when the Athenians as a body come together under the spirit of Athena and now have a soul, they are a community. The physical space is the physical buildings and the choices that the individuals make, the way your physical body is just your organs and cells organized in a fashion that adds up to be a Jim. And the spiritual space or the meme space, and we can for the moment not slice too finely on that distinction, is the set of organizing principles that are not instantiated in any one, or even any subset, of the collection of the physical pieces, but are supervenient and real, though not physical.
Jim: Okay, well, that’s good. That takes me exactly to where I want to go next, which is: you mentioned society as capturing the role of these symbols. What happens, and does it fit in your model, when there is a group of humans still at the culture level who choose to abandon external abstract symbols like Athena or Yahweh, etc., and say, we are self-governing, we are a secular, sovereign collective? Let’s keep it nice and simple at the Dunbar number or below, let’s say a hippie commune or something, that takes community very seriously but doesn’t index on an external symbol and claims for itself sovereignty over its own soulfulness.
Jordan: Well, there are roughly three possibilities. One possibility, which is the most common, is that they are in fact worshipping something, they just aren’t being clear or honest about it. Maybe they’re worshipping money, or they’re worshipping reason, or they’re worshipping the science, or they’re worshipping some national character, or they’re worshipping race, or they’re worshipping a given individual guru. This is why so many of these groups end up collapsing, because there’s going to be some unifying structure that orders the hierarchy of values and identities, and that enables, just even enables, the ability to convert something from a collection into a wholeness. And so if you haven’t got a conscious, explicit sense of that, then you have an unconscious, random sense of that.
The second possibility is that they’re just out of integrity, meaning that they aren’t actually really a whole, but they are operating by something like inertia. And they will discover over time, as we have in the West, that the wholeness that they had previously been connected with is gone. Remember, what’s that game with the—curling?
Jim: Oh, yeah.
Jordan: Right. The puck is still moving, but it is no longer connected to an actual generator that can make it move more. So you look at it and if you’re naive about it, you’re like, yeah, the puck is moving. It has an animating force. It’s an animate function. Like, no, no, there was an animating force that was pushing the puck. What you’re seeing is inertia.
Same thing. One is they are in fact worshiping something and they may or may not be aware of it. The second is they are worshiping something or some collection of things, but they’re not even aware of what they’re doing. And the third is that they are in fact not coherent and they’re in the process of falling apart, but there’s inertia that has been binding them together.
Jim: I’m going to add a fourth one, which I’d call, at the present cutting edge of evolving Game B thinking, a set of accords which the group has agreed to after due consideration and by consensus. Here’s an example of one, and under the coherent pluralism of Game B there could be alternative ones in different membranes, but it’s one that Megan and I find very useful as the purpose or the organizing principle, to use your word, of our membrane. And let’s keep it simple at Dunbar; how to scale it above 150 is a much more difficult question. Put at the top of our list of accords: we are in the organized business of increasing human well-being while maintaining levels of extraction that allow for a healthy and flourishing natural ecosystem.
Jordan: That’s actually not the fourth, that’s the first.
Jim: Okay. So that’s a religion with a sort of mini doctrine, you know, a long sentence for its doctrine.
Jordan: Oh, I mean, the doctrine will expand over time. Doctrines start small and get bigger as they encounter reality. But the point is that you said organizing principle. And as an FYI, I learned this only over the past 16 months: in the Christian story, and I don’t know if it goes all the way back to the Jewish story, but certainly in the Christian one, they have a term, Paul explicitly calls it powers and principalities. And a principality and a principle are very much akin, more or less the same thing.
So if you say that the sort of apex principle, the organizing principle of which everything else is downstream, is, for example, I’m not saying you said this, but as an example, coherent pluralism, think about what that means. Take a snapshot now, and let’s go back to Athens. I’m sitting in Athens and somebody says, well, who do you worship? We worship Athena. Well, who’s Athena? Well, Athena is wisdom. The word Athena, wisdom, that’s what we mean. We worship wisdom.
And, sorry, fast forward from the present, you know, a thousand years from now. They’re like, well, who did those Game B guys worship? They worshiped coherent pluralism. Oh, you mean the god coherent pluralism? Yeah, the god coherent pluralism. It’s the same thing. You’re saying that there’s some set of supervenient, non-physical, but real constraints that are the governing principles that orient and organize, perhaps efficiently, maybe not, that’s an empirical question, into an actual coherence, or that convert a society into a community that has a soul, that has an embodiment, that has a core reality that is more than the sum of the parts. That’s that. Yes, these are the same thing.
And there is, in fact, a set of cultural practices and spiritual practices that are in fact actually sufficient to produce and maintain this ensoulment or this, this actual being community.
Jim: Just for clarity, I would like to say that I would not consider coherent pluralism itself as the equivalent of one of these principles. That’s essentially a technique to get there. So I would suggest the example I gave earlier is closer to what I have in mind for such a principle, which is: a community of 150 practices human well-being within planetary limits, such as to enable and encourage natural ecological flourishing. That is closer to the things I would consider a principle in the sense you’re communicating.
Jordan: Diagnostically, what you’d want to do is actually say, okay, are there any axiomatics, underlying presuppositions or frames, that are what produce, or from which, this set of propositions is derived? Like, is there something deeper? Are there assumptions or expectations or even models of reality that are what drive this? And are there methodologies or techniques in the process of discursively trying to explicate what is meant by that set of assertions that are dispositive, right?
So do you have a juridical method, for example, that ultimately is the decider between different variations on the theme? If so, that’s actually higher in the stack because it’s the more organizing principle. Or do you have theoretic models or metaphysical assumptions that are ultimately dispositive in what these things mean, these assertions mean, in which case those end up being the organizing principles?
Jim: Well, this is a little bit of a detour, but I’m going to do it anyway, because it’s, I think, near and dear to our 20-year conversation that we’ve been having almost, which is, as is well known, I’m an anti-foundationalist when it comes to philosophy. I believe there are no foundations, there are no metaphysical certainties that we can rely upon when organizing our cultural and practical lives, that we are in the air, so to speak, doing the best we can.
And I think that’s where coherent pluralism comes in, which is that we acknowledge there are no foundations, and we are wary when somebody declares this is the truth in lots of controlling detail. Therefore, let us establish the shortest possible list of agreed principles, maybe four or five points, and then allow intelligent collections of humans who are in coherence with each other and also modeling individual sovereignty to experiment in the high-dimensional design space of human collective well-being, and then communicate horizontally their learnings, such that either over time people cohere towards a single or a small set of local operating systems, or they discover there are lots of ways to be a happy and flourishing human.
Jordan: I think you just described a foundation. So I’m not quite sure what you mean by foundation, because everything that came after the phrase, I don’t believe in foundations, look to me like a foundation. I mean, you may flip it upside down. Maybe you could say it is a core with a derivative process, but those two basically, in my mind, are the same thing.
Jim: I would say that they are a blimp in the air that is a statement of a hypothesis of how one might reasonably create a good society.
Jordan: No, no, no. The proposition that foundational reality has certain characteristics and that therefore a set of choices are proper, that’s the actual foundation. The frame that you’re applying, the basic frame that you’re operating within, that you’re using to then orient all the rest of your choices, that is the more fundamental. And the words more fundamental mean foundation. And even if your hypothesis is that my foundation is unstable, that instability is the foundation. Does that make sense? It’s the core assumption.
Jim: Yeah. Okay. Let me reframe it so that it agrees with you, but I think makes the distinction, which is: one of the things in our, I would say, current cutting-edge work in Game B is that we want to make sure that we frame it, this is what I said previously, as provisional and subject to empirical verification and adjustment over time. So we’ve explicitly cut off any metaphysical claim. Instead, we have stated a proposed mechanism, one that could be wrong and may well need major adjustment or abandonment, that may be a good search algorithm in the space of how humans can successfully live together. So if you want us to call that a foundation, then yeah, I’m a foundationalist.
Jordan: And by the way, you’re also a metaphysician. I mean, you imported about 17 metaphysical concepts in the statement you just made, right? You have the metaphysical concept of perception, the metaphysical concept of communication, the metaphysical concept of time. I mean, that’s a pretty robust metaphysical toolkit that you’re importing into your completely non-metaphysical structure.
Jim: Of course, this always comes down to the fact that I use metaphysical in the old-fashioned way that Aristotle or Kant would have recognized, not in this newfangled way that things that are natural are metaphysical. To my mind, all the things we talked about are natural. And I frame my world in the natural world that includes emergence, includes patterns, includes physical reality, such as time, et cetera.
Now, of course, there is a difference between whatever the reality of time is, which we don’t really know, and our human perception of time. I will grant you that. And what we’re really operating on is the human perception of time. But the human perception of time is something filtered by the long chain of emergence that led to our brains, which are thrown into the world, interact with whatever the underlying physical time is, and produce a human perception of time.
Jordan: So what I would propose or suggest so that you could be a higher likelihood of being successful is simply accept the fact that you are currently articulating a very robust metaphysical model that includes a huge number of axiomatics, and that simply being clear on those would allow anybody else who chooses to enter into that metaphysical model to be aware of what it is they’re getting themselves into without running into, oh, we had different ideas about these very, very fundamental notions.
And I’ll give you an example. It is impossible to demonstrate in any fashion that you have a perception of time, right?
Jim: It’s impossible to prove anything. As we know, brain in a bottle can’t be refuted. We also know, and it’s my favorite way to shoot down foundationalism, I can’t prove that the universe didn’t pop into existence two seconds ago with all our memories in place and all ballistic objects in motion, and that two seconds from now it blinks out. Can’t disprove that. So we have to be heuristic when we’re dealing with reality. Attempting to prove everything all the way down to some axiomatic set of statements is, I believe, a fool’s errand.
Jordan: Well, I mean, the heuristic is an axiom at that point, right? The heuristic is…
Jim: No, no, no, no, no, no, not at all. The heuristic is very important.
Jordan: So the axiom for you is the axiom of heuristics.
Jim: Well, the axiom for me is that there are no axioms, at least none that we yet have access to. Maybe there are, right? Because I’m also an empiricist. Maybe we do find a bottom someday, but we haven’t even come close to finding a bottom yet. And so the best we have is heuristics.
Jordan: So in that case, the god, the principle, a core principle, a very fundamental principle around which things are organized, and which is upstream of many other ways that you would ultimately choose to design things, is this axiom of heuristics: the notion that heuristics are themselves the most foundational thing, that the best way we can orient our way of making sense of reality and choosing to navigate reality is on the basis of heuristics. And then that would be effectively Athens or Athena in your religion. Or part of Athena, right? There’d be other ones.
Jim: Yeah, that would work. I mean, I could buy that. But anyway, this is very useful. Good. I’m glad we have recorded this conversation, which will give me something to think about. Maybe one day I’ll write down all my metaphysical assumptions, metaphysical in the new sense of the term. Let’s get back to what this means for this cusp that we’re coming to, to steal some language from our favorite Stranger in a Strange Land, where humanity and AI are approaching each other in some fashion that seems like it’ll be transformational.
Jordan: Yeah. So going back to the original tweet, I would say, and again, this is an assertion: we are entering a point where the lack of fittedness between the various ways that we go about trying to hold ourselves together as something more than just pure chaos, so society or community, and the set of potencies, the set of actions and choices that people are capable of making, actuated by these new technologies, is becoming untenable. That lack of fitness is becoming untenable. We’re now in a situation where we really have to get our shit together.
I’ll move it from the highly abstract language to the very, very vernacular. We really seriously have to get our shit together and stop dorking around with a bunch of bullshit, because time is not on our side. So when I look at the conversation happening among the people who ostensibly are responsible for figuring out how we’re going to navigate AI, I am shocked by the fact that they are operating with a toolkit for thinking, and by the way an embodiment of character, that seems roughly the equivalent of maybe a bright 11th grader who read a handful of very mediocre books. And that’s not the right toolkit for addressing this.
Now, part of the problem, again, is this thing that has happened over the past 500 years, in particular over the past 250 years, where the third member of the team, the commons, the church, the sacred, has evaporated from consciousness. Like we don’t even have a perception of it anymore. And we really do think that this false dichotomy of state and market is the only solution. And so everybody’s basically just beating each other with sticks over which version of that theme is the one that wins.
And in the classic model of insight, I’m thinking of the nine-dot problem with the lines.
Jim: Oh, yeah, my favorite.
Jordan: If you’re stuck in the frame, right? If you’re stuck in the frame that it’s the state and the market and those are the only solutions, you’re going to find yourself having a really hard time, because the solution is not found inside that frame. So the first move is to say, hey, there actually is a much bigger frame. And it’s only by virtue of opening up that much bigger frame that we can find the possible location where the solution is.
And as it turns out, this third term, and I need to recapitulate this third term, let’s call it the commons for now, is simultaneously more fundamental than the state and the market, which is a whole different assertion, but it’s important, and is exactly the right location to begin trying to figure out how we navigate this AI thing.
Jordan: And, for example, I think we can import a lot of the conversations we were just having. So if you use this distinction between society and community, this distinction between a disaggregated collection that is bound together by some set of what are ultimately procedural and largely algorithmic processes and something that actually has an integral wholeness, that’s a very good way of thinking about the problem of alignment. And when we say alignment, what we want to do is we want to achieve a relationship between AI and humans that is akin to community. And the effort to do it by means of the thing that produces society is why it has thus far failed, because that’s actually a category error. You can’t do it. It’s impossible. No more so than you could do it with society itself. So society itself is not aligned with itself. And trying to align AI with society by means of the technique of society, which is to say technique in general, is also a category error and will fail.
Jim: Okay, very good. This is a good step. Let’s do two things. First, our friend and collaborator, Daniel Schmachtenberger, often draws a line under August 1945, when the nukes were dropped on Japan, as when humanity crossed some line where it has the ability, in a relatively short period of time, to do itself really serious harm. And he and other folks have also talked about other near-imminent risks like CRISPR in your basement, where you take the common cold and a sample of Ebola that you got somebody to send you from Africa, and you blend the two together, right? And then forever chemicals. We don’t know what the hell we have just done to everything about humanity, introducing these forever chemicals into our ecosystem. Maybe they’re the cause of this precipitous drop in testosterone levels, for instance. I think there’s a fair chance they are, right. So what distinguishes, if anything, AI from these others? The word existential is a little too far gone, unless you’re talking about advanced civilization, because humanity will survive a nuclear war, probably. But advanced civilization won’t. So what distinguishes AI to make it even more critical than these other three? Or isn’t there any distinction, and do we really need to learn how to deal with this whole existential class of technologies that we have unleashed on the world since August 1945?
Jordan: Well, in point of fact, when Forrest, Daniel, and I were doing our sort of deep dive together, we made a distinction between what we called catastrophic and existential, or the black line and the red line. So we literally drew the two different ones. And many of the events that you just described are of the catastrophic category. And very few, gray goo, AI, are of the existential category. And the difference is the fact that AI is a self-leveraging accelerator. Forever chemicals are an output of a particular level of capacity. AI is an output that becomes an input. And that fact produces a completely different set of characteristics.
Jim: Yeah, so this is a classic Vernor Vinge, Kurzweil singularity argument. This is probably boring to most of the audience, but for the few of you who don’t know it, I will say it again. The hypothesis from Vernor Vinge originally, and probably somebody else earlier, but then picked up by Kurzweil, who popularized it, is this: once AI gets to the level of, let’s say, human 1.1, so 10% better than a human, give it the job of designing its successor. And its successor is 1.5, and it just iterates on that same thing until a week later it’s three, six weeks later it’s 100, two months later it’s a million, and then we don’t know what the fuck we’re dealing with.
Jordan: And by the way, since we’re in the middle of it, we don’t even have to use the toy model anymore, because the feedback loop happens well in advance of that point. And you even described it in the preamble: when AI gives a software developer a 3x increase in their capacity, meaning what would have taken 3,000 hours now takes 1,000 hours, then the net consequence is a generalized increase in the output capacity of the society that’s connected to that technology, of which one derivative consequence is that if they put any portion of their intelligence on increasing the effectiveness of AI, the next iteration, which we have practically just encountered, right, you just talked about using the pro version, now is a 10x increase. And so even if the AI itself is still well below the level of being able to self-feedback, the fact that it’s plugged into a larger collective intelligence system means that system’s output of increased intelligence becomes an input to increase the intelligence of that system. The meta system is following that same Kurzweil curve. That’s the difference.
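To make those two loops concrete, here is a minimal sketch, with entirely made-up growth and reinvestment numbers chosen only for illustration: a direct loop in which each AI generation designs a somewhat better successor (the Vinge/Kurzweil toy model Jim describes), and a meta loop in which productivity gains from still sub-human AI tools are partly reinvested in improving the next AI (the meta system Jordan describes).

```python
# A minimal illustrative sketch; all numbers are made-up assumptions, not forecasts.

def direct_loop(capability=1.1, gain_per_cycle=0.5, cycles=8):
    """Vinge/Kurzweil toy model: each AI generation designs a somewhat better
    successor, so capability compounds every design cycle."""
    for cycle in range(1, cycles + 1):
        # assume each successor is 50% more capable than its designer
        capability *= (1 + gain_per_cycle)
        print(f"direct cycle {cycle}: ~{capability:.1f}x human capability")

def meta_loop(productivity=3.0, reinvestment=0.25, cycles=8):
    """Meta-system loop: even sub-human AI boosts the surrounding collective
    intelligence (developers, firms, capital), and a fraction of that extra
    output is reinvested into building the next, somewhat better AI."""
    for cycle in range(1, cycles + 1):
        extra = productivity - 1.0            # output gained from the current tools
        productivity += extra * reinvestment  # assumed share fed back into better AI
        print(f"meta cycle {cycle}: ~{productivity:.1f}x developer productivity")

if __name__ == "__main__":
    direct_loop()
    meta_loop()
```

In both cases the curve compounds; the only difference is whether the compounding runs inside the AI itself or through the surrounding collective intelligence system.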
Jim: Yeah, that’s hugely important. I made that distinction in my last conversation with Forrest on the podcast about AI risk: one of the really important things that most people have missed is that AI in general, even without talking about the singularity, is clearly accelerating Game A. So if Game A is heading for a cliff, the introduction of these inexpensive and quite efficacious AI models has got to be accelerating Game A in general.
Jordan: Yes. Yeah. And then of course, you can be practical and enumerate many of the things that are quite positive about AI, the ability to innovate a very large number of new things, produce stuff like forever chemicals, right? And it’s funny, because think about forever chemicals, like, what do you call it, Teflon. So what’s the risk? Well, we don’t know. Maybe none, maybe we’re going to collapse testosterone in every single biological male. What do we get out of it? Well, our eggs don’t stick quite as badly as they used to. Fuck it, yeah, let’s do that shit. Let’s go.
Jim: Yeah, talk about lack of omni-consideration, holy shit. Right, now actually it’s a perfect transition back to the bigger conversation we’re having, which is: let’s take that perfect example, because it applies exactly to AI. If we look at society at large, let’s say late-stage, hyper-financialized capitalism as one shorthand, there are some other aspects to it, of course, but that’s, I’ve always said, the inner engine, the relentless algorithmic search for medium-term money-on-money return as the organizing principle, the soul, shall we say, of the society that many of us live in. That is the thing that is leading the encounter with AI, with predictable results. What do you see?
Jordan: Yeah, well, I would say that there are two major… Well, the technical term, again, to use the language, the term I would use would be principalities at this point. And the name of the principality of capitalism is Mammon. And the name of the principality on the other side of the equation is Moloch. And both are called… They tend to collaborate quite nicely. They hang out often at parties.
Jordan: And both are driving the structure, right? So when you look at the language of what’s happening, just think about just the language in the past month. Why is it very important that the United States takes a lead and invests $500 billion in AI? Well, because if we don’t, China wins. Well, that’s Moloch. That’s Moloch.
Jim: And that’s the classic Moloch, which is known as the multipolar trap. Moloch is now the instantiation or symbol of the multipolar trap.
Jordan: Yeah. So Moloch and Mammon are splitting it. And roughly speaking, you’d say that Mammon is what happens when the market becomes disconnected and Moloch is what happens when the state becomes disconnected, roughly speaking. And just to give a salve, we could use the term free enterprise for what happens when the market is being used properly in a well-integrated context.
So if we say we have society and we have community, society maps to Mammon, which we could call capitalism if we want to be mean, if we mean capitalism in a negative sense. And then community maps to free enterprise, capitalism in the positive sense, if you have a hard time splitting things in your head. And that’s important, right? Because unfortunately, everything has been locked into this battle between a state-oriented system, socialism, which, you know, worships a whole bunch of values that are things like people should be fair and nice to each other and justice should be enforced broadly, those kinds of things, which are all sort of values; and then a market system, which says no, no, the values are that people should be free and independent and should be able to choose their own things.
So you have a set of values that rise to the top of each one, but because they’re disconnected from a more fundamental or a broader set of core principles that allows them to be grounded and oriented into a well-integrated whole, then when they disconnect and they become idols that one worships for their own sake, you get your Moloch and Mammon.
Jim: All right. So here we are approaching the cusp, ever stronger AI, and to your point, I like the point that there’s a social mechanism to the singularity, or at least to a very rapid acceleration in AI capacity, that we’re already in. And oh, by the way, I posted this on Twitter the other day: the AI is already in control of our genome via the dating apps, right? For the longer term. Not that all couples lead to babies, but most babies come from couples, right? With very few exceptions. But anyway.
Jordan: Actually, just pause there for a second, because I think it’s useful to double-click on that. We can say, look, we’ve been entrained in this kind of a system. So the system of AI produces 3x faster or better software, which then has as one consequence an input that produces AI that produces 10x, all of which is ultimately governed by, or not governed by, but connected with, the system that holds this larger collective intelligence together, which is largely the market and the state. Which is to say: okay, I subscribe to ChatGPT, you subscribe to ChatGPT Pro, you have a more powerful model, you have more powerful software development than I do, so I’m extinguished from the marketplace, so my best choice is to subscribe to Pro too, which means I send more money to them, which means they then have the resources to deploy to increase better AI. That loop, where the money piece, which is part of the fabric that enables the collective intelligence to operate, is critical.
The reason why I point that out is: just take a look at what our mating choices have looked like in the last, say, 70 years. What happened when, under the influence of society and under the influence of market incentives, the very, very high-IQ people started being selected out of their embodied context in Oklahoma and all ended up in San Francisco, mating with each other and producing offspring that are somewhat non-trivially selected, just by the sheer fact of who was mating with whom, producing what is actually going to look like a different variation on the human genotype?
So the AI has already been at work selectively breeding humans to produce humans who are more capable of producing AI for at least two and a half generations.
Jim: Even before there was AI, actually.
Jordan: In the way that we talk about it, exactly.
Jim: Financialized capitalism, late-stage financialized capitalism actually as a side effect produced that, which has accelerated the road to AI to some degree.
Jordan: All right, please continue from where you were. I apologize for the di—
Jim: Okay, this is all perfect. So I’d like to get you to wrap on what might happen, what the trajectories might look like as humanity and high-powered AI actually make their encounter, barring some change in how we choose to manage it, i.e., if it’s managed by the state and the market and no other considerations are brought in, what might happen?
Jordan: It just becomes an entropy machine. So you will have an evaporation. There might be moments where there’s local concentration. So if you zoom in and say, okay, I want to look at what happens at one level of analysis: oh, there will be hyper-concentration of power in those locations that are closest to the feedback loop on the accelerating curve of intelligence. So to be very, very specific, Elon Musk, Sam Altman, you know, those clusters, right? Those kingdoms will have a hyper-concentration of power. And there will be, of course, therefore, a hyper-evaporation of power as you move further from that center.
And just the example we were talking about, if I’m a software developer and I am able to afford and pay and use the 100x AI backpack and you’re using the 3x, you’re basically either dead or doing a very, very niche function until you’re dead. Okay, cool. So it’s a hyperconcentration. That’s one.
And then the second: there will be a really significant degree of willingness, and this is why the entropy begins to go up, willingness to dispense with all other values downstream of what are actually the core values at the top of this principality, right? So if you take a look at the principality, we are already in the process of being part of something that worships something. And what we’re worshiping is effectively the feedback loop between intelligence and power, between these two.
So as people are moved off the board because they fell too far away, what ends up happening is that everybody who is now at the furthest edge of this accelerating cycle will, by necessity, have to streamline their choice-making landscape, orienting more and more and more towards just the pure feedback loop between power and intelligence.
Jim: I would put a potential label on that for at least that interim period. Neofeudalism, right? There will be lords who control resources. There’ll be knights who are able to afford the 100x tools. And then there’ll be yeoman farmers who have the 3x tools. And then everybody else will be on welfare, essentially. So that’s one.
Jordan: With a big, big difference, by the way, and it’s very important to make this difference. In the context of feudalism, the vertical relation, so fealty, an oath of fealty, went both ways. And the top of the stack was not the king. The top of the stack was God. And so you actually had it that the king was king not only because he had local power, but also because he had sworn a reciprocal oath of fealty up and down, and he owed his fealty up to God. Which is one of the reasons why his values of don’t rape and pillage the peasants held: if he does that, everybody recognizes that he has violated his responsibility as a king. He may get away with it, right, that’s not the same thing, but it’s clear that he’s violating a core responsibility.
And in the neo-feudalism that we’re looking at, because we have now drifted into society and we don’t have anything that is organized in that hierarchy of values such that the lower-level values have anything other than instrumental participation in the value at the very top, which we just described, those values will begin to be dispensed with. And so the yeoman will be useful precisely to the degree to which the yeoman is useful. And when he is no longer useful, he will be dispensed with.
Jim: That is a distinction, because there was an often violated but nonetheless existing moral framework for feudalism. And in this case, as you say, it’s purely game theory dynamics. Which gets me to my next point. While I am a little bit less optimistic than many about the short-term arrival of AGI and ASI, Artificial General Intelligence, i.e., human level or a little higher, and Artificial Super Intelligence, I do believe they’re coming. The question is when. And I can give the counter-arguments for why I don’t believe they’re close, if that makes any sense. But that probably doesn’t matter. Let’s assume that they are going to arrive, if not in 2027 or 2028, then by 2035 or 2045. We then have the possibility of an AI singleton, as some people call it, what Eliezer Yudkowsky warns about: the first person, the first entity, it won’t be a person, the first entity that gets true artificial superintelligence will be able to dominate and kill off, or if it wants, subjugate all the other lords. So you’ll get the thing that never happened in Europe, at least not for long, which is a top-level, stable emperor that can dominate everybody else. How does that fit into the story?
Jordan: It’s a plausible endpoint, right? Because if you just look at this accelerating concentration, if there is not a stable equilibrium at what would be a global oligarchy, then there would be a global empire, right? And the difference, of course, is that the global empire would have access to the recursive competence of accelerating intelligence to be able to do things that no previous empire has ever been able to do. So it strikes me as being a hypothetical, like plausible. In other words, I mean, yeah, that could be, that could be an end state.
Jim: That doesn’t really matter for the story.
Jordan: No, it doesn’t. Because at the end state, all of those themes, whether it moves into an oligarchical form or an imperial one, ultimately degenerate into pure entropy. That actually is never stable. It just becomes pure entropy. And in this case, what I mean is that all properly oriented values will ultimately evaporate. And we will find ourselves, well, we will find ourselves no longer being ourselves. We’ll either be not existent in the Jim Rutt physical sense or not existent in the Jim Rutt spiritual sense.
Jim: And of course, that’s the next step on the possible Singleton argument is that it’s no longer an emperor, no longer a human emperor. The AI takes over.
Jordan: Yeah. Yeah. Which, unless the human AI hybrid is able to achieve physical immortality, is ultimately just a matter of time. Or the human AI hybrid becomes an actual cybernetic hybrid, which would be a blend between the two. And by the way, what I want to point out is that’s still not stable. None of those scenarios are stable. What ends up happening is that once it moves into the algorithmic, once the algorithmic becomes the dominant force, the algorithmic is society. And society, that entire construct, that ontological construct, has an end state. If it is not connected to communion, then it will end up as pure entropy in a finite time, which in this case would be pretty quick, actually.
Jim: Let’s drill into a little bit when you say entropy, what specifically do you mean? And if you could land some tangible examples that would make sense to the audience, that would help.
Jordan: Oh, sure. Yeah, here’s a very nice example. So let us begin in a really nice small town where there’s a strong sense of community. You know your neighbors, your neighbors know you, maybe multi-generationally. And there’s a local coffee shop that is run by a local woman who just has a cheery, rosy personality and knows everybody by name. Okay. And you go in, you say hi to your neighbors, you have a cup of coffee, you talk to people. It’s a very pleasant experience. Good stuff happens. All kinds of cool things happen. Not the least of which is some young people have met each other and are likely to end up becoming a couple and having kids that are part of the community. So that’s the rich, well-integrated, whole sense.
Now, let’s migrate that over. I’m going to skip a few steps and go into a Starbucks in, let’s go with, I don’t know, Manhattan. So I go there. There’s a bunch of other people, but nobody knows anybody else. And to interact with anybody else would be definitely a violation of a basic norm. To even smile or make eye contact would probably invite some sort of terrible thing. The person behind the bar satisfies the minimum viable necessity of the function of providing you with a beverage in exchange for a token of control known as money. Asks your name for the purposes of writing it on the cup, because that individual does not know your name, will forget it immediately, and probably writes it down wrong on the cup.
And here’s the best part. They don’t even put coffee in your cup anymore. They now put in some kind of high-fructose slurry, which is designed to maximally destroy your body’s feedback loop systems and your homeostasis in order to get you to come back again, right? That’s entropy, right? There’s this movement, and the image, it actually is a drift, right? It started at the level of the actual coffee shop, embedded in a lived context and part of a vital wholesomeness, and under a series of algorithmic feedback loops it drifted towards a simulacrum of that, separated from all of that, from its soul, right, separated from its soul, having nothing but the core raw minimum viable elements and inertia that kind of look like the thing, until you reach a point where, like, holy shit, that’s not even the thing at all, and then it just evaporates.
Jim: All right, that makes it very clear. So let me play it back, and tell me if I got it. The referent for the thing that is undergoing entropy is community.
Jordan: In general. Yeah. Yeah. It’s not, you know, metal or water, you know, things we think about from the physics perspective.
Jim: Yes, yes, yes.
Jordan: Yeah. This is the entropy of culture, essentially, or community.
Jim: Community. Yes, that’s nice. I think that’s what you’re talking about.
Jim: Okay. So that makes perfect sense. I like that. That’s entropy, I would say in quotes. I’m not sure it’s a classic canonical example of entropy, but I get what you’re pointing at. Makes good sense.
Jim: All right. Now let’s make the turn. We have this possible trajectory or trajectories of the encounter between humanity and advanced AI, where the only operants that could have any influence on its trajectory are the state and the market and their respective live players. What is the alternative? What does it actually look like?
Jordan: Yeah. So going back to the beginning, the two-step process, well, I guess it’s two steps here and then a triangle here. So, yeah, don’t forget the soul side of the tweet, part two of the tweet and part one of the tweet. Part one is we awaken to the reality that the mode or the domain or the territory of the commons, the church, is available and is, in fact, the proper location for this kind of work. What I mean is the proper location for the kind of work I’m calling communion, or the kind of work whereby a multiplicity are brought together into a well-integrated whole. And to repeat the language, where you’re using some set of cultural practices and spiritual practices as part of that process. And it’s funny, like that seems wildly simplistic, but that is exactly what we’re talking about.
Now, there’s a seriousness to that that is not available in modernity. I’ll give you an example. When I was spending time in Hawaii, one of the Hawaiians, an actual kahuna, mentioned to me that back in the day, in the Hawaiian religious context, there was a particular kind of king where if your shadow fell on him, you would have to be killed immediately. That’s just serious. And they did it. It wasn’t just a hand-wavy kind of thing. No, that was actually how it worked.
Or think about the samurai. Remember the story of, I think it’s called the 47 Ronin. There’s a bunch of ronin, let’s just say there are 47. And they’re at war, and they’re supposed to be protecting their daimyo. And some sort of betrayal happens, where one of his great allies betrays him and is able to kill the daimyo. So now they’re in a conundrum. Because honor commands that, because they are samurai and they no longer have a daimyo, they have to commit seppuku. But honor also demands that they get revenge for the betrayal. Like, he wasn’t actually killed on the battlefield, he was killed by betrayal. So they’re in a paradox, which is in fact a religious paradox.
And so what they do is they commit to a lifetime of living as ronin, which is to say living as dishonored samurai and accepting the sociocultural consequences of being dishonored samurai, going from the top of the hierarchy to the bottom of the hierarchy, being the lowest of the low, like not even being allowed to clean people’s toilets and shit like that. Up until the point where they would successfully infiltrate the betrayer’s castle, enact justice, kill him, and then all commit seppuku, which takes like 20 years, right? So a commitment to 20 years of living this life of utter commitment to wearing the mantle of being utterly dishonored, in pursuit of potentially achieving the goal of being able to exact justice and then living up to honor, right?
And what I'm saying is that's the kind of seriousness that actually lives in this category. We should be very clear on that, right? I'm not indicating that we should be engaging in a large amount of arbitrary killing. But what I'm saying is that questions of life and death live at this level. They don't properly live at the level of the market and the state. So that's one piece, just to kind of brush it out. What it looks like is it looks like church. It looks like what happens when a group of people are deeply, seriously committed to coming together and engaging in cultural practices that include the most humble, in fact orienting themselves in the direction of humility, the most humble basic things. And all of that is ordered by a vertical set of values that takes it up to the highest.
That's what it looks like. And, by the way, many people talk about not wanting to have a priestly class around AI. What I would say is that's actually just a really great way to get a bad priestly class, because you're going to have a priestly class. It is unavoidable. You're going to have some group of people who are uniquely focused on the most critical questions and are uniquely capable of supporting other people in doing it. The question is whether you've got bad priests or good priests. And so at least being honest about it allows you to try to figure out what it would look like to have good priests. And I can tell you right now, we have a shitty priestly class in AI.
Jim: Okay, now let’s look on the right-hand side. Now the right-hand side is what happens when you actually do this work.
Jordan: So alignment. Okay. Hypothesis: alignment can't happen at the level of humanity because humanity, qua humanity, at least so far, does not yet have a soul. And we're not going to solve that problem anytime soon. We could maybe, by fiat, say, well, the first thing we've got to do is actually bring the whole of humanity into communion, and then we can align AI with that. Okay. That's at least...
Jim: Maybe in a hundred years, if you’re lucky, right?
Jordan: Right, that at least in principle could work, but in practice, not likely to happen in the time frame we need. However, we can begin the process of constructing AIs that are coming into communion with individual humans.
Now, this has three elements to it. Element one: as we have seen, and we've now seen it a number of times, Michel Bauwens actually recently, like in the last 48 hours, has written a Substack post, he and Venkatesh Rao pointing this out, that it does not appear to be the case that compute is an unassailable moat. We're seeing more and more evidence that while it takes a substantial amount of compute to discover something, once it's been discovered, it can be recapitulated and leveraged at three orders of magnitude less expense, maybe even more, with almost no change in capacity.
As we're going to see, there will be a set of surges where innovations land and then propagate out very rapidly. What that means is that the opportunity to produce what would be a perfectly personal AI, and by that I mean the hardware is literally in your control and in your physical location, and it is biometrically bound to you at a hardware level, is theoretically and economically possible. It's practical. It's practical right now. I have friends who've stood up DeepSeek R1 on a big-ass computer.
And that AI can, I'm going to make two moves here. One, in principle, it can continue to maintain something like symmetry with what's happening in the big centralized compute clusters. And two, this is the next move, think about this, I'm going to go from personal AI to intimate AI. I will make an assertion: it is increasingly appearing to be the case that one of the major differentiators in AI usefulness is the training data. Better training data produces a better AI. And generally speaking, with the very specific limited exception of things that live on the other side of the Great Firewall of China, most objective training data will be homogeneous. It'll be a commodity. It'll just be part of a marketplace of data that you can train on.
But that incredibly intimate training data having to do with you specifically will not be. And so the distributed training data that lives in the field of intimacy, in the field of relationships between Jim and his personal AI, which is trained on him specifically and broadly, holistically, across him and his relationships, and has that as the organizing principle, by hypothesis produces a more functional, more effective AI than can be available under any circumstances from a generalized AI. And what that means, going back to our free enterprise versus Mammon problem, is that Jim would choose to pay his money for that AI over the centralized AI, because it would literally be more functional and useful for him. It could do his software coding better. It could do the other things he'd like to do better, make fewer mistakes, be more helpful, more useful. And he has the feeling of safety that it's not going to be screwing him or betraying him or selling him out, because it's designed specifically for the purpose of being his.
And now one more piece. We are moving more and more into an environment of extremely high information risk. I mean, think about what's happening with phishing attacks, and with pseudo-AI people that are calling you, and the ability to glue together larger and larger data sets. Your intimate AI also becomes the boundary. It's your fortress between you and the infosphere. So nobody ever calls you. Everybody only calls your personal AI, which is now able to act as effectively a version of your Gmail spam bot, but now it's yours. And it's operating at a level of capacity symmetric to what's attacking it, and usually even better. So this produces a different topology. Notice that the topological characteristic has shifted completely from the story that I told earlier. So that's all at the strategic level. Strategically, this provides an alternative path. This is different from the oligarchic, techno-feudalist, or techno-imperial path. This actually looks like something that includes a very large number of people, most if not all.
Venkatesh talked about this as being the rifle and the printing press of this revolution. And I think he's right, by the way. I was really happy to see that, because I've been working with a team on exactly this thesis for a little while now, and Venkatesh and I don't talk at all. We haven't talked in like seven years, but the ideas are completely convergent.
What happens now, I'm going to move from two back to one. Moving back to one, what we discover is, well, what does it look like for your intimate AI to actually be able to be aligned with you? What that means is that you have to be aligned with you. You have to be coherent with yourself. You have to have a soul. You have to recover your soul. Very practically, that means things like: you have to achieve a clarity on what your values are, what your value hierarchies are, and notice the degree to which you live according to your own values.
So one of the primary functions of your intimate AI, of necessity, would actually be to act as something like a wisdom coach, something that would help you do that. In other words, for it to do its job properly, you have to have your shit together well enough for it to be able to align with something. But you could, right? You can actually have integrity. You can actually behave in a fashion that indicates that you believe the things that you claim to believe. And you can, over time, have a degree of wisdom and maturity so that you can actually orient your values and your purposes properly.
Obviously, you won't have that initially, and certainly not when you're young, but it is a possibility. In the landscape of reality, it is possible for a human being to have a coherent and integrous set of values and purposes. And so the essence of the intimate AI is a responsibility for supporting you in coming into that state and then supporting you in living that way. And this is what allows it to ultimately be governed by your soul. And that's what church looks like.
Jim: I like that. I got a whole bunch of comments, as you can imagine. First, I don’t know if you’ve looked into what Joe Edelman and friends have been up to the last few years, but they’ve actually built…
Jordan: Not in the last year, but before that, a lot.
Jim: Yeah, they've made quite a bit of progress in their values card system and some automated, and ethical, it seems to me, ways to help people discover what it is that they value, which, I think, is operationalizable enough that it could actually be clicked into your personal intimate AI today. It would add some value to it.
Second, I'll remind you, we had this conversation in 2013. We talked about the info agents sitting on each of our shoulders that would talk to each other and mediate the discussions of all humans in some useful fashion.
And third, I'll add, as people who listen to the podcast know, I refer to this every third podcast or so as the trillion-dollar opportunity. I keep saying, if I were 45, this is what I'd be working on right now: the thing that insulates us from the shit of the infosphere. And, oh, by the way, and this is absolutely critical, this is what the idea of the intimate AI doesn't fully capture: my info agent aligns itself with other info agents to produce a reinforcing network, a meta-network, which is almost identical to your idea of the Civitas, right? And so, now that I'm thinking about it more, I'm less enamored of the purely individual AI, and more of a node of AI-ness that's associated with me and tightly bound, maybe by Joe's values cards, but also intimately linked to an ethical network of other AIs and the people behind the AIs that I trust.
Jordan: Well, it's just that you have to start at the intimate one, with you, because the notion of ethical implies clarity of values and purposes and integrity, right? So to get to an ethical AI, there has to actually be an ethical person. Then they can begin to connect with each other. And in fact, by the way, here's the beautiful part of reality: they will, because collaboration in service of mutually beneficial behaviors is an intrinsic aspect of ethical people operating in reality.
Jim: Okay, that's good. I like that. All right, now let's flip back the other way and go: okay, that's a great idea, Jordan, but in reality, this question will be decided. Let's assume you're right that this is the best way forward. It might be. It may be the only way forward. But now it's an empirical question: does it, can it come together quickly enough to avoid the oligarchy or the imperial singleton?
Jordan: Yeah, let’s just leave that right there.
Well, I mean, it can. You used two words, and I think they're the proper ones: can and does. It can. And what I mean is this: the mobilizing capacity of the global economy and the global infosphere is such that this kind of idea can propagate rapidly. Just think of how rapidly certain ideas propagate across the infosphere. Just pick your favorite. The one that just popped into my head was the Ukraine flag. The Ukraine flag went from being in zero bios to, who knows, a billion bios in, I don't know, a second.
So ideas, good and bad, can propagate rapidly through the infosphere. So good ideas can propagate rapidly. And the global economy can actually assemble and produce very sophisticated things very rapidly and deliver them very efficiently. This is sort of the benefit of all the stuff that's been going on for the last X years: if we were to choose to produce something like this, we could design it. We could develop probably two or three variations on the theme, with maybe some core design principles that help them be able to be part of the same consortium.
The idea that that's a good idea could be disseminated very rapidly. Then you could begin the process of iterating on it in actual implementation space. I'm talking, like, by the end of the year, in 12 months, you could get a product in the field that would be colorable. And then you're off to the races. So the answer is it can.
The question of whether it will, interestingly enough, going all the way back to the beginning, ultimately ends up being a spiritual question. Meaning, do people have the ability to choose on the basis of what is good and true and beautiful, and to slow down enough to actually notice whether or not their choices are oriented towards the things that they value the most highly? Or do they choose on the basis of expediency, strategy, power, and fear? That is a spiritual question. And that, I think, is the decisive question, ultimately, at the end of the day.
Jim: It is indeed. It is. It absolutely is. And this is the goddamn thing that you and I have been talking about since 2008. How do people, well, you and I have sort of escaped, but how do we get the most talented people to escape the clutches of Mammon and Moloch and to resonate with some other higher values in the decisions they make and the way they live?
Suppose this podcast goes ultra viral. It won't. I've got a midsize audience, and for whatever reason, very seldom do these things go viral. But let's say it does. And suddenly all the smartest AI people go, yeah, Jordan's fucking right. We all have to abandon our capture by Moloch and Mammon and organize as ethical communities. And we will drive the AI wave forward in this way. And we will stop working for the oligarchs. That could do it.
Jordan: Yeah, it's interesting, because it's something like a proper priestly class. Right now, our priestly class is a bad priestly class, and I'll use Sam as the caricature in my model, because I don't know him at all and can just sort of make him a caricature. But the proper priests are, in fact, the ones who joined OpenAI in the beginning because they really, really wanted to build something good. They knew they were going to be building something powerful, and they really, really wanted to build something good. And their intent and hope and aspiration has been thoroughly betrayed. And they're now being deployed to create something which is not good and increasingly not good.
The power lives in them, not in capital, not in strategy, but in their capacity. They have to go through something that allows them to choose to shift over to a different place. Yeah, it could happen very quickly. And by the way, this is like you and me, and we're talking. I've had this conversation with maybe, I don't know, five people. It sounds like Venkatesh is thinking about it and maybe having his own set of conversations. Michel Bauwens, you know, saw his stuff and published it. So we're talking about something like 30 people.
But the point is this: the more people who are operating from a place of virtue, that is, from a place of orienting towards the highest values and endeavoring always to live their lives in accord with those values, whose purposes and values are aligned, the more they can add to that conversation. And where there are questions that are not well considered, and where there are answers that are poor, those can be resolved relatively quickly.
We saw this in the Game B days. Remember, particularly Staunton three and Staunton five, we went through a process where we had enough people, we went from, what was it, like eight to 12 people to like 40 or 50, all working together with a complete focus on a higher goal, just giving up everything for the question: what does this collaboration look like? And we went through an exponential teleport. We just rapidly increased our perspective on reality.
And again, that's the thing, right? If just enough of the right people chose to come together with a dedicated focus on this problem, we would be able to operate in an orderly fashion. Again, this is what church is: are you willing to commit your time and orient towards the highest, thoughtfully, carefully? And by the way, in the Christian vernacular that I've become used to, there's a term for this: being convicted. I learned from John Vervaeke that this may have negative connotations for some people, but here's my experience of it.
My experience of it is that somebody who is in my church, somebody who I am in communion with, points out to me where I am acting in a way that is not in alignment with the values that I profess to believe, right? Which is to say that I'm engaging in some form of sin. The most recent example is right before I got married. You know Cassandra, don't you? Have you met Cassandra? Cassandra's a friend of ours from San Diego who's moved to Nashville. So Vanessa and I get baptized. Later, Vanessa calls Cassandra and says, hey, I got baptized. Cassandra says, that's wonderful. Now I've got to tell y'all, y'all are living in sin. Like, what do you mean? You're not married properly. Oh, shit, right? Convicted.
When I hear that, what I hear is that there's nothing about her trying to manipulate me, right? She is trying to tell me the truth for the purpose of helping me live the values that I have now expressly decided I am going to orient towards, properly and fully. So what happens to me? Interestingly enough, I go, yeah, you're fucking right. I am not living in alignment with the values that I am professing to believe. That's being convicted.
Yeah. And then what do I do? Well, you've got to do two things. First, you've got to be celibate until you get married, and you've got to get married properly. Go. Let's do this thing. So notice what happens there in terms of collaboration. If you're in that experience where you're like, yeah, I actually trust that the people around me have not just my best interest in mind, but this notion of good faith, big picture, meaning they actually hold the things that I hold at the highest level of my values. They share those at a level that is beyond my own ability to hold them.
And this is, you know, soldiers do this, like when soldiers are ready and willing and able to die and to support the thing that we’re all willing to die for, that thing that’s bigger than we are. Like, okay, I am orienting myself towards something that is bigger than me, that I value. This is, by the way, what the word sacred means, which therefore allows me to deal with my own bullshit with enough rigor and commitment that I can navigate towards being more and more capable of living the life that I aspire to.
And I can help other people because now when somebody pokes me, while my ego might be a little bit wounded by that, my ego is not in charge. And so my ego has to say, yeah, you’re right. System update. I was wrong. Thank you.
So that's, again, what I mean when I say church, or one of the things that I mean.
Jim: Yeah, very cool. And it makes total sense to me. As we know, us humans are very imperfect, right? We are just barely over the line to general intelligence, if we're actually even over the line, right? I had a really good conversation with, who the hell was it? Oh yeah, Jeff Hawkins, the guy who wrote On Intelligence. He's got a newer book, A Thousand Brains. And his argument is that AI is probably safer than we think, so long as we don't build the reptilian brain in. But we all have reptilian brains, right? So this concept that it's exactly the right thing for us to be embedded in community that mutually self-corrects is, to me, an obviously good idea.
Jordan: Yes, obvious, absolutely necessary, and shockingly not present in almost any location.
Jim: Yep. Let's say you moved into an apartment building in New York City, right? Do you think anybody is going to help form your relationship with the doorman?
Jordan: Probably not. And no, not probably not, just not. And here's the worst part. What happens is when community converts into society, the version of what we're talking about becomes ideology. Ideology is a simulacrum of this more real, wholesome thing, which is a properly ordered, actually lived set of values. Because you can't actually live any ideology, it's literally impossible, but you can browbeat other people into changing their behavior on the basis of an ideology, which is why it exists.
Jim: All right. Any final thoughts? Otherwise, I think this has been a fucking amazing conversation.
Jordan: Yeah, I feel really good. It feels like we did something quite beautiful. So thank you for forcing me to have a conversation.
Jim: Well, thank you for the thought, as always, that you put into these conversations.