Transcript of EP 337 – Philip Rosedale on Emergent Worlds, Localism, and What Building Second Life Taught Him About Humanity

The following is a rough transcript which has not been revised by The Jim Rutt Show or Philip Rosedale. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Philip Rosedale, a longtime tech entrepreneur going back to his days as CTO of RealNetworks. I was an early user of their Rhapsody product, way back yonder, and I really liked it. But he’s best known as the founder and CEO of Linden Lab, the builders, and still to this day the builders and operators, of Second Life, the legendary and still running massive scale, open ended virtual reality platform. I was a member—I think I probably still have an avatar on there somewhere. But anyway, it was a really interesting thing. I first met Philip at the Santa Fe Institute back in the double aughts. I’m not quite sure what year. Do you remember what year you came out to Santa Fe? I bet it was probably 2006 or ‘7. That’s what my memory would say, 2006. He gave an amazing talk about the Second Life economy and especially the operation of their equivalent of a central bank, which managed the Linden dollar supply, and it was really interesting and made me think a whole bunch. We reconnected fairly recently when we were both members of the board of the California Institute for Machine Consciousness, where we discovered we had many shared interests. So we stayed in contact. It’s been fun, and I thought it would make a lot of sense to invite Philip on for one of our worldview episodes. So I’m going to start with the usual thing: Philip Rosedale woke up this morning, whether rapidly or slowly—everybody’s different. Who is this? What is this Philip Rosedale that woke up this morning from being not awake?

Philip: We’re obviously all talking right now with AI about that. What does it mean to become yourself in the morning? And are you a new person every time? And what does this tell us about consciousness? It’s funny you say that because I think—I don’t know whether it was reading some of your stuff—I’ve been kind of trying to notice that lately when I wake up. How do the thoughts come together? I think I’ve been working on this app that we both have been talking about a lot, and I think I came to the surface this morning imagining something working or not working in that app. That’s my memory that I had. And kind of becoming myself in light of what I had done last night, I guess, which was working on this thing. I was using Claude to build it.

But lately, it’s just been a heck of a lot of this bottom up, top down. How are we going to fix the incredibly weird world we’re in? I think that feeling like it’s some sort of a knee in the curve—you always say that, I guess. You always figure you’re at the knee in the curve, and then if you look at the exponential, you know, rescale it, you find maybe you’re not. But it does feel to me like we’re—I guess “polycrisis” is a term that people have used, but it’s a complicated world right now. We have a lot of stuff going on, and I think when I wake up every morning, I’m kind of thinking about what I’m working on right now that relates to all of this craziness.

And I was thinking about our conversations and reflecting on how the troubles with little tiny things like neural networks in some ways—powers of ten, you know—they look like the troubles that groups of people have figuring out how to do things together, and they look like the troubles when you almost look at a whole world as an ecosystem and you look at these problems. It seems like there are kind of similar problems at all those scales. So I know that’s something that I find myself flickering back and forth between. Do I worry about playing around with AI and trying to make AI better, or do I back up, pull the camera back and say, hey, wait a second—my best purpose, my greatest utility, the particular things I know, it might be better to just work on software that helps people get along better.

Jim: Sounds like your sense of yourself is very role based. Right? That you think of yourself when you wake up in the morning as deeply embedded in your projects.

Philip: I think so, and I also feel as if that’s changed over the years. I think I’ve mellowed. I guess that’s what we all do as we get older, but I was messianic when Second Life happened. I felt like I really had been called in some sense by the need to build a virtual world and that virtual worlds would be absolutely transformative to human life. And you’re right, I had a very role based sense of myself because I felt certainly that I was the person that was supposed to be doing this.

And now I think—I hope on the best days at least—I can take a more Zen perspective and just say, isn’t this amazing? I think right now, with all the AI stuff, if you can distance yourself from whether you’re the breakfast or the breakfast eater, if you can distance yourself from all the complicated questions that it raises, you just have to look back and go, wow. We’re really turning over some interesting rocks here as humans, in terms of what we’ve been able to do lately.

Jim: Yeah. Remember, for most of the history of the universe since the Big Bang, there wasn’t even life. Then maybe three and a half billion years ago, a little longer, life came along. The hominid line, maybe six million years ago. Homo sapiens, 300,000 years. What is our place and role in this universe of ours?

Philip: Well, of course, there’s that kind of techie utopian perspective that I deeply pull back from. It just seems gross, which is to say that we’re but an unimportant stepping stone on the path toward the universe realizing itself. And of course, great big data center driven AIs or something is the next step on that thing. I don’t go that far, but it certainly does feel like by being able to rebuild some of the things that make us human—some of the mental activity so far, I guess, that we know does make us human—that’s pretty amazing.

I think it does suggest that the complexity of living things, like you said, has shifted. What’s interesting about living stuff on Earth has shifted dramatically and exponentially over time. And so I suspect that what we’re seeing right now gives me more confidence at least that we’re going to see more of the same thing. There is not a special magic that we enjoy that no amount of AI work is going to wrest from us. I don’t think that’s true.

Jim: So should humans’ roles be to stabilize humanity as what goes forward, or should humans consider it okay if our silicon descendants become what perseveres in the universe?

Philip: Oh boy. That’s a whopper, isn’t it? Well, I’m going to respond to that and say: think local. That is to say, I think our duty, if we have one, is to do our best to align with each other at the scale of our immediate neighborhood—families and groups and people working together and people building things. I think our role is to do that as best we can, given the local circumstances we find ourselves in.

I think where we get ourselves in trouble is where we try to make all statements. Like, we need the world to work the following way for everyone. That strikes me as incorrect. I don’t think we have a big role in that sense, but I do think we have a local role, and maybe we’re going to learn more about that in the coming years as some of our larger scale structures kind of melt away and we have to build something else. It’ll be an interesting opportunity. It’ll also be scary as hell.

Jim: From a more enlightened perspective, whatever you can get to moment to moment, it’s just amazing. Let me come back to my question again, which is, should our worldview about what we’re up to be about preserving the human line, Homo sapiens, going forward, or does that really matter?

Philip: Gosh. I mean, as an individual Homo sapiens, I feel like there’s a dynamic equilibrium that is struck between us and whatever comes next and that we can hang around and do that. For the love of my friends and family, I feel like I’d like to stick around and see what they serve for breakfast at this bar, as an old investor of mine once said. I think we have a duty to each other to try to be relevant and enjoy our lives. I don’t think we’re going to be the most intelligent thing in the universe. Like I said, that’s probably already over, even here in 2026.

Jim: It’s something I’ve been thinking about more and more lately, that this is a key line. In fact, I’m considering calling my friend Doug Rushkoff up and saying, hey, I think I want to borrow your Team Human concept and extend it, because he’s used Team Human as a very smart lens on the Internet world—what we’re doing wrong with it, what we’re doing right, etcetera. But as I think a little more deeply, we have much deeper issues now. It’s funny—now the Internet is kind of okay. It’s annoying, it’s messed up. But compared to nuclear war or artificial superintelligence or bioweapons built in people’s basements using AI, it’s almost laughably trivial in some sense.

Philip: I must say, on Team Human and Douglas Rushkoff—he’s a friend, and I have tremendous respect for his work as well. In fact, yesterday I was talking to him and Team Human about an essay I wrote a few days ago called “Awakening the Angels,” about AI. I think his worldview is the answer to your question. He has struck a wonderful balance between humility and embracing the uniqueness and, as he would say, the teamness of who we are.

Jim: So let’s get back to the more basic Philip worldview. This universe—is it real? Are you a brain in a vat? Are we inside a simulation? I think the fact that we’re experiencing a similar—we use the word “objective”—we seem to be in a universe in which there are a lot of things we’d agree on. There are a lot of measurements we can make with great accuracy that seem to have the same outcomes for the two of us.

Philip: And that feels to me like a good thing, and there are shades of even Second Life in there, which we can certainly come back to. But the fact that we enjoy a shared environment seems to be one of the things that keeps us coherent and supporting each other and kind of living together in a way that’s even vaguely positive. So I would say, yeah, I think it’ll turn out that there’s something objectively sound we’re living in. Big thinkers like Wolfram and folks that are kind of looking at metaphysics certainly have a lot to say about this. But as a kid who grew up with physics and erector sets and building things, it feels like there’s an objectively real world out there, at least some of it.

Jim: I also use that as my operational metaphysics, but I acknowledge we can’t prove it. We can’t prove that we’re not in a simulation, but the perspective I put out in my minimum viable metaphysics essay on Substack—which I’m, by the way, going to put out a version two of in the next week or so—I essentially say it doesn’t actually matter. It turns out you need to make the reality assumption if you’re going to get any real traction in the world.

Philip: Yeah. And that idea that physics is somehow just what we all agree on, with no underlying reality to it, is unsettling. As you say, we can prove no more than that; we can’t disprove that it could be true. But the idea of the consensual hallucination, in which physics itself is defined by the average of all our perspectives or something, feels like a stretch to me, although it could be.

Jim: Yeah. I’ve long resisted that attempt to overapply quantum mechanics and come up with a view like that. Don’t get me started. I think it mostly comes from people not understanding the measurement problem. The measurement problem isn’t what you think it is, people. I think I’m about to write another Substack article on that. The moon was there before there were any humans to look at it.

Philip: Exactly. I agree with you. As a person with an undergraduate physics degree—I’m not a specialist, but as a person who really loves physics and moved to Northern California in 1994—yeah, I’ve had to ride out a lot of conversations that didn’t seem very physical to me about quantum mechanics.

Jim: If you look carefully, often too much LSD was involved.

Philip: And I’m just that guy at the party who’s like, well, okay, I’ve had enough of this. I’m going to have to rain on a little bit of your party ideas here.

Jim: Yeah. I like that. I’ve done that a lot, especially with the real woo woo crowd. I go, I don’t know about that. Let’s talk that through. Alright. Let’s go back to something else you said, and I think this is hugely important, which is that while we can’t prove any of these metaphysical assumptions—we could be a brain in a vat, we could be in a simulation—this idea of being in conversation with other people and that in our minds we see that we’re all, or most of us at least, the sane ones, are seeing things sort of the same way. We trip over the same bump in the rug, for instance.

Philip: Yeah. Exactly. There’s an alignment. There’s a greatness to the fact that we agree on reality. And I think you and I have touched on this before, that there is a real danger that the progress of technology, as demonstrated by the last 20 years, has been to reduce the coherence of that worldview. When you call somebody on Zoom and you see a different room behind them, you’re looking through the prison glass. I always say it’s like that experience in a movie when they put somebody behind the prison window, and it just creates this incredibly visceral sense of separation between you and them in the film.

I think that Zoom and the Internet more broadly has, as a negative impact, done that to us—that yes, we can see all the way across the world, but no, we don’t think we’re there with those people. And virtual worlds are, of course, a twiddle that plays with that idea differently, because you can get a very different sense of whether you’re there with people when you’re in a virtual world.

Jim: Yeah. Let’s switch directions a little bit. Uniquely among the people I’ve talked to in these worldview conversations, you’ve actually created a universe, and at least implicitly, it had a worldview. Talk a little bit about your very earliest formative thoughts about worldview and about Second Life—what it was and what was real, what wasn’t, and what were the perspectives. It’s just amazing. You literally played god, which I love.

Philip: Well, to tell that story, let me back up a little bit. I think you’ve heard some of these stories, but there’s a tattoo on my forearm here that people can probably see, and it says “cohesion, separation, alignment.” For those who know the expression, cohesion, separation, and alignment are the rules of flocking in birds and fish and such. They were guessed at in an experiment called Boids in the 1980s by a fellow named Craig Reynolds—I bet, Jim, you’ve probably met him—who guessed that if you just followed three simple rules—stay together, fly in the same direction, and don’t run into other birds—you might get very beautiful computer simulations of birds. He did this in the late eighties, for a film I’m sure, and it worked. They flew that way.

As a young kid learning to program, I wrote that same code because I saw it somewhere and tried it. I probably had 20 little dots on the screen that were birds, and I hit go, and they moved like birds. And that was a religious experience for me, or whatever. It was a life changing sense that there was complexity and interesting stuff to be seen and found inside of computers even though they were these kind of simple things at face value. They could create complexity.
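[The three rules Philip names really are the whole algorithm. A minimal 2-D sketch in Python, an illustrative version rather than Reynolds’ actual code; the radii and weights are made-up tuning values:]

```python
import math
import random

def step_boids(boids, neighbor_radius=50.0, separation_radius=8.0,
               cohesion_w=0.01, alignment_w=0.1, separation_w=0.5,
               max_speed=4.0, dt=1.0):
    """One tick of Reynolds-style flocking: cohesion, separation, alignment.

    Each boid is ((x, y), (vx, vy)); returns the next state of the flock.
    """
    updated = []
    for i, (pos, vel) in enumerate(boids):
        neighbors = [(p, v) for j, (p, v) in enumerate(boids)
                     if j != i and math.dist(pos, p) < neighbor_radius]
        ax = ay = 0.0
        if neighbors:
            # Cohesion: steer toward the neighbors' center of mass ("stay together").
            cx = sum(p[0] for p, _ in neighbors) / len(neighbors)
            cy = sum(p[1] for p, _ in neighbors) / len(neighbors)
            ax += (cx - pos[0]) * cohesion_w
            ay += (cy - pos[1]) * cohesion_w
            # Alignment: match the neighbors' average velocity ("fly the same direction").
            avx = sum(v[0] for _, v in neighbors) / len(neighbors)
            avy = sum(v[1] for _, v in neighbors) / len(neighbors)
            ax += (avx - vel[0]) * alignment_w
            ay += (avy - vel[1]) * alignment_w
            # Separation: push away from anyone too close ("don't run into other birds").
            for p, _ in neighbors:
                d = math.dist(pos, p)
                if 0 < d < separation_radius:
                    ax += (pos[0] - p[0]) / d * separation_w
                    ay += (pos[1] - p[1]) / d * separation_w
        vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
        speed = math.hypot(vx, vy)
        if speed > max_speed:  # clamp speed so the flock stays stable
            vx, vy = vx / speed * max_speed, vy / speed * max_speed
        updated.append(((pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)))
    return updated

# Twenty random dots, stepped for a while: the cloud pulls itself
# into coherently moving groups, much as Philip describes.
random.seed(1)
flock = [((random.uniform(0, 100), random.uniform(0, 100)),
          (random.uniform(-2, 2), random.uniform(-2, 2))) for _ in range(20)]
for _ in range(100):
    flock = step_boids(flock)
```

Plot the positions over the steps and the flocking is immediately visible, which is presumably the “hit go, and they moved like birds” moment.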

I back up and say that story because that, and doing cellular automata in a similar way when I was about the same age—kind of junior high and high school learning to program—made me think that there’s got to be a way to define some kind of rules of physics on a computer and then make a big world that’s like some gigantic erector set or something and let people go in that world and do stuff.

So the formative concept behind Second Life was that it’s an emergent system. Crucially, there were things like World of Warcraft at that time. And the people building World of Warcraft and many of the amazing artists that have built video games historically viewed it as a very Old Testament kind of a God activity, where they say, well, these shall be the rules by which your God will rule you. The physics were the proclamations of the rules of the game, or your interactions with the gods of the game. And by comparison, I wanted to do something different. I was enamored with this idea that if you just built some sort of a canvas that had sufficiently interesting low level rules, you’d get emergence—the game of life and all this stuff would come out of that.

So that was very much what I wanted to do as the creator of the thing: create some sort of unstoppable thing. We used to talk about it in the early years of the company—how can we make this so that you can’t turn it off? That was our obsession. It was the exact opposite of, say, Sony with EverQuest, which was a popular multiplayer role playing game at the time. They wanted to turn everything off. When people first introduced money, which is an emergent idea, into EverQuest, Sony sued them all. When people first started trading Linden dollars, which were the built in currency of Second Life, I was delighted. I reached out to all of them and said, oh my gosh, let’s see how much people will pay for these magic tokens in a virtual world.

So my worldview in building it was weird in that I was very passionate about the idea of it being an emergent system with fixed simple rules. And then, as it took off and became a phenomenon with millions of people trying it out, things totally changed about that. There’s the founder’s intentions, and then there’s the reality of what the thing became, which was something much more about human beings and about how human beings relate to each other, and it broke a lot of those game of life-ish rules that had been my dream at the beginning.

Jim: Well, let’s drill into that. That’s just amazingly interesting. Because you think about the eighteenth century deists, for instance, who—not at all like the Christian, Abrahamic, Islamic, Judaic religions—don’t imagine that their god gave a bunch of rules and regulations about whether you should eat oysters or work on Saturday or anything like that. He basically created a universe with some clockwork rules, still very Newtonian in its fundamental physics, and then just let it be. And what happened is what happened. And that essentially was your original goal. As we would say in complexity science, you wanted complexity from simplicity.

Philip: Perfect. And those clockwork rules were colliding objects. It was things of a meter or so in size that could bump into each other and move as the laws of hard scattering and physics dictate. So yeah, exactly. That was very much my idea—world from small simple rules.

Jim: And then where did that start to break down, and what did you do about it?

Philip: Well, people wanted to have bodies. That’s the first thing. It was funny. At the very beginning in Second Life, before we launched anything—just what we were all using in the office, where there were probably 10 of us sitting in a space in Hayes Valley in San Francisco looking at each other over the desks—our initial avatars were big flaming eyeballs. We thought it would be like a 1950s car idea, like it would have flames on the side of the eyeball. So we had these eyeballs that were just big horrifying floating eyeballs like a monster movie. But we put flames on the side because we thought that was even funnier.

The idea there was to orient the eyeball to look at where the person was looking. So the center of your screen was exactly where that eyeball of yours was looking. And by doing that, you were seeing the attention of the people in the world in real time. You could see that somebody was looking over there because maybe they were going to build something over there, because you could build everything from day one. You could just make—or “rez,” as we called it—things into existence in the world. So the avatars were initially just an indication of attention.

But as soon as people got in there and there were other people in there, they said, hey, I want to have a beautiful avatar. I want to have flowing hair, which by the way is about a $200 million a year industry in Second Life today. So hair was something that everybody wanted. Beauty. Everybody wanted to be attractive to each other. They wanted to say something about who they were. They wanted to have cool clothes, all that stuff. So that is where things diverged from the deterministic substrate idea that I had been so enamored with.

But of course, how the real world got from, say, physics to hair salons was 13 billion years of evolution.

Jim: But you didn’t have 13 billion years.

Philip: And I had it imposed on me, I guess. It kind of felt a little bit like all that evolution showed up on day one and said, hey, ground rules, Rosedale. We need avatars. Go figure that out. And so we made a bunch of hacks. We did a bunch of stuff that I still regret. In fact, today Second Life is very hard to get into because it is so beautiful and so complicated. Putting something like wearing high heeled shoes on an avatar involves moving the avatar’s collision body up a little bit off the ground because you’re now standing on heels that have some height. Little things like that became all these weird exceptions and hacks that we did to make people able to really create those avatars.

But then, of course, there’s a psychology layer which emerged above that—how people’s lives were affected by the nature of what it was like to be in a virtual world as avatars. And of course, what’s interesting about that is that they had a completely different set of interpersonal experiences compared to, like, Twitter, compared to the way things were going on the broader Internet. Second Life didn’t become as big as the Internet. It got big enough to be an interesting A/B comparison in some of that stuff.

Jim: That’s also very interesting. We have maybe—contrary to what you might have thought when you got started—the kind of low fidelity Twitters and Reddits of the world became huge, while Second Life became big but never became huge, even though it’s much higher dimensional and much richer in context. Thoughts on that?

Philip: So many thoughts on that. But one is that text is pretty powerful. Everybody—you and me especially, by that I mean we got started doing a lot of online interaction with people in the nineties, even perhaps before the Internet—we all know that text is a powerful medium. And that’s what we’re looking at with AI right now. The power of people to communicate with text is quite good. I always say you can fall in love with somebody and get married to them in Second Life over text alone. And when you first meet them in person—I’ve seen this happen in the real world, by the way, which is pretty cool—amazingly, the people that have fallen in love and gotten married in Second Life, when I’ve been there when they met each other as humans for the first time, it’s amazing because you’d think that the Hollywood story would be that they wouldn’t recognize each other or whatever. The reality is they instantly do. It’s weird. I’ve seen it happen, as if they sort of knew each other’s movements or something even though they’d never met. I don’t know how that happens, but it happens somehow over text. Second Life was all text in the beginning, by the way. We didn’t even have voice.

But I do think that there is an ability that people have to communicate over low bandwidth mediums that dominated everything, and it comes with its pros and cons. You can make mistakes about understanding somebody, especially early on in text. And Second Life kind of leaned into the other way, and the problem with that other way is the uncanny valley. If I start being literal and saying, okay, I’m going to give you a body, and other people can see that body—now that body is not expressing your nonverbal communication like I’m doing right now in the video. That body that you have in a virtual world is plastic. It can’t move the way yours does, and it doesn’t convey the same information. So it’s just a trade off between those two extremes, I guess.

Jim: Yeah. Uncanny valley—for listeners, that’s something Jaron Lanier may have come up with. I don’t remember, but he was the first one I heard it from.

Philip: I’m sure that Jaron, who’s fantastic, has talked a lot about the uncanny valley. But the idea actually, I think, goes back to the beginning of the twentieth century, when it was noted that when you’re presenting a human likeness to someone, errors in that presentation become very, very upsetting as it gets closer to reality. The classic example of this is why Disney made animated films: they didn’t try to draw real people, or real faces especially, because it’s too hard. Even on the Internet, there’s a phenomenon of using nonhuman avatars and such. A nonhuman avatar is much more believable to you because you don’t have the fine grained memory and prediction of what somebody’s face should do. But when you show a real face—I don’t know if you remember The Polar Express—that thing creeped me out something fierce.

The Polar Express was the one everybody in 3D is like, oh god, Polar Express. And of course, some people are like, good for them, they were really pushing the edge. But there’s something wrong with trying to animate Tom Hanks, I think, in The Polar Express as the train conductor. You can’t get it quite right because we know what Tom Hanks looks like. And so when you try to animate Tom Hanks, it just doesn’t work yet.

By the way, I think AI is going to do that. And by the way, I think once AI does that well—and by well, I mean can present an emotionally believable version of someone’s face to somebody else—it is actually going to be a breakthrough for virtual worlds as well. Because I think the lack of nonverbal communication is what is ultimately the reason that Second Life is half a million people hanging out together and not, as you said, something huge.

Jim: Where did you guys come out? Because I remember being on it in the very early days—the avatars didn’t look anything at all particularly like humans other than in the most rudimentary sense.

Philip: You should see them now. I was going to say, where did you guys come out on how lifelike the avatars are? This is one of those podcast moments where we could put up a little picture behind me or something if I could do it. They are now—when they are not in motion, for a static image—photorealistic and in many ways almost even beyond that. There are avatars that people create in Second Life that are just really works of art. However, as I said, it remains true that their faces and their bodies don’t move in a way that in real time communicates my state, and that’s the big problem. And that’s the thing that I think can change with AI.

We have this word called “puppeteering” in virtual worlds, where we talk about trying to move an avatar to match the way your real body is moving, say, because we can see it with a camera or something. It turns out that just doesn’t work. It’s a dead end. A lot of fun science and geeking out has been wasted on it, but it doesn’t generate anything that’s even close to believable because you’re kind of—you’re almost literally taking one person’s body with strings and puppeteering it to match another person’s body. That just never works, I guess, unless you’re one of those unbelievably skilled puppet handlers that actually do it in shows and stuff.

Jim: Interesting. Because that is going to always be an issue, though, as you point out. I have looked at some of the most recent AI video generation tools, and at least for a short scene, they are now believable.

Philip: And that is exactly what I’m saying. There’s a product with which I made a little video of an avatar of me animated in exactly that way. It’s a little bit offline—as you know, these things are not yet real time, but they’re very close. And when the latest AI crosses over to real time, so that you could be hamming around and have that reprojected onto an avatar—which is what I think you’re talking about—we’re actually going to see something really interesting, because people will start using it in virtual worlds.

Jim: Yeah. Or the even—I don’t know about more interesting, but—I have my avatar, but I have my camera on, and I’m actually driving my avatar by going—I don’t know, maybe having controllers or something. And it’s also seeing my face, seeing my body language. That’ll be a learned skill, how to perform in avatar space.

Philip: There’s a nerdy little aside here that perhaps some of your listeners will enjoy—I know you will—which is that if you can track where somebody is in front of their screen really accurately, you can do another neat trick: you can create a three dimensional view through a normal screen. You’ve probably seen demos of this, where as you move your head from side to side, the virtual camera that looks from your side of the screen into the three dimensional world moves with you. So I can move my head to the left and see what was around the corner on the left. And that actually works. A guy at Microsoft Research—I think it was Jimmy, I can’t remember his last name—did this with a Nintendo controller. He stuck it on a hat like the one you have on, then moved in front of his screen and shifted the three dimensional view on the screen, and it was just electrifying.
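[The head-tracked trick is usually implemented as an off-axis projection: treat the physical screen as a fixed window into the scene and place the virtual camera wherever the tracked head is. A minimal sketch, with made-up units and function names:]

```python
def head_coupled_frustum(head_x, head_y, head_z, screen_w, screen_h, near=0.1):
    """Off-axis view frustum for a screen treated as a window.

    The screen rectangle is centered at the origin in the z=0 plane;
    (head_x, head_y, head_z) is the tracked head position in the same
    units, head_z > 0 in front of the screen. Returns left/right/bottom/top
    extents of the near clipping plane for a glFrustum-style projection.
    """
    scale = near / head_z  # similar triangles: screen plane -> near plane
    left = (-screen_w / 2 - head_x) * scale
    right = (screen_w / 2 - head_x) * scale
    bottom = (-screen_h / 2 - head_y) * scale
    top = (screen_h / 2 - head_y) * scale
    return left, right, bottom, top

# Centered head: a symmetric frustum, i.e. an ordinary perspective view.
centered = head_coupled_frustum(0.0, 0.0, 0.6, 0.52, 0.32)
# Head moved 10 cm to the left: the frustum skews, so the viewer can
# "look around" the edge of the window, which is the effect described.
shifted = head_coupled_frustum(-0.10, 0.0, 0.6, 0.52, 0.32)
```

Feed those extents to any standard perspective-frustum call each frame and the screen behaves like a pane of glass rather than a picture.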

But what that will allow is eye contact, which is, by the way, one of the biggest things that makes virtual worlds also not work—you can’t tell who’s looking at who, and we have this internal mental map of that, which is incredibly important, particularly when you’re with strangers. And it’s the reason why we all hate Zoom calls with more than two people.

Jim: Interesting. Now another nerd aside here—humans are one of the very few species that have whites around the irises of their eyes, and it’s thought that that’s for social communication about co-attention. If an animal has just dark eyes, I can’t actually see where it’s looking. But with the whites of the eyes, I can.

Philip: And then there’s an evolutionary thing, which you mentioned earlier, which is the whites of the eyes evolved despite the vulnerability. For example, if I can see the whites of your eyes—which as you said, we all can, and we have a great cognitive map of that—when you look away, because I can see your eyes glance just down to the right, I can take the food you have on your left and keep it for myself, put it in my pocket. And so I love the fact that not only do we use our eyes to signal, but we do so with vulnerability. There is a downside to knowing where the person across from you is looking precisely. And yet evolution has preferred it, which I believe is an optimistic thing. We are evolved to cooperate, not to have conflict—more to cooperate than conflict—and that’s why the whites of the eyes are there.

Jim: Yeah. And I remember Sherry Turkle had an early—it wasn’t really AI, but sort of computer driven little robot. And one of the things that she discovered was just this, and she added really big eyes to it that were very emotive with big eyelashes and big whites and all that sort of thing.

Philip: Yeah. It gives you more angular accuracy, which is why it feels good. It’s like a caricature of taking those kinds of attention navigation features and blowing them up so they were three or four times as salient as they otherwise would be.

Jim: Right. Let’s switch to one of the other aspects of any worldview, which is ethics. What is right and what is wrong? How did that evolve in Second Life?

Philip: I think the thing that most affected Second Life’s growth was the existence of groups, and I mean that broadly in a number of different ways. But from the very beginning, people in Second Life wanted to assemble into small groups as a means of regulating a small neighborhood, because people would own land, and the land would be adjacent to each other in Second Life, which you can do because Second Life is just a big—literally a big Los Angeles sized piece of open real estate that’s parceled up into small pieces. And so people from the very beginning started negotiating what we traditionally kind of mean when we say ethics. What are we all doing together, and what are the rules that are going to go with that?

In fact, in the early days of Second Life, it was quite fun because there was a lot of—it kind of felt like the Hippies or something in the sixties. There was a lot of experimentation in co-living. There was a lot of experimentation in what those rules should be. Should you have a deeply redistributive society? Should you not use money? Should you follow some political system? Should you mirror a political system from the real world? I remember there was a group called Route 66 that was trying to kind of relive America’s interstate age or whatever feel you’d have for that. And then there were people that were trying to be communists. So there were all these great social experiments that went on.

And so I think what emerged from that—you’re asking overall—the problem with Second Life is it’s got a lot of different little communities.

Jim: Well, that’s okay. I mean, that’s kind of the meta ethics—that there could be many ethics.

Philip: Exactly. So there was a kind of a meta view to the place. But I think what emerged from it is that actions have consequences in these groups. There was a respect for the authority vested in a collection of people. And I think that by comparison, in the ethics of, say, Internet entrepreneurship, we went in this other direction, which was toward this kind of intensely individualistic sort of Ayn Rand—I alone am the source of truth. I am sovereign as an individual. I am a mountain. I am an island.

And I think when I look at Second Life, maybe what’s interesting to talk about is that there was this collectivist experimentation that became ultimately a kind of a structural norm—the participation in groups. And I think that makes Second Life even today feel very different. People getting in your face and saying, well, this is what you should do, in a way that feels refreshing and not like the Internet that we—not like Twitter, not to pick on Twitter or X or whatever.

Jim: But you know, one of the things we know from the real world is that there are always deviants. There’s the dark triad—sociopaths, narcissists, and Machiavellians. There’s always about one, two, three percent of people who don’t want to play by the rules, and won’t, if they see it to their advantage. How did emergent ethics of that sort evolve in Second Life? How does the equivalent of police and justice and that kind of stuff work?

Philip: Well, first of all, large groups—for example, large land masses in Second Life—do have very real police forces, and those police forces have the threat of eviction. That is, they can kick you out of the group, which in some cases could mean that you won’t be able to access that space at all. So you literally are cast outside the city gates by that action. So I think there’s a function that comes out of that.

Now, one of the interesting observations about Second Life—which, as you said, has deviant individuals with very different mindsets in spades, as you might imagine—is trolling, which did become a little bit of a thing and did color Second Life. And this has happened in other virtual worlds as well, where the negative behavior of the trolls actually defines the culture of the space in some sense, which in many cases is problematic, but it’s kind of scientifically interesting to look at.

I think what I was going to observe—and maybe this is interesting about Second Life—is that no one deviant has had a particularly large impact on Second Life. Now maybe that’s because there’s some dimensionality or some connectivity that’s missing. Again, Second Life is much more of a local neighborhood than it is a global phenomenon, which by comparison—I’m going to say, like X or Twitter—is dominated by power law champions. You’ve got these very, very strong compounding effects. Second Life didn’t actually do that, and I must say I’m not exactly sure why. There’s no deviant in Second Life that we all remember over these last 20 years.

But there are lots of little differences in how people interact, more so than in the real world. You have very unusual—Second Life is a collision of different groups which need or want to be in Second Life, but in many cases come from totally different backgrounds. Like, for example, crypto utopian sort of people and stay at home moms who have three kids and are basically going to spend the next two or three years doing absolutely nothing on the social side after 5 PM and have found Second Life and are like, well, this is cool. I’m just going to live in here while my kids sleep in the other room, and then I’ll go back to work after that. So you have very, very different types of people that got stuck into this space together. Feels like San Francisco or something in that regard.

Jim: I think this is actually very interesting. What you’ve accidentally discovered is there’s something about the topology of connectivity that produces a certain kind of social emergence.

Philip: Well said. I think that’s right. A more local topology for a virtual space—another way to say this—results in substantial differences in the kind of culture and ethical outcome.

Jim: Yep. Very similar to our Game B concept of the membrane, where you can create membranes of any scale. You can make them as strong or as weak as you want, but they do exist. And at least it’s our hypothesis that that kind of membranics gives a couple of opportunities. One, that you can have very strong sauce—you can actually stand for something—because if it’s a small, voluntary group, I don’t see any ethical problem with having as strong an agreement as all the people agreed to. And at the same time, you can have pluralism at a large scale, or as you point out in Second Life, people are doing all kinds of different things there. And because the topology is mostly local, they don’t have a huge impact on each other.

Philip: Yes. And the concept of the membrane in Game B is, I think, what people demanded and then successfully built for themselves inside Second Life. And maybe it was just that they weren’t dissuaded by the nature of the system from being able to do it. If you imagine a membrane in Twitter, it’s really hard to find. There’s not really a structure that I can think of in there because you’ve got the one directed follow, which is obviously a very porous kind of—it’s not a membrane. It’s just a field. Literally. That’s a good word—a field. That’s exactly what it is.

And so membranes—people built them in Second Life. And so I think to your point, it’s proof that the membrane is a vital organizing principle, and we’ve got to really reflect on that and think about Second Life being an example of something where locality and these well defined groups were just implicit in the system from the beginning. And so it gave you a very different outcome. And I know you and I are in the same space thinking about this. But yeah, obviously it’s a better outcome—by any definition of better—for people to let that happen.

Jim: Yeah. And now it’s interesting to compare—we talked about Twitter with its sort of scale free networks, where you have everything from people with three followers to people with what, 150,000,000 followers. Well, you look at Facebook—they made some different design choices. I think you could only have 5,000 friends, something like that. Further, they clearly have the concept of the membrane, which is their groups. I’ve used Facebook groups multiple times quite successfully. Now they have their issues, but there’s a different model which has done quite a bit better than Twitter. Doesn’t it feel a little bit like—I bet you’ve reflected on this—that the Facebook groups were a kind of an early moment of rightness?

Philip: That would have been 2005 to 2014 or something. Right? Somewhere in there.

Jim: Yeah. Maybe a little later. Up till—I wish I had some good ones—2017, 2018, 2019. But after that, there are still apparently some good ones, but an awful lot of them are just clickbait.

Philip: Yeah. And it feels like there was a moment there where the right kind of percolation was happening. You had groups and groups and groups in the way that you’ve described in Game B, and then there was a bleaching phenomenon. These lower quality, longer range connections just totally dominated discourse. We all wanted to be the top post on Reddit or the YouTube—I guess YouTube would be the most impactful. We all wanted to be the YouTube that everybody watched. And that broke away from the idea of membranes and well defined groups and kind of washed us out into the mess that we’ve got right now.

Jim: Yeah. I haven’t used a Facebook group for anything serious in quite a while. Still have a pretty nice Game B Facebook group that’s active, but most of the ones I used to be involved with have been invaded by crypto peddlers and things of that sort. It’s really quite annoying. And isn’t that a question for you? What happened there? What was it about crypto that seemed to have just kind of taken some good ideas and run really fast in the wrong direction?

Philip: Oh my god. Wasn’t that a curious thing?

Jim: I was involved a little bit in the early, super early crypto—I did read Satoshi’s white paper about a month after it was published. And I actually still have some little tiny fractions of Bitcoins on a computer in a closet someplace. In the early days, you could send an email to somebody, they’d send you back a little bit of Bitcoin. It’d be worth like a millionth of a cent, probably worth a couple of cents today. But anyway, so I had been thinking about it for a long time, and I actually helped launch a crypto project in 2017. Then I got sort of known for that, and everybody’s brother wanted me to help with their crypto project. And I remember the emergence of cocaine dealers in the late seventies, and they were far nicer, more ethical people than many—not all—of the people launching crypto projects in 2018, 2019. There’s something about it that attracted some of the worst people I’ve ever seen. Not to say there haven’t been some great projects, because there are, and there are ones that I like and support. But 99 percent seem to have been projects envisioned by clear sociopaths looking to rip people off.

It’s an important warning, I think. Certainly Satoshi thought that what he was doing was for the good, as did the guy who did Ethereum, and as did us early Internet, even pre-Internet, people. We were sure the online networks would make everybody better citizens, better informed, less craziness. We all had crazy uncles even back in 1981, but we knew they wouldn’t dominate this new network thing. Unfortunately, they do. The same was true for crypto. Unanticipated consequences from what seemed like a brilliant design.

Philip: There’s an essay, I think, from last week or maybe the week before from Vitalik, who is the guy that, as we know, built Ethereum. And he just made a short essay, and he said exactly what you just said. He said, hey, what the hell happened here? He basically said, we all intended for this to be of service to humanity, and I’m just going to say the quiet part out loud—he said, it’s not. I built it, and it’s not. Statistically speaking, it’s a giant hustle, and it’s a negative sum game within that hustle. And it’s just terrible.

Jim: Could you send me a link to that? I’ll publish it alongside our conversation.

Philip: I sure will. Yeah. It’s a good few pages. Vitalik writes stuff infrequently, but it’s wonderful stuff. So I’ve been meaning to have him on the show. Maybe this gives me an excuse to reach out to him.

But going back to what you said, you’re right. It’s curious and a warning. The way I’d try to put it is, the mindset of some of the crypto folks—not all of them, like you said, but some of them—was that a decentralized system can and should be used in the most aggressive way possible to get ahead, and there isn’t any greater truth or law than that. There’s just this idea that a distributed system can just be mechanically used to maximize one’s position, and nothing else need be true. And that’s wonderful somehow. Like, yay, that’s the way the world should work. And it just absolutely isn’t.

And I think you’re right, it’s a warning. Crypto has been—hopefully we’ll look back and go, oh yeah, that helped us realize that this probably isn’t a sufficient design. The idea of every man for himself with a Bitcoin address is just not an MVP for a stable society.

Jim: Now I also take away—and I know this will be more controversial, and this is a position I’ve held for 42 years since I first started building online products—that anonymity is generally not a good thing. If you’re not accountable at some level in the real world for what you do—not always, there are a couple of exceptions—things tend to head for the trash can. I remember this from the very early days. We had on The Source back in 1982 something called Participate, by a guy named Harry Stevens who was a professor at New Jersey Institute of Technology, and it was the precursor of social media essentially. And it was really good. It was an extremely clever design, which nobody has ever duplicated. I’ve been tempted more than once to just duplicate it. But anyway, we ran it on The Source, had thousands of users, of course lots of fights and trolling, all this good stuff before we knew how to even do any policing in that culture.

But as an experiment, we made a clone where you had to use your real Source ID, which was attached to your name. Then we tried another experiment: what would happen if we did one with pure anonymity, where you just got a generated random number? And it became a dumpster fire and shit show within two weeks, and we shut it down.

Philip: Boy, that sounds like one you should have a video of for people to see. But yeah, exactly. Anonymity—here’s the way I’d put it. The ability to be anonymous should be at layer zero of the meta. You also probably can’t build a society where someone knows everyone’s real identity, and if they want to exploit that for themselves over time, of course that’s going to happen as well. So I always think the layer zero of the network needs to be anonymous. But immediately above that—and this is very much Game B—you build groups and enclosures and membranes that immediately create identity through belonging. And then, of course, that becomes—to put it in simple terms—when you’re posting crazy stuff on the Internet, it’s your groupmates that are saying, hey, Jim, whoa, whoa, whoa, this is going way outside of the bounds. We’re going to have to take away your tweeting privileges. That’s the thing we need.

But I think its lowest level is anonymous, and what we missed was that anonymity alone is not at all stable. You don’t just run anonymous. Humans don’t operate anonymously.

Jim: Right. Famously, Wall Street—even run by sociopaths—your word is your bond, and if you break your word on a transaction on Wall Street, you are banned from Wall Street for life. And that’s because they know who you are, and you’re accountable for what you do. And that has, at least in that aspect of Wall Street trading, produced a pretty high fidelity system.

Philip: And what we’re about to see is a bunch of OpenClaw agents that are day traders. And I’m wondering—because I think you just put that so well—what are they going to do when their lunch buddies say, hey, you’re out of line there? We’re going to have an enormous problem. If crypto was a warning, I think millions and millions of OpenClaw instances that are trying to be personal assistants for millions of people worldwide—we know what’s going to happen. The statistically most common instruction is going to be: go hustle people for money and then bring it to me and stay anonymous. I mean, that’s what we’re about to see go down, and we don’t even need to see the movie to know how it ends.

Jim: Though, of course, listeners know—I haven’t ranted about it recently—but there is a trillion dollar opportunity out there. One of you young folks, not an old geezer like me, should go do this, which is the personal agent to defend you from all that. I’ve been saying this for three or four years, that we need a personal agent that does all kinds of things. But most importantly, it doesn’t allow stuff through our membrane, so that we become more or less immune to these kinds of plays. And, oh, by the way, it networks to other people who are thinking like us, and we, say, share our own curations with each other. We can, if we want, poll our close associates and say, what do you think about this stuff? And then post the results back to some self forming network node. Very obviously not well thought out, but something like that is what this demands.

Philip: I wonder if that idea isn’t—I want to restate it because I think it is such a tractable idea for a youngster out there, like you said—which is just to configure something like OpenClaw as an information filter, an agent for you. That’s a great idea. And then the thing I’m struck by there is that the incentive for OpenClaw is to just make you happy, peaceful, satisfied, willing to keep running it. Which is an individual edge relationship that is healthy. It’s literally trying to help you be happy with your information consumption, which we know Twitter isn’t doing. Their incentive is not for you to be happy.
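
To make the filter-agent idea concrete, here is a toy sketch in the spirit of what is being proposed. Nothing here reflects a real OpenClaw configuration; the marker strings, interest list, and function are all hypothetical:

```python
# Illustrative "membrane" filter: keep items matching the owner's interests,
# drop anything that matches a known hustle pattern. The key property is
# that it answers only to its owner, not to an advertiser.
HUSTLE_MARKERS = {"guaranteed returns", "act now", "limited mint"}

def filter_feed(items, interests):
    """Return the items worth the owner's attention."""
    kept = []
    for item in items:
        text = item.lower()
        if any(marker in text for marker in HUSTLE_MARKERS):
            continue  # blocked at the membrane
        if any(topic in text for topic in interests):
            kept.append(item)
    return kept

feed = [
    "New paper on emergent economies in virtual worlds",
    "Guaranteed returns! Limited mint of 10,000 tokens",
    "Celebrity gossip roundup",
]
assert filter_feed(feed, {"virtual worlds", "economies"}) == [
    "New paper on emergent economies in virtual worlds"
]
```

Because its only customer is the owner, tightening the marker set or the interest list serves the reader, which is exactly the incentive alignment Philip points to.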

Jim: And interestingly, when we think about topologies and network flows, one of the things about OpenClaw—and hopefully we’re talking about things less insane than OpenClaw, something that fits the needs of a super powerful personal agent—is that it is yours and it is operating for your benefit. I don’t remember who coined this, but it’s just so obvious in retrospect: if you’re not paying for a service, you are the product. And yeah, Facebook isn’t in the Facebook business. Facebook’s in the advertising business. And of course, they have network effects such that the thing that baits you in is of some value to you, but they’re the ones who are monetizing it. They’re the ones who twist all the features to maximize monetization, not the benefit to you. They have to give you just enough benefit that you don’t leave. That’s basically all they have to do.

But if the topology of a personal agent is that it’s on your computer and you grew it essentially by tweaking it gradually over time, the only relationship is between you and it. And so I suppose that should give us a warning that OpenClaw-like things that are offered on a centralized platform are probably a bad idea.

Philip: I want to put in a plug from another perspective I’ve heard from you, so I just want to restate it. I think part of what you’re alluding to is this distinction—I think you were telling me the other day about micropayments versus free. And I think what you said was free is a very intoxicating offer, and as we both remember, the early Internet had this question of, are we going to have this all be kind of commerce? Imagine Bitcoin micropayments all the way back in the mid nineties or something. That could have happened. Or are we instead going to do this bait and switch of, hey, I’m going to give you all this stuff for free, all you can eat, and then once in a while there’s going to be these advertisements in it?

And then of course in the earliest days of advertisements, we all made the mistake, I think, of saying, hey, wait a second, I don’t want to see tampon advertisements. I’d like to see electric motorcycle advertisements. So what could be wrong with letting Amazon or others know what my preferences are?

Jim: Seemed like a good idea at the time. Yeah. I don’t know if you remember the book Free by Chris Anderson. He was the one that I think summed it up most strongly. He basically says, if you can’t figure out how to make your product free and supported by attention hijacking, you will be killed by somebody who does. And unfortunately, it turned out to be true. Now part of it was the ecological niche of the micropayment, for whatever weird reason, never formed up in the United States at least, or the West more generally.

And by the way, kids—if you can figure out—45 year old kids, that’s kind of weird that a 45 year old’s a kid. Goddamn it. Hate getting old. But anyway, figure out how to do real micropayments so that if I want to read a Wall Street Journal article, I don’t have to go through this baroque process of signing on to the Journal on a two week trial and then canceling it and all this crap. I just say, I’ll pay one cent for the article. And everything already has your information, etcetera, and the transactions are aggregated so that the back end payments problems don’t exist. There’s been such an obvious use for that, and it could have produced a very different Internet culture if it had existed.
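
The aggregation Jim describes, where tiny charges are batched so back-end payment fees don’t swallow them, might look like the following minimal sketch. The class, threshold, and publisher names are assumptions for illustration, not any existing payments API:

```python
class MicropaymentLedger:
    """Accrue one-cent charges locally; settle with the payment network
    only once a publisher's balance crosses a threshold."""

    def __init__(self, settle_threshold_cents=500):
        self.threshold = settle_threshold_cents
        self.balances = {}   # publisher -> accrued cents
        self.settled = []    # (publisher, cents) batches actually sent

    def charge(self, publisher, cents):
        self.balances[publisher] = self.balances.get(publisher, 0) + cents
        if self.balances[publisher] >= self.threshold:
            # one real payment covers hundreds of one-cent reads
            self.settled.append((publisher, self.balances[publisher]))
            self.balances[publisher] = 0

ledger = MicropaymentLedger()
for _ in range(499):
    ledger.charge("wsj", 1)
assert ledger.settled == []              # still below the threshold
ledger.charge("wsj", 1)
assert ledger.settled == [("wsj", 500)]  # one aggregated settlement
```

The payment processor sees a single 500-cent settlement instead of 500 one-cent transactions, so a fixed per-transaction fee is amortised across hundreds of article reads.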

Philip: I agree. It can’t be just us, because you and I have talked about this before. I want to go to the news sites right now, especially given the immensity of news at this moment and the difficulty of filtering it. And yeah, I just want to pay for the article. I don’t want to read The Wall Street Journal much of the time. But like you, I frequently run into a Wall Street Journal paywall and then say, screw it, I’m just going to bail on this. But there’s an in between where I would have happily paid a nickel to read said article about, you know, oil or something.

Yeah. I think this should again be another warning to us that what seem like neutral infrastructure decisions end up having massive emergent cultural effects.

Jim: Well said. I wouldn’t even add anything to that.

Philip: I think that Second Life is really a proof of that too, from the world building perspective. You cannot have the hubris—and maybe this helped me out as an entrepreneur—to think that you can predict what’s going to happen in a world like Second Life with basically fixed low level principles. Yeah, it’s just obviously—you ran the institute that almost put that in bold Latin proofs. But the idea that you would have such hubris and be able to predict the future is just not true. And I guess I got a good look at that.

Jim: Yeah. People often will ask me, well, what did you learn from being at the Santa Fe Institute? How amazingly little I actually know.

Philip: Yeah. I think we’ve both talked about that too. At a large level, it feels like the delight of the present moment and the horror of the present moment is we’re learning more and realizing that we know fractionally less.

Jim: Yep. That when you really understand complexity, your ability to call your shot with great confidence very far into the future just doesn’t exist, which basically means that you need an emergent engineering perspective—which is to operate locally, have some guidance far out, but hold that far out guidance lightly. And it’s really difficult. That’s not how we were all trained. We were trained to write our five year business plan and then go out and raise $2,000,000 and go do it. That worked. But that’s not really how our modern world works.

Philip: It’s such a good point you make as both of us entrepreneurs. That five year business plan—I mean, did you ever not laugh when you saw that? I was always like, really? Seriously? I’m going to have to send you a deck with five years of projections? Come on.

Jim: Yeah. My day back in the eighties, it was absolutely expected—a 20 page at least written business plan. Talk about dead trees for no good reason.

Philip: So true. And remarkable how much times have changed. As you said earlier, we’re getting older, and it is staggering to look back at the business practice of the mid nineties as it related to starting companies and how really pretty different it is from today. Like you said, human history is narrow, but if we went back another hundred years before 1995, the business practice was I think probably in many ways more similar to ’95 than what we’ve got today.

Jim: That could well be true. Let’s exit on a final topic, which corresponds to worldview and some of our mutual interests. What are your thoughts about what is consciousness?

Philip: Well, again, I’m going to go back to your membrane. My take on it—and I think we’ve got some of the same ideas embedded in here—is this. If you draw a membrane, if you draw a volume around a part of the universe and you say inside that part of the universe, there is something alive, there’s a something in there—and I think Karl Friston’s work is one of the good pieces of work in saying what I’m saying here—if you take a volume and you say there’s something alive in there, then what is happening inside that volume, inside that membrane, has got to be some kind of anticipation of what’s going on outside the membrane.

The thing that’s inside survives, or has some selfness to it for a period of time, only because it moves out of the way of the incoming bullets. The way that it does that is that it has a world model that says, I think every five minutes a bullet comes at me, and I shift to the left and it goes by. And so that kind of world model building, I believe, is fundamental to anything that has even a subjective structure over time.

Inside that world model, and this gets to consciousness, is a little voodoo doll, which is a recreation of the thing you think is you. You have learned that you can wiggle your fingers with great accuracy, and yet you can’t move the door that I see behind you in the room with great accuracy by imagining it. And so you take on an identity. You find yourself in that world model. And I think that consciousness has something to do with looking into the mirror of that self and sort of becoming aware that you are you and you are there and that the world is there.

As a kid, I had these lucid dreams—dreams where, weirdly, partway through the dream, suddenly, as a great shock, you realize, holy crap, this is a dream—so I can fly, and I can jump happily off this building with no risk to myself. I’ve had those dreams since I was a little kid. It turns out, when you research them, they get less and less frequent as you get older. Another bad consequence of getting older. But they gave me a very visceral sense of that moment of waking up, because when it happens in the dream, you suddenly are like, holy crap, I was just having a regular dream, but now I realize that I am here and that I, in fact, know that this is a dream.

And so that is my take on consciousness—the discovering of the self in the world model and then looking at it in a perhaps almost narcissistic way, like, wow, I’m there. And I think that’s my best take on what we call consciousness.

Jim: I like that. That’s good. Now do you think chimps or dogs have that kind of consciousness, or is that a human only thing?

Philip: I think chimps and dogs definitely do. I think that the model introspecting itself—which is really what I’m talking about, the model somehow in doing its model thing finds itself—yeah. And I think Hofstadter called that the strange loop. I think that strange loop probably happens a lot, certainly inclusive of dogs and birds and stuff like that. And I guess I’m not smart enough to characterize—I’m sure it changes very qualitatively as you change the scale of that model, but certainly in lots of places.

Jim: And that is the other famous Thomas Nagel paper—“What Is It Like to Be a Bat?” A bat has completely different sensory modalities. It could have the same basic structure of consciousness, but with very, very different contents and, probably, very, very different dynamics. Anyway, I want to thank Philip Rosedale for an extraordinarily interesting worldview conversation. In this case, kind of a double worldview conversation—his own and, as it turned out, the not-so-different worldview implicit in the godly powers of creating Second Life. So thank you very much, Philip.

Philip: Thanks for having me. Fun to be here, Jim. That was great.

Jim: Alright.