The following is a rough transcript which has not been revised by The Jim Rutt Show or David Shapiro. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is David Shapiro. David’s a longtime techie and AI guy. He’s been studying and fine-tuning LLMs since GPT-2. That’s pretty amazing. You know, I played with GPT-3 a little bit, but GPT-2, that takes you way back. And he’s a longtime automation IT guy, so he’s got some real valid background for what we’re going to talk about today. These days, he’s a thinker, a writer, and a video maker on topics around the impact of IT and AI on our future.
I first ran into Dave on Twitter, and his handle there is @daveshapi, and found him to be very interesting. I then discovered his Substack with the very original title of “David Shapiro’s Substack.” Well worth reading on all kinds of topics, by the way, and that’s mostly what we’re going to be talking about today. He’s also got a podcast on Spotify, and he has various other things going on—the busiest damn retired guy I’ve seen in a while. And you can check him out on his Linktree at /daveshap. So anyway, welcome, David.
David: Thanks for having me. It’s good to be here.
Jim: Looking forward to this conversation. The series that we’re going to be talking about today is basically pitched in long form across six installments that, when you compile them in a lightly formatted fashion, come to about 250 pages. It’s a lot of stuff, but it’s good stuff. And as always, I’ve read it all. The title of this series is “Post-Labor Economics,” parts one through six. I’d also suggest, as a nice warm-up or addendum, a more recently published Substack post called “You Should Let the Human Race Die Out,” which takes some of the ideas from the post-labor economics series and has some fun with them. So anyway, tell me a little bit about how you chose to move from being the guy that was annihilating jobs through AI and automation to the guy writing about the impact of it.
David: My original pivot from information technology and cloud automation engineering was inspired by the release of GPT-2. I very quickly realized that this is the way of the future. Every now and then, you get your hands on a new technology. For a previous generation, it might have been Internet or personal computing, and for me it was AI. And I saw this is very clearly the way of the future, and I wanted to get in on alignment, AI safety.
Like many people, I had the kind of naive belief that AI would inevitably kill everyone, and we needed to get as many smart people on the task as possible to ensure that AI didn’t do that. So I started fine-tuning GPT-2, GPT-3. Fine-tuning, for anyone who’s not familiar, is a way of shaping the behavior of a deep neural network model.
Fast forward a couple years, and the space got completely saturated by people that, in my opinion, have no business talking about AI safety—people with no education or experience to speak of with respect to artificial intelligence, and it got super crowded. And I also realized that universities and frontier labs and governments around the world were taking AI seriously enough that there’s nothing else I could contribute.
David: So then I spent about a year just kind of trying to figure out what to do next, and then I landed on post-labor economics. It’s an idea that I’ve been kicking around for a few years now. Whenever you’re a public communicator and you’re talking about AI, one of the biggest questions that comes up is, “Well, what about jobs? What is AI going to do to my job and my family, and what are you going to do about it?” There’s a lot of anxiety and, frankly, some anger out there. Some people would tend to shoot the messenger, but it was a good question. And like any good scientist or thinker, a good question demands action. So I spent the last couple of years refining this idea, coming back to it again and again. But this year, I really decided this is going to be my next big thing. And that led to the blog post series that you referenced, which was kind of a dress rehearsal for an upcoming book that I’m working on about the same topic. So that’s from there to here.
Jim: Yeah, it makes actually a lot of sense. It is kind of interesting. I’ve been following the AI safety space since about 2004, when I got involved, just on a friendly basis, with the folks who are now MIRI, and I’ve read all the books and all. But I’ve sort of kept my distance from it, and as you said, figured there are enough people worrying about it—many of them overly hysterical hand-wringers—that it will either take care of itself or it won’t. Nothing I can do about it. And there are still some risks, and I did do an episode on my take on the seven AI risks. But in the meantime, let’s assume that problem gets dealt with, and let’s think about the other implications, which are in some ways an AI risk, but maybe not. So we’re going to talk later about some of your potential solutions to post-AI economics. But before we do that, you know, this isn’t new. Right? There have been multiple transitions in how the human race has provisioned itself, the oldest one being the transition from forager peoples to agriculturalists. So tell us a little bit about the history of these kinds of transitions.
David: Yeah. So it’s really impossible to understand automation without understanding what we might call general purpose technologies (not to be confused with generative pre-trained transformers, though both abbreviate to GPT). General purpose technologies tend to reshape society, economics, and lifestyles. You could make a pretty compelling argument that going from hunter-gatherers, nomads, to settled agrarians was probably a bad choice for humanity. Zoonotic diseases took off, and chronic, RSI-type injuries took off as well, just because we’re not really meant to be farmers. Our bodies are optimized for traversing savannas and fording rivers and climbing trees, all in the pursuit of prey items and nuts and berries and following the herds.
Now, our ability to do that is remarkable. We’ve got these big brains. So for about twelve thousand years, we were agricultural societies. That became our main lifestyle, our main way of making a living. Fast forward to about two or three hundred years ago, after the Renaissance, when we had the printing press. In my personal opinion, it all kind of started with the printing press, because that democratized information and learning, and literacy rates shot up after that.
That’s really kind of the beginning of the modern cognitive revolution for about five hundred years. Took a little while for it to build up steam, but then we had the first industrial revolution, which was about mechanization, particularly around manufacturing. Second industrial revolution expanded with steam, and eventually internal combustion engines. Third industrial revolution was the internet, and before that digital computers and servers, and now we’re in the middle of the fourth industrial revolution.
When you look at every epoch that we’ve gone through, it comes part and parcel with lots of upheaval. The printing press led directly to many outcomes, such as the Lutheran revolution, the Protestant Reformation, Calvinism. It also contributed directly to the French Revolution, with the ability to print new ideas and spread them on pamphlets, basically the French equivalent of tweeting back in the day, or posting very revolutionary ideas on Substack. The French authorities would actually try to sabotage the printing presses by taking some of the key pins out of them, not realizing the pins were easy to replace.
As the state and the church lost control over information, that had pretty dire consequences for the status quo. Ultimately, though, it resulted in democracy and constitutional republics as we know them being formed. France had a few false starts. It wasn’t until the Third Republic that they really stuck the landing. The first couple of times fizzled out, and that’s a dramatic understatement.
But fast forward through the first, second, and third industrial revolutions, and we ended up with pretty negative outcomes. Children working in factories getting limbs mangled by the machines, people living in absolute squalor in both London and America, and other places that industrialized quickly. And as a counterexample, we also have places that failed to industrialize, namely czarist Russia. The aristocracy in Russia didn’t really have an incentive to industrialize. They wanted to keep people poor and uneducated because that’s just how things worked in Russia. They wanted to keep them all peasants.
Eventually, though, Russia tried to lurch forward. There’s the revolution against the czar, and then eventually Leninism, and we know how well that turned out: the Iron Curtain. And Russia is still paying the consequences, almost two hundred years later, of its failure to industrialize, because it has a very lopsided society. Likewise, China was a slow adopter. And so with the Great Leap Forward and the Cultural Revolution, China made a lot of mistakes trying to rapidly industrialize and rapidly reimagine society.
This is a very important lesson because whenever you have some revolutionary idea or revolutionary new technology and you say, “We’re just gonna completely rewrite the playbook of society,” generally speaking, millions of people die when you do that. So this is one thing that I am here to caution against because particularly in Silicon Valley and other tech circles, people say, “Oh, well, we can just reimagine all of society with AI and blockchain.” And while they certainly will play a part in building a new better futuristic society, engaging in what I call kind of a phoenix narrative or a phoenix fantasy of burning it all down and starting from scratch is not how you want to go about renovating civil society. So that’s kind of the broad arc of history, and we can unpack each of those layers as to how and why it turns out that way, the way that technology upends the status quo, and also why it sometimes doesn’t go well for people, sometimes for decades on end.
Jim: One thing I didn’t see in the essays, at least I don’t recall seeing it, is what we in the Game B world call the multipolar trap as the forcing function for transitions that we might not otherwise want to make. The idea of the multipolar trap, which is not quite obvious until you actually focus on it just a little bit, is that you get a group of people doing X, and they’re all living in reasonable cooperation. Some are winning a little, some are losing a little, but the competition is kind of fair. And then somebody switches to strategy Y, which gives them strong short-term advantage. But if everybody adopts Y, it fails Kant’s categorical imperative—everybody’s worse off.
As you said, one can argue, though I think we’ll accept the argument for the moment, that the transition from forager to agriculturalist might have been a bad one. The problem is if your neighbors switch to agriculture, they’ll have ten times the population density that you do. And they’ll also have periods of time when they’re not doing their agricultural work that they can come out and pillage your village, kill your men, and take your women. So you are essentially forced by the multipolar trap to either get annihilated and incorporated into the Borg or become agriculturalists yourself.
This happened again and again. One of the more famous examples was Japan waking up from its slumber and realizing, “Oh, shit. We’re at the total mercy of the Dutch and American barbarians.” They went into an amazing forced march copying the British, the Americans, and the Germans, and within less than a hundred years were a first-class industrial power. Whether that was right for them or not—ending up with Hiroshima and Nagasaki and most of their cities burned down by B-29s—is not clear. But they were basically forced to because otherwise they would have been conquered by the Russians or the Americans or the Dutch or somebody. This concept of a multipolar trap, I believe, is hugely important in understanding why these transitions happen even if in the long term, they may not be favorable. We should keep that in mind when we’re thinking about this AI transition.
David: I tend to think of that more through the lens of a Nash equilibrium. John Nash, the pioneering game theorist, posited that if you end up in an equilibrium state, no one is incentivized to change their strategy, because any unilateral change of strategy would be suboptimal. This is why many developed nations—America, China, Russia—have adopted an innovate-by-default stance. Because if you don’t, someone else will. This is the lesson you draw from historical Russia, historical China, and even pre-Meiji Restoration Japan: “Oh, let’s just maintain the status quo.” But in a complex adaptive system, if you’re not adapting, you’re dying.
I’m familiar with Daniel Schmachtenberger’s work and multipolar traps and that sort of thing. I personally feel like it’s an overcomplication of existing theory. But same difference: infinite games versus finite games, Game A versus Game B. When you look at geopolitics or societies as a complex adaptive system, adding a new technology, particularly a high-leverage general-purpose technology, changes the dynamics, whether that’s a new method of farming, metallurgy, or even just the stirrups that gave the Mongols a huge advantage over everyone they conquered.
This creates new incentive structures and opportunities. If you’re not going to arm up, your neighbor is. If you’re not going to produce more with your land, your neighbor’s going to take your land and produce more on it if they can. Another good way of understanding this is offensive realism, a geopolitical theory pushed by John Mearsheimer, which breaks it down into simpler brass tacks: what’s the steel production of a nation? What is the standing army size? What is the amount of energy they produce? From his perspective, that’s a good way of predicting who’s going to go to war with whom. If you have enough potential energy to conquer your neighbor, you’re going to if you can. But in a larger multipolar world, it’s not just player A versus player B on a chessboard—it’s eighty different directions, each with different strengths and weaknesses. Technology can upend many of those kinds of status quo and equilibria.
Jim: But let’s move then into something that’s starting to now go down the hill towards the core of the argument here, which is what you call the logic of labor substitution.
David: Basically, we’re now in a free market capitalist paradigm, and the particular flavor for the last four decades, almost five, has been neoliberalism, a particular subset of capitalism. The entire idea is that free markets will incentivize innovation. In the same way that nations or tribes would compete for dominance, the idea is, well, you just turn that arena into businesses, and businesses compete for dominance.
And therefore, in an age where you have the ability to automate away human effort (in economic terms, the definition of automation is just labor-saving technology), when you have the option of using labor-saving technology, you have an imperative to do so. That imperative is: if you want to compete with your peers in the industry, one way to undercut them is to offer the same goods and services at a lower price. And one of the best ways to do that is to lower your labor costs, because labor costs are often one of the biggest, if not the biggest, economic inputs to whatever goods and services you’re producing.
Then of course, you have shareholders, and the shareholders want to see better quarterly statements, and layoffs always make stock prices go up because it’s assumed that layoffs mean you’re going to have fewer billables. Your payroll is going to shrink, and therefore more money for the investors, more money for the shareholders. So you have this really powerful baseline, almost biological imperative—it’s a biological drive if you think of a corporation as a living entity—to trim the fat as much as possible.
And you layer in more and more automation technologies, starting with factory assembly-line robotics back in the fifties and sixties. Then you add in database technology in the sixties, seventies, and eighties. Then you add in the Internet and personal computing, then back office automation software, and finally, artificial intelligence. What we’re seeing is just a continuation of the same decades-long trend, almost a century now, of adding more and more business automation into the mix.
So when people say, “Oh, well, this time is no different,” I tend to agree. It’s more of the same. It’s just the same trend has been playing out for a very long period of time, much longer than many people realize. And the ultimate goal, or maybe attractor state—to borrow some of the multipolar theory—the attractor basin that we’re heading towards is companies with near zero employees, asymptotically approaching minimal employees, which would be zero in many cases.
And now, of course, there’s still a lot of skepticism. Is AI even that good? Are robots that good? But we already see today that there are startups, unicorn startups reaching valuations of over a billion dollars with fewer than two dozen employees today. That was simply not possible a few decades ago. You needed the infrastructure. You needed the headcount, the manpower, the capital investments, and then all the lawyers and administrative tasks to manage that much capital.
So today, we’re seeing capital concentration and capital intensification; those are two economic ways of characterizing it. Capital concentration means that more and more capital, which is valuable assets of any kind, ends up in the hands of fewer and fewer people. Capital intensification means that in order to make money, you focus more on capital inputs rather than labor inputs. And this is exactly the trend that we’ve seen for about the last six or seven decades: the amount of labor required to make money has gone down, which is exactly what you expect to see under automation, and exactly what you want to see if we’re heading towards a fully automated luxury space communism future, if that’s indeed what we’re building.
Jim: Yeah. And indeed, the math is such that the move gets made, and this is where people often miss it, when the capital cost plus the operations and maintenance costs beat the cost of the labor being substituted. And typically, and this is the scary part, that comparison is also modified by quality. And often the quality is higher on the automated side than on the human side. You know, the auto industry, especially the American auto industry, which had gotten lazy and sloppy, discovered their cars were a lot better once they automated. So even if you were right at the margin on capital cost and O&M versus labor costs, but won on the quality side, the move would still be made. And then as we move into AI, and this is something very, very important, the capabilities of large language models in particular, and probably other kinds of AI soon, are improving while their costs fall at something like a triple exponential. So our usual generational time frame for major moves in technologies, in terms of capital costs versus labor costs, is going to be shockingly compressed this time.
David: Yeah. And while that is all true and very exciting, it is important to remind people that what we’re automating right now is essentially cognitive inputs. We’re approaching a period of cognitive hyperabundance where, as Sam Altman put it, we’re gonna have intelligence too cheap to meter. But intelligence is not the only economic input to a lot of business activities. There are still heavy materials: steel, concrete, rare materials, lithium, the expensive materials needed to manufacture silicon wafers, that sort of thing, and the energy that goes into it all. So there are many economic inputs beyond just the intelligence.
This is one of my criticisms, particularly of the more utopian noises coming out of the tech world and Silicon Valley, which is, “Oh, well, once you have AI, you can automate everything and things are gonna be free.” That’s not how economics works. That’s not how anything works. The reason that I bring this up is because once you alleviate any single bottleneck, you just find whatever the next bottleneck happens to be. And in many cases, cognition is not the chief bottleneck.
For instance, for houses, materials are the vast majority of the cost. Wood is a heavy material that grows in certain places, and you have to move it. That means it requires energy and fuel and rail lines. And brain power is not one of the limiting factors of building a house. To build a good house, you only need a few really smart people: the architect, the foreman, the general contractor. So maybe three smart people. Certainly we could do with more of them. But people sometimes get starry-eyed and say, “Oh, we’re gonna have the robots building robots and the robots building the factories to build more robots.” Those robots all have actuators and sensors and batteries, and they require GPUs and all kinds of other inputs. So the robots aren’t free. Therefore, the factory that they built isn’t free.
Jim: As we remember from Economics 101, the definition of economics is the allocation of activity and assets under scarcity. Right?
David: Exactly.
Jim: And so the question is where the scarcity will be. And you make a good point: it won’t totally go away, but it’s going to retreat, probably from labor, at least in most cases. You talk about the epochs of strength. I live deep in the mountains, where the roads were really difficult to build, and there are pictures from the 1860s and seventies of armies of people with picks and shovels building crappy one- and two-lane dirt roads over the mountains. Today, a big old dozer does that in about a week. And mining, again, used to be one of the biggest industries in America. They talk a lot about coal miners these days—all the poor coal miners. You know how many coal miners there are in the United States today? Fifty thousand. Hardly any. Because the strength part of the work has long since been automated, at least since the later twentieth century. And the dexterity part is next, and you talk about that. Industrial robots came online in the sixties initially, and it’s been accelerating exponentially since.
David: But one of the—
Jim: One thing I’m quite interested in, and housing is what triggered this line of thought, is the humanoid robot combined with inexpensive cognition, which may make a big impact in the dexterity space that we haven’t seen to date outside the controlled environments of the warehouse and the factory. Humanoid is important because our built world is designed for things about our size and with about our capability for motion. So if you can actually have a humanoid robot that moves like we do and fits in the spaces that we do, it can do a lot more things than something with wheels or a big blocky thing or a big long arm, the kind of things you see in the factory. And so if you had a humanoid robot with a high level of cognition, a whole lot of the labor involved in building a house could indeed be automated.
David: First and foremost, yes. That is the reason the humanoid form factor is being explored, with many billions of dollars being poured into it across America, China, and Europe. A lot of people see the economic value of saying, “Hey, we can have a laborer here whose cost amortizes down to about one to three dollars an hour,” which is on par with the cheapest labor markets on the planet. If making labor arbitrage go away is realizable, then whoever can make it go away is gonna make a lot of money. That’s why people like Elon Musk and Goldman Sachs estimate, and I don’t know the number off the top of my head, that the future total addressable market of humanoid robots is measured in the trillions, if not tens of trillions of dollars. And I would tend to agree with that in the long view.
And what we’re kinda touching on here is that there are four basic offerings that human bodies have in terms of economic value: strength, dexterity, cognition, and empathy. Everything that humans do for work comes down to those four primary traits or attributes, or subsidiaries of those. So for instance, you might be thinking, “Oh, well, what about creativity?” Creativity is a combination of cognition and empathy. Or what about negotiation and sales? Negotiation and sales is empathy plus some cognition.
Jim: Or anti-empathy, right?
David: Depends, yeah.
Jim: Psychopathy sometimes, right? I mean, my first job out of college—
David: Was as a car salesman. Oh boy.
Jim: Yeah. Which was invaluable experience. Eleven months, I made enough money to pay off my student loans.
David: And I can tell you empathy was not the essence of what we were doing. I would say that you do need a certain level of empathy so you can read people and figure out what makes them tick, and there is such a thing as dark empathy that is out there in the literature today. But to your point, as we look at the progress that machines are making, there are blog posts and things going around. Just saw this morning, 40 percent of Americans trust AI chatbots as much or more than doctors. Therapists are starting to shake in their boots because they’re realizing that these new chatbots are spookily effective as therapists.
When you have a class of machines that are starting to encroach upon all four food groups of economically valuable activity that humans offer, one of the inevitable conclusions is, what if this encroachment continues for any length of time? Then it’s going to permanently eat our lunch. You hope and pray that maybe, as history has shown, technology often ultimately results in more jobs. Although that’s not really what happens. Technology is deflationary, which allows capital to be liberated to the production of other goods and services. But there’s no law of economics that goods and services must be rendered by human hands or human inputs.
If you can produce more goods and services, as you mentioned earlier, for much cheaper, and humans never enter the loop, that’s where the business imperative goes. One of the litmus tests that I use is if you can provide goods or services better, faster, cheaper, and safer with machines, that’s what you do. As you pointed out with auto manufacturing, the more we automate it, the higher the quality goes, the safer it becomes for workers because you don’t get crushed under a car frame or motor block. This ties back to why I say that what we’re heading towards is a zero-employee future, or as close to zero employees as possible, which is just an economic paradigm that’s difficult to really imagine when you have many centuries of companies and firms that have required thousands, if not millions of workers to render those goods and services.
Jim: And in terms of how far this will go, you do talk about some counterarguments, and you refute them essentially, what you called—
David: The glass ceiling examples, right, of—
Jim: I’ve been a power amateur user of these tools since GPT-3. A little bit with GPT-3, but when the public GPT-3.5 came out, I became a heavier user, hitting it at the API level from the get-go, and I could just feel them getting better, day by day, faster and faster. Kiro, the newest Amazon programming tool, is like, holy fuck. It has totally obsoleted Cursor, at least for certain classes of problems.
This week, I’ve been working on a long-standing scientific paper, and I’ve been using a mix of Anthropic Claude and Gemini and OpenAI to refine the ideas and argue with me and critique what I’m writing. And I go, motherfucker. This is like having a postdoc sitting on my shoulder helping me think through. And frankly, it’s more like having an army. I’m going to spin up four or five of them simultaneously on different questions. The work that it’s doing, yes, occasionally it hallucinates. Yes, occasionally it does the wrong thing. But let me tell you, having been somebody who’s had thousands of people work for me in my lifetime, humans ain’t all that reliable either and not even close.
My friend Peter Wang, within a couple of weeks after ChatGPT was publicly released in November 2022, said, very presciently I thought, that this was the Kitty Hawk 1903 moment. Ever since then, I’ve been trying to place where we are on the aeronautics timeline. Until lately, I’d been saying that we were in the middle of World War I, right, with Spads and Fokkers and that cool French design that allowed you to fire the machine gun through the propeller arc, which was an amazing innovation in mechanical engineering. I thought we were kind of there. But in the last six, eight weeks, it feels like we’re now heading towards World War II. But still a very long way from jets. Not even monoplanes probably yet, or just very minimal monoplanes. And it still feels like there’s a shitload of a way to go, and GPT-5 is supposed to come out this month. What the fuck? How good will that be? Anyway, with that little aside, tell us a little bit about the critiques of your hypothesis, the possible limits to cognition and robotics in particular.
David: Yeah. So the one that I kind of already touched on was the Luddite fallacy, which is the idea that technology will destroy jobs forever and the jobs will never come back. And that has been proven incorrect time and again throughout history, where, yes, there are far fewer weavers today than we used to have. There’s no loom operators anymore that we would really recognize, but those jobs have been replaced by sweatshops. So we still have people making fabric. There’s a lot more machines involved, and the labor is grueling. But you couldn’t have explained to a farmer or a weaver a century and a half ago, “Oh, yeah. Like, your job’s going away, but there’s gonna be this new job called a cloud engineer.” They would have put you in an insane asylum if you talked about engineering on clouds.
So in the long run, technology tends to open up new avenues, new platforms, new ways of making money. Spotify and YouTube—even a few decades ago, it would have been difficult to describe like, “Oh, I’m gonna be a podcaster or a YouTuber.” It’s like, “Oh, you’re gonna have your own TV show.” Sort of, but there’s no network involved. Well, how do you get rid of the network? There’s a lot of steps to explain there.
When you look out across history, you end up with this idea that maybe technology does not itself directly create new jobs. It does sometimes—cloud engineering, if you build a data center, you need people to run the data center. But then the software that runs in that data center allows for entirely new categories of jobs. But my contention, and increasingly of other people, is when you look at those four food groups that humans have to offer, which is strength, dexterity, cognition, and empathy—even if there are new things to do, there’s not necessarily any reason for a human to do them. Particularly if the machine is better, faster, cheaper, safer, and smarter than the human alternatives. And people are saying, “Well, what about liability? What about who do you sue if the machine messes up?” You sue the owner. That’s always been the case. It’s easy.
Jim: Yeah. I hear that argument about the regulated industries, and I go: total bullshit. Right? You can always sue the owner, and the laws will probably be tightened up about that—strict liability or not strict liability in different domains. They may change the rules to strict liability, for instance, for self-driving taxis. But I don’t see that as a challenge, frankly.
David: Yeah. That is not a major friction to what the technology can do in the long run.
Jim: Correct.
David: You know? And someone said, “Ah, well, the shareholders and the board of directors will always want a fall guy, so they’ll have a CEO.” I’m like, so the CEO will have the job of Barney Stinson—provide legal exculpation and sign everything. And his only job is just to be the whipping boy in case something goes wrong. There’s no point in having that job.
Jim: Oh, happy to do that job for 50 million a year. No problem. Right. Exactly. I’ll take my risk. Right? I’ll send—there you go. I’ll take my 50 million a year and put it in gold in a Swiss vault, under an anonymous Swiss account. And when the time comes for them to sacrifice me on the altar of convenience, I’ll say, have a nice life, guys. I certainly will.
David: I don’t see that really happening. That’s not realistic. So you might say, okay, well, what if everyone becomes a YouTuber? Well, we’re already looking at the data. And the data shows that the meaning economy, the experience economy, the attention economy—while these are growing, and growing rapidly, they are not growing as fast as other jobs are being encroached upon. So they’re not absorbing what I call the worker refugees. You know how we have climate refugees; we have labor refugees as well—people fleeing from artificial intelligence and robotics and automation. And by the way, as you pointed out, we’re not even at the Falkor level of AI agents yet. And when we get to that point, the rate of attrition from the labor force is going to go up. And there is no evidence yet that I have seen or found that new jobs are being created anywhere near fast enough to compensate. Now, you might say, okay, well, that’s a short-term trend. Maybe in the long term, as society rebalances, we’ll figure something out. That’s entirely possible.
Jim: But I don’t know about you.
David: I don’t know if society will wait twenty or thirty years to figure it out.
Jim: It may not have the ability to. Right? Because who will do the figuring? Right? If everyone’s disempowered. And I will point out, as I was reading this, I was thinking: one of the biggest refugee jobs now is OnlyFans. Right? And I would not wanna be the owner of OnlyFans these days. Quality AI-generated porn, I presume, exists today. I know the commercial models try to prevent it, but I presume by now, with all these high-quality open source models out there, some people have fine-tuned them up fairly well to produce porn. And again, on our aeronautics analogy, it’s like we’re at 1916: we went from 1903 to 1916—about thirteen years of progress—in two years, so about a seven-to-one ratio. Which puts us about a year away from having really high-quality porn from AI, and then fuck OnlyFans. Right?
David: I think OnlyFans just sold out. The founder probably saw the writing on the wall and said, now’s my time to exit.
Jim: Find a fool who thinks it’s an exponential. Get them to pay me accordingly. Right?
David: Right. I mean, it has the highest RPE—revenue per employee—of any company ever created: something like 20-plus million per employee. But to your point, it’s actually the payment processors who are gumming up the works of AI-generated smut. The payment processors, the Visas and Mastercards, don’t want that to be on their books, interestingly enough. And they even go after video games for the same thing—anything that’s remotely controversial.
Jim: Well, there’s an easy hack for that, which I’m sure will also happen—why not just use stablecoins to pay for that stuff? Of course, those idiots who created crypto have made it head-hurtingly hard for normies to use it. Why they’ve done that, I do not know. But if I weren’t old, fat, rich, and lazy, I would build a normie interface to stablecoins to do all kinds of stuff. Right? But nobody’s done that. Why? Assholes. Right? And I would base it in a place that was outside the jurisdiction of anybody. Right? And maybe on Patri Friedman’s sea steading or something, and then people could buy all the smut they wanted. And by the way, sex on the Internet ain’t a—
David: In 1992, at the very beginning, half—
Jim: Of the traffic on the Internet was smut images, still images. And in fact, the biggest company had started out selling NASA space photographs, and they had optimized their servers and their network connections. And there was a decent market for what people wanted—cool pictures of galaxies and things of that sort. And then one of their employees said, “I wonder what would happen if we put up naked pictures of broads?” Right? And the next thing you know, it was the biggest business on the Internet in 1992. And when I was CEO of Network Solutions in 1999, we estimated that probably at least 25 percent of the traffic on the Internet was smut, and I think that number is considered probably about right today. So the idea of smut on the Internet is not a new thing by any means, and it will find its way out. I mean, it’s the first thing that people are actually willing to pay for, which makes sense. What are the drives? They’re food, shelter, and sex, basically. And in the Western world, everybody has food and shelter. It may not be the shelter or the food they want, but they got it. They ain’t starving. But getting sex is still hard. Right? Still work. And so, of course, there’s gonna be automated porn. Anyway, sorry for getting a—
David: Well, and to tie that segue to the other kind of criticism—many people say, “Oh, well, AI can’t do x, y, or z yet.” And it’s like, having been in this space, and specifically the deep neural network transformer space, for six years now—2019 is when GPT-2 came out—whatever it can’t do today, just wait six months. We get a new generation of models about twice a year, basically. Or maybe once a year for the last year or so. But then there are lots of little interstitial, incremental improvements. And also, the market is broadening, meaning there are more and more competitors. So the number of offerings is going up for images, music, video, text—I mean, everything. We’re going multimodal right now.
Jim: And also keep in mind—this is what I tell executives to keep watching for—we have this exponential on models, and we have an exponential on hardware, but we have another exponential, which is agents, orchestration. Right? And that’s getting better very, very rapidly—probably faster right now than the models or the hardware. So you have these three exponentials working together. What the hell does that mean?
David: Yeah. So what I do every now and then is I look at it from a first principles perspective. Right? And first principles just means you look at the math, you look at the physics. The physics shows—I mean, we already know, because we all carry around supercomputers in our heads that run on about 20 watts—that we’re nowhere near the physical limits of what computation can do. Our brains are probably a million times more efficient than computers are today, if not more, watt for watt. Because every time we create a new generation of supercomputer, we actually increase the estimate of how much processing our brain is doing.
There was a time twenty or thirty years ago where we thought that the brain was doing about a petaflop. Well, now it’s like, well, no. That’s underselling what the brain is doing by a long shot. So it’s entirely possible that our brains are many millions of times, if not billions or trillions of times more efficient, per unit of output, per unit of cognition versus input electricity or heat.
That’s what we call the Landauer limit. The Landauer limit just says, what is the minimum amount of energy it requires to process one bit of information. There’s a few other ways of looking at it: what’s the maximum amount of computation that a kilogram of matter can do? What’s the minimum amount of energy that you can use to store a bit or change a bit? Those sorts of things. And all of those show that we are many orders of magnitude above what is actually thermodynamically possible.
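For reference, the Landauer limit he’s describing can be written down directly; the figures below are standard physics, not numbers from the conversation:

```latex
E_{\min} = k_B T \ln 2
         \approx \left(1.38\times10^{-23}\,\tfrac{\mathrm{J}}{\mathrm{K}}\right)
                 \times 300\,\mathrm{K} \times 0.693
         \approx 2.9\times10^{-21}\ \mathrm{J\ per\ bit\ erased}
```

Conventional logic today dissipates very roughly $10^{-15}$ J per operation, which is where the “many orders of magnitude” of thermodynamic headroom comes from.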
And so when people say, “Oh, well, Moore’s law is gonna give out one day.” Yeah. One day. But it’s going to continue for at least thirty to seventy more years at its current pace before we even approach some of those limitations. And by the way, if it only continues for another ten or twenty years, that’s game over because machines will have surpassed human intelligence and efficiency by then by a long shot.
So when you look at all of those from a first principles perspective, there’s no reason, there’s no physical law or constraint that says it’s not possible to make a machine that is smarter and cheaper and smaller and more efficient than a human brain. The math just doesn’t add up. And then likewise, when you look at robots, robots are able to be built and constructed without the constraints of cellular biology.
Now, that’s not to pooh-pooh cellular biology. We have the most efficient information transcription possible. Every single one of our cells has its own code, stored as basically a monofilament strand of DNA, and it’s transcribed via RNA—extremely efficient. The bit-per-bit cost of cellular respiration is very low. But it’s also error prone, and we had to evolve. Right? We had to survive. We had to evolve. We could never just stop and refactor human evolution.
Jim: It’s kludges upon kludges upon kludges. It’s amazing that it works at all.
David: Right. Yeah. There’s lots of trial and error over billions of years. So robots, however, we have the ability to just stop and go back to the drawing board and redesign them. And when you look at the energy density of solid-state batteries, you look at the computation density of ASICs and thermodynamic chips and quantum chips and photonic chips that we’re working on building, when you look at the material quality that we can create for actuators and sensors and that sort of thing, there’s no reason that humanoid robots will not also surpass us in all of the ways that count, at least from a labor perspective here very soon.
They’ll have better perception. They’ll have better fine motor control. They’ll be stronger. They’ll be lighter. They’ll be able to operate longer autonomously than we can. Because, yeah, a human with a lunchbox can work in the woods all day with a few pounds of food and a few pounds of water. But there’s no reason that robots will not surpass that within probably ten to twenty years. It’ll be a little while, but they don’t even need to surpass us. They just need to be cheaper.
Jim: Right. This is the capital cost versus O&M versus labor cost versus quality. Right? When that math works, then it makes sense. The same way self-driving cars don’t have to be perfect. They just gotta be clearly better than humans.
David: Exactly. And better, it has many dimensions. You know, I also tend to think of it as CapEx versus OpEx. Right? Human labor is operational expenses. But if you buy a machine, it depreciates and that’s a tax write-off. I kind of think that the numbers will be compelling. And another thing that a lot of people miss out on is, “Oh, well, people will just get other jobs.” Right? If you lose your job as a developer, go be a plumber, go get an HVAC cert. What happens when there’s too many HVAC technicians or too many welders? Their wages go down too. So you’re looking at the supply and demand. When you look at labor supply versus labor demand, you’re gonna see a lot of downward pressure on wages very soon.
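The CapEx-versus-OpEx point can be sketched as a toy calculation. Every figure below—robot price, useful life, operating cost, wage, tax rate—is an invented placeholder for illustration, not data from the conversation:

```python
# Toy CapEx-vs-OpEx comparison with entirely made-up numbers,
# just to show the shape of the argument -- not a forecast.

def annual_cost_of_robot(capex, useful_life_years, opex_per_year, tax_rate):
    """Effective annual cost of a purchased machine: straight-line
    depreciation (which is a tax write-off) plus operating expenses."""
    depreciation = capex / useful_life_years
    tax_shield = depreciation * tax_rate  # depreciation reduces taxable income
    return depreciation + opex_per_year - tax_shield

# Hypothetical figures:
robot = annual_cost_of_robot(capex=100_000, useful_life_years=5,
                             opex_per_year=5_000, tax_rate=0.21)
human = 50_000  # fully loaded annual wage, also hypothetical

print(f"robot: ${robot:,.0f}/yr vs human: ${human:,.0f}/yr")
# → robot: $20,800/yr vs human: $50,000/yr
```

With these invented numbers the machine wins on cost alone, which is the point: it doesn’t have to be better, just cheaper.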
Jim: Let’s stipulate that all this shit’s gonna happen, that there are no barriers. There is no bottom. It’s gonna just go whoosh. Whoosh over twenty years. Right? Whoosh on the cognitive side over three to five years. Whoosh on the physical side. I think your ten to twenty is probably good. Ten is probably where it starts to really bite when the humanoid robots are really, really good, and really, really cheap. So let’s stipulate all this. Now let’s say what the fuck does this mean? Right? One of the things I think that you did here, which was interesting, which I haven’t—
David: Seen too many other people focus on, is the importance of labor as a pillar of society. You know, in our current world, labor is almost an irreplaceable aspect of our social coherence and our social operating system.
Jim: Talk about that a little bit.
David: Yeah. So we gotta unpack a little bit. And the shortest way of explaining this is anytime in history when labor has been cheap, human life has also been treated cheaply. I mentioned much earlier about Tsarist Russia, where they wanted to keep the peasants poor and disempowered. Why? Because a poor uneducated peasant was a docile worker.
Likewise, throughout much of Chinese history, humans were basically considered a perennial crop, where if a warlord wanted to seize something or conscript a bunch of cannon fodder, he could. Why? Because the Yellow River and the Yangtze River produced so much fertility that you just wait a few years and you got a whole bunch more humans. No management required.
So labor gluts tend to be bad for civil society. And we actually see this echoed in Europe as well, and Central and South America when particularly the conquistadors showed up. Cheap labor means that society becomes unbalanced and it favors the people in power. Whether it is monarchy, whether it is the church, whether it is unbridled capitalism, it doesn’t really matter. Whoever’s in charge benefits from cheap labor.
Labor scarcity, however, gives people leverage. The bubonic plague that ripped through Europe a few times dramatically reduced the labor supply. So there was a huge labor scarcity, and that actually gave people the prototype of collective bargaining rights. You could say: there are only so many of us workers, which means you have to pay us more, or we can refuse to work for you. And that’s why democracy as we know it really germinated in England. Because in England in particular, due to island dynamics, it was much harder to import laborers.
Now that’s not to say that all of Western Europe didn’t benefit from labor scarcity. Fast forward a couple centuries, and then you have collective bargaining rights, you have the formation of unions, where the kind of underlying principle is you need to have a credible threat or a credible exit. A way of telling the elites, the owners of production, you’re not gonna make any money until you come to the bargaining table. The more aggressive way of saying it is you can coercively extract concessions from power when you have bargaining chips.
Now, the reason I’m bringing all this up is because the combination of AI and robots means that we will never have labor scarcity ever again. And that is a good thing from a total productivity of society. GDP is gonna go parabolic or hyperbolic through the roof. At the same time, it means that individual voters and individual workers are gonna have even less leverage to get what they want.
So there’s a few basic levers of power, pillars of power in society for the modern operating system, the modern civic operating system. One, labor rights. You have the right to quit. You have the right to withhold your labor to negotiate for better wages. And to a lesser extent, lately, you have the right to form a union. Number two is property rights. Acemoglu and Robinson in the book “Why Nations Fail” characterized very extensively why property rights are really powerful for innovation and a civil society. And that’s nothing new. It goes back to ancient Athens and ancient Rome where the idea of property rights as a cornerstone of a civic society is important. And then finally, democratic rights. The right to vote and influence the policies and the state apparatus that influences everyone.
However, as mentioned, particularly when you look at Soviet Russia and pre-modern China, without labor rights, the other rights erode. Because who cares about your property rights if human lives are cheap? Who cares about democratic rights if human lives are cheap? So this leads to what I see as the biggest problem. And this is where there’s a lot of agreement between some economists and some of the tech people out of Silicon Valley: we’re headed for dystopia. And you know, it’s not just money, it’s power. Without power, you end up completely disempowered, by definition. And if we get stuck in that attractor state, then it could be a very long time before anything changes. And that’s kind of what the entire cyberpunk genre has been talking about for the last forty years.
Jim: Yeah. Indeed. And in fact, in my analysis, which I call “In Search of the Fifth Attractor”—and I did this back in 2015—
David: And had been kind of one of—
Jim: The roots of the Game B analysis. One of the attractors I called out there was neo-feudalism. Now we call it techno-feudalism. And when labor has no power at all and where money has been let entirely off the hook in 1971 when you abolish the gold standard, the combination of very big business and very big finance will usurp all power—
David: Unless something’s done about that.
Jim: Yep. Absolutely. And that’s essentially what neoliberalism is. In theory, democracy can be a strong hedge on money power and business power, but the neoliberal hack has essentially allowed money to subvert government, subvert regulation, subvert the formation of ideas through media, et cetera. So we’re kind of fucked, at least if this trend continues. So what do you think are some additional ways that we could have power? You put forth a fourth pillar of power.
David: So this goes back to the idea of stablecoins and crypto and blockchain and that sort of thing. We need a credible replacement for labor power. Labor is inalienable from the human body. It’s perishable, and so on and so forth. There’s many characteristics of labor that make it very unique. So we need something just as unique, maybe just as exotic to replace that pillar.
It could be a way of creating a credible exit or a credible threat to productivity, which could then be in the form of algorithmic rights or blockchain-based rights and cryptographic-based rights. One of the underlying ideas or kind of inspiring ideas is the concept of economic withdrawal. Economic withdrawal is saying, “You know what? I don’t care if you own the means of production. I’m just gonna boycott you. I’m gonna strike. I’m gonna boycott. I’m just gonna not participate. So you don’t get what you want, and I can afford to do something else.”
Now, this requires a massive amount of coordination. To your earlier idea of replacing the payment processors with stablecoins—that’s a perfect example. And it’s why one of the things I think the current American administration is doing well is banning the idea of central bank digital currencies, which would give too much surveillance and too much veto power to the state. Instead, they’re creating a crypto council, which is basically asking: how can we enable cryptocurrency and other blockchain-based technologies to develop in the marketplace? Which I tend to be in favor of, the more I read about it.
That gives us a credible exit. It says, “You know what? We can actually circumvent the Fed. We can circumvent the payment processors.” We can have what I call in my pyramid of power, “freedom to transact.” Freedom to transact basically says, “You know what government? You can’t tell us what we can and cannot buy.”
So that’s one example of where these algorithmic rights, these cryptographic rights might come in. Another example is records and identity. All over Europe, they’re experimenting with self-sovereign identity, meaning you own your credentials, not the government. Another one is direct democracy in Estonia using your phone, because your phone has lots of cryptographic stuff built in—biometric sensors, microphones, cameras. You can verify your identity with a phone. And that allows them to vote directly, which means that elections are not contested as much because all the information is there transparent for everyone to see and scrutinize.
In Georgia, the nation, they are putting land records on blockchain, meaning that corruption is much harder to get away with. You can’t hide things in deals. You know who bought what and when, and everyone knows who owns it now. There’s a book called “The Bitcoin Standard,” which has a good walkthrough—to your point about going off the gold standard—as to why that was a bad idea. We don’t have hard money. We don’t have sound money. But instead of tying value to a rare precious metal like gold, we could tie it to proof of work, proof of stake, that sort of thing.
So all of these are ideas that are kind of coalescing. And it’s a shame, to your point earlier, that a lot of the people in the crypto space are grifters.
Jim: Ninety-eight point five six seven percent or something like that.
David: The vast majority are grifters, trying to hawk NFTs and meme coins and those sorts of things. And yes, it is not user friendly yet, which is a big problem. But when you look at what this technology has the potential to do, it’s sort of like the potential of the wheel or the airplane. Once you put humans airborne, there were of course military leaders at the time who said, “Oh, airplanes, that’s just a rich man’s toy. It’s never gonna be useful for military purposes.” Then they started putting guns on them and dropping bombs out of them. And they said, “Oh, well, if you can fly over our trenches, we need to be able to shoot those planes down.” And away it goes into the Nash equilibrium of escalation.
At the same time, I think that we’re starting to enter—people are starting to realize we need something other than labor. But it’s such a different way of imagining a negotiation of power, and it’s such a different way of imagining organizing a society that when I talk to people, they sense that something is wrong and that we need something else, but the solution is not obvious. And it’s not even obvious that blockchain or crypto could be part of the solution to many people.
Jim: And I will say, I’ve been somewhat critical of crypto. I’ve identified a few real projects and backed them. But when I put this out, some people actually developed an alternative, which failed. I do think the failure of crypto is the insistence upon radical trustlessness. Humanity has never been based on radical trustlessness. One could actually make crypto that was efficient and worked and was easy if, instead of insisting on the utterly arcane, though brilliant, math of radical trustlessness, you had a cooperative set of servers—maybe 100—with trustworthy people managing them collectively, with a design such that no coalition had any power to coerce. You could build a system of stablecoins quite simply. Without the trustlessness part, you’re just talking about a ledger, and a ledger is 1968-level technology. But the absolute religious insistence on radical trustlessness has, to my mind, forced crypto into either long-term slowness with eventual brilliance, or an evolutionary dead end, and I’m not sure which.
David: I can definitely see both sides. Reputation and trust are critical bedrock factors for humans to transact. What I mean by that is, do you know who you’re dealing with? Know your customer. That’s kind of built into most businesses today. To add on to your criticism of the insistence on radical trustlessness, because I don’t think that’s the way to go, many people in the Web3 or crypto space insist that only digital things are real. They don’t care about real estate, real assets, or the energy inputs or even the servers that it runs on. I said, well, if you just hand-wave away most of human activity and most of the economy and only care about cyberspace, you’re not going to get too far as an ecosystem. There are definitely problems with it as it is today. But when I’ve done my research and looked at where stablecoins have been implemented around the world, where records and trust-based systems have been implemented, the new rudiments of the social operating system are being built already. Now, will it be enough to replace the labor wage negotiation of our current social contract? That remains to be seen, but I think it is directionally correct and it is certainly a component of the future puzzle.
Jim: At least it provides a set of pieces to build some alternative. And what the alternative is, I think, is not yet clear. I would also suggest that, yes, it’s all nice to be able to have stablecoins to buy things. If you don’t have a job, where are you going to get your money? So it’s nice to have, but it doesn’t really address the fundamental problem. So we better now turn to essentially the picture or sketch of a possible solution that you lay out, which basically consists of power and prosperity. By the way, people, there’s a lot more in these series of essays that we’ve jumped around and picked and chose which I thought were the high points. Now let’s get to your description of the pyramid of prosperity and the pyramids of power.
David: The pyramid of prosperity is a very simplified model of what we’re going to build. The first-order problem is AI is going to take our jobs. We need money. If households and individuals don’t have spending money, aggregate demand collapses, the economy grinds to a halt, and nobody knows what to do. What there is broadly already agreement on is that we need new ways of economic participation. Right now, wage labor negotiation is the primary distribution mechanism for most people.
I break it down into five layers, starting from the bottom up. The bottom layer is what I call the universal—universal basic income, universal basic services. Universal basic capital is becoming popular now where you have baby bonds and those sorts of things. Sovereign wealth funds, where everyone gets a piece of the pie and it is mediated by government action, whether it’s the federal government, an international government body, or state government.
For many people, the conversation stops there. They say this is enough. And I say that it’s not. It’s necessary, but not sufficient. One of the reasons it’s not sufficient is because you don’t want to be wholly dependent upon Social Security and Medicare, as the government can change its mind later. If you are entirely beholden to the government for handouts and allowance, you are in a highly precarious space. So then you’ll want to look for market-based solutions and property-based solutions. This is where a lot of economists who are looking at this problem say one of the clear solutions is we have to broaden property-based incomes. So the next four layers are more based on market-based solutions. Layer two is collectively owned public assets—that could be urban wealth funds, a national wealth fund, but it can also be things like goods held in common, carbon taxes, land value taxes, spectrum auctions, those sorts of things. All kinds of options exist at this level.
Jim: There’s another one too, which you mentioned just in passing, and that is land—the Georgist model. Right? Hong Kong, for instance—before Hong Kong was messed up by the Chinese—land was one of the main assets funding the whole state, and Hong Kong was able to operate on a 15 percent tax rate.
David: Yeah. So land value tax, LVT, basically, one of the ideas there is that it funds the government. It doesn’t necessarily pay people, but of course, if—
Jim: But it could. You could use it as a pay-in to a social dividend essentially. Right? The same way Alaska does the oil royalties.
David: Right. Exactly. And the idea there is that it incentivizes whoever owns the land to either sell it if they’re not going to do anything useful with it, or do something useful with it that produces value. In places where that has been implemented, things like housing prices usually go down very quickly, just as an example.
And by the way, some of these ideas are not necessarily redistribution of just money. Some of it is pre-distribution of value. Meaning that, let’s say you have a collectively owned public utility of a water company, trash company, solar farms, those sorts of things. It’s not necessarily going to pay you money directly, but what you can do is you can get those services for cheap or free. So you’re effectively being subsidized by those activities that are either held universally or collectively publicly owned.
Layer three is collectively owned private assets. So this is things like cooperatives, trusts, and DAOs—decentralized autonomous organizations—which have been legalized in Wyoming now. I think Wyoming is the only state, though I’m not sure. And I think Switzerland has also legalized DAOs. So the idea here is a more conventional approach to saying, hey, I own shares in something that’s valuable—whether it’s fractional ownership of a robotaxi, fractional ownership of a bank, or whatever. Collectively owned private assets really aren’t going to change, except that the management of those assets can be moved to AI and robots once those are better at managing them than humans are. So rather than having to find a CEO or a board of trustees to manage these things, you just let the AI manage it.
Number four is conventionally owned private assets. This is stuff that everyone is already familiar with: things like rental properties, businesses, stocks, bonds, that sort of thing. Nothing really changes there, except that, again, a lot of the benefits from those are going to be derived from automation technologies.
And then finally, residual wages. I’m very bullish on AI and robots removing the need for most human labor, but there are a few job categories that will probably stick around indefinitely due to market forces and human preferences. Really, that’s what it comes down to: in some cases, people are going to want to hear Jim Rutt talking about philosophy and technology forever. Maybe. We’ll see. Hopefully.
But then you say, okay, Dave, how does that actually solve the problem? Because if universal basic income is only basic, how do you actually participate if you don’t have any money to start with? And the idea is that you create a higher floor, and that floor actually rises over time, particularly with those collectively owned public assets. Think of it like an endowment. Your local town, your county, the state, the national government, they all can invest in stocks and bonds and other appreciating assets, and the benefits from those are then redistributed or pre-distributed via shares to citizens. Meaning that even if you don’t lift a finger, you will automatically have a stake in the growing economy in the future.
Now that is of course a very grossly oversimplified version of the model. But that’s the idea: you create an entire broad menu of many interventions, all of which have been piloted around the world. Some of them are in production in various places, though not necessarily globally in a unified manner. But you do all of those, and you move to that pre-distribution and redistribution model, where property participation is broadened rather than relying on wages as the primary source of income for your average household.
Jim: Alright. Let’s go through them one by one. That’s a few comments and questions on each one. The universals, UBI, universal basic services, and basic capital, how’s that funded? If nobody has a job, where’s the economic energy come from to fund the universals?
David: One of the simplest ways is through taxes. If you tax and redistribute. One criticism that some people have is, well, what if you just print money? Then you have inflation, and that’s certainly a risk factor. But what I’ll point out is that cities, counties, and states don’t have the ability to print money. They have to either take out debt and distribute the proceeds from that debt, or ideally, they tax something and then redistribute those taxes. That can be land value taxes as an example. You can levy robot or automation taxes against companies that use automation heavily as other examples.
What I’ll point out though is that the amount of wealth that we’re going to generate is not necessarily the problem. The problem is more allocation—how do you decide who gets what? Because right now, we decide who gets what through property rights and labor rights. If you build and own a company as you have, you get the lion’s share because you took on the risk and organized that labor and capital, and you produced a lot of value. People are still going to be able to do that. There’s always going to be opportunity for someone to find efficiencies or market niches that are not being served or could be served better. Of course, when everyone is an entrepreneur, the competition is going to be more fierce.
Some people think that you should just have the AI manage the entire economy in the future, which I think is probably a bad idea. We might get there one day, but it’s going to be through an organic evolution rather than a very specific economic mission.
So who pays for it? Taxes, or you can print money. One thing to keep in mind is that job loss is deflationary. As people face more economic uncertainty, they spend less money. If they don’t have any income, they have less money to spend in the first place. So in order to keep inflation where you want it, rather than the Fed printing money and handing it to the banks, maybe you just do it through universal basic income, the same way that America did during the pandemic with the stimulus checks.
Basically, to keep the economy growing, you keep the stimulus checks. Eventually, ideally, people would probably prefer it to be a regular thing, where you get a stimulus check every month, and it’s calibrated to keep inflation going in the right direction. Wealth taxes, where you tax static wealth at a relatively modest rate of a half percent to one percent, could keep the velocity of money up high enough as well. There’s not going to be one silver bullet, one panacea that’s the source of all this redistribution. The wealth generated isn’t the bottleneck. The scarcity is in how you reallocate this excess wealth that’s generated.
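(An illustrative aside: here is a back-of-the-envelope sketch of the wealth-tax stream David mentions. The wealth base, tax rate, and number of recipients are assumptions chosen for round numbers, not figures from the episode or from real data.)

```python
# Rough sketch: what does a modest wealth tax yield per person per month?
# All inputs are assumed for illustration.

taxable_wealth = 150e12   # assumed static household wealth base, dollars
rate = 0.005              # 0.5%, the low end of the range mentioned
adults = 260e6            # assumed number of adult recipients

annual_pot = taxable_wealth * rate          # 7.5e11, i.e. $750 billion/year
monthly_dividend = annual_pot / adults / 12

print(round(monthly_dividend, 2))  # about 240.38 dollars per adult per month
```

Under these assumptions, a half-percent wealth tax alone funds only a modest monthly dividend, which is consistent with David's point that no single mechanism is a silver bullet and the streams have to stack.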
Jim: You mentioned capital taxes. That’s probably going to have to be a big part of it, because wages and income are going away, and those of us who played the business game know it’s relatively easy to manage your apparent income. And we know how little of our tax comes in from corporate taxes, even though the corporate tax rate is supposedly relatively high. You’re going to have to go to capital taxes and/or capital appropriation. That may be another way to do it. For instance, one could imagine any company doing over $10 million a year in revenue having a percentage of its ownership nationalized. Don’t nationalize the company, because we know governments suck at managing businesses. But suppose that once you get to a billion dollars, 60 percent of the company is owned by a national trust. I think you’re going to have to focus more on capital, because that’s where the transformation is going on: the capital intensification of the economy. At some point, the labor component may be trivial.
David: I do tend to agree with that. And the idea of capital appropriation, where you just say—I think it was New York, or maybe the federal government, looking at requiring companies over a certain size to all have an ESOP, an employee stock ownership plan, which I think—
Jim: Is a great start. Multipolar trap won’t work until it’s nationwide because it’s real simple to move from New York to California.
David: Yeah. The way that I kind of think of it is pressure gradients. If you have too much capital concentration, then you have a high pressure vessel, and that pressure wants to go somewhere else, and you just have to give it a channel to do so. But of course, if you poke too many holes in a pressure vessel, it loses all of its pressure. So that’s a physics analogy.
Jim: And of course, that idea of appropriating capital, slices of capital from larger companies, gets into your layer two, which is collectively owned public assets. At some point, maybe the public owns 90 percent of these mega companies. Right? Ninety-nine percent. Frankly, owning one percent of NVIDIA is worth what, ten billion dollars? That’s enough to motivate a management team to build the next NVIDIA and to continue to work there, probably, even for the five people running NVIDIA and directing the robot chain. So maybe there’s a gradual sliding movement towards public ownership, but not public management, of business. It might be a way to do it.
And in terms of collectively owned private assets, I do like that one in particular, and you do point this out. It’s a quite subtle thing: Colorado has recently reformed its cooperative law to make it actually feasible to build real businesses in the form of cooperatives and bring in outside capital, which used to be extremely difficult to do. At least in the transition period, that may well be an alternative. Although, again, it’s multipolar trap time, and it’s harder to get investment dollars, a transitory intermediate form may well be new businesses set up as hybrid coops.
The interesting thing about coops is that they are one for all and all for one in terms of ownership. And you mentioned, oh yeah, I made piles of money starting a company. Truth is, I made too much money relative to my contribution. A lot of it was luck, right place, right time, multiple times, plus a lot of hard work, but also recruiting a bunch of great people. But the way the power game works today, the returns go way too much to the top. Fucking ridiculous. There’s no reason that Zuckerberg, who, yes, put in a bunch of hard work, but who started a business to help him get laid and then, by a series of amazingly lucky navigations of the wormholes of the universe, ends up being worth tens of billions of dollars. Fucking nuts. Nor would he have needed to be. If you’d said, “Hey, Zuckerberg, not only do you get laid, the thing works great, and you’ll make ten million dollars,” he’d go, “Where do I sign up?” This absurd overconcentration of wins to individual people is just sick, and I think it will eventually lead to guillotines in the street if they don’t wise up. And this goes to your fourth point, conventional private assets. How do you get the conventional private assets out of the hands of this tiny, tiny group that’s gonna control them all?
David: Well, first, I want to say, yeah, I tend to agree with you. When you play this out in the long run, you look at more and more capital concentration and capital intensification, and the economy becomes lopsided. Eventually, the value of a founder or the value of a CEO might drop as well. If what a venture capitalist does and what a CEO does is primarily cognitive labor, then we might be seeing the last generation of big CEOs and big venture capitalists. Because if there’s no market differentiation, no way for that labor arbitrage to be concentrated, then we might end up in a pure capital world or what some people call hyper capitalism, where there is no labor, not even a CEO, where Mark Zuckerberg doesn’t even have a job. So it’s just whoever owns it.
So then to your point: the law says that we have strong property rights, and therefore you own Meta or Microsoft or NVIDIA, therefore you’re rich, and that’s just the way it’s gonna be from now on. As for your question about how you actually get at conventionally owned private assets in this situation, it’s kind of like the endgame of Monopoly: whoever ends up with 55 percent of the property then has 65 percent of the property, and eventually ends up with all the property.
Jim: Peter Thiel and Marc Andreessen own the whole country. Right?
David: Right.
Jim: Yeah. And they have some feudal lords which they’ve given little slices to. So maybe there’s 10,000 knights and two princes, and that’s the deal. Until the American people have enough gumption to get their guns out and just kill them. And remember, people, private jets are extremely vulnerable. Land a drone on the wing with a little bit of magnesium on it, and it burns into the gas tank. The gas tanks are in the wings on these things, people. These targets are way softer than we think. And at some point, people are gonna wake up and say, nope, this ain’t the way we wanna go.
David: Right. Yeah. So what you’re outlining there is use of force, which historically has been absolutely part of the repertoire that people have. So before we go there, let’s transition to part two, which—
Jim: Is the pyramid of power. Right?
David: Yeah. Yeah. So pyramid of power, fundamentally, what it comes down to is, okay, labor rights, property rights, democratic rights, all of those are predicated upon some balance of power. And when you take a step back and you say, what are people really afraid of? You know, losing my job. Okay, that’s a first-order fear, but why? So that you can feed your family and live the lifestyle that you want. Okay. But why? Why is that a problem? It’s like, well, because I’m not gonna have any leverage. I’m not gonna have any power to get what I need and what I want. That’s really what it comes down to. And so for me, the pyramid of power actually precedes the pyramid of prosperity. Pyramid of prosperity assumes that we have enough leverage, that we have a civic operating system that will empower citizens.
Jim: Power creates the pyramid of prosperity.
David: Exactly. So how do we do this nonviolently? Ideally, we don’t get to the point where we live in a cyberpunk dystopia and—
Jim: Well, we gotta let them know that we’re prepared to go kinetic if we have to, right?
David: That’s the credible threat. Right? The credible threat, the use of force, that sort of thing. So yeah, exactly. Building this up, there’s five layers, and the names of each of these layers are a mouthful. So bear with me.
Layer one is the immutable civic bedrock: identity and records. So it’s public records and your identity. By records, I mean things like land records, democratic records, voting records, that sort of thing. And then your identity: who you are, your reputation, your trust. Divorce that from the state. You own it, it’s based on blockchain, it’s user friendly, and you control it.
Next layer, layer two is open programmable rails or freedom to transact. So this goes back to the stable coins that we mentioned earlier. You get rid of ACH transactions—or maybe not get rid of them. You overhaul ACH transactions. You overhaul the SWIFT network—
Jim: Or just outcompete them based on price. Right? If stablecoins weren’t tied up in government overregulation, they would have already put all that out of business.
David: Yeah. I tend to agree with that. And so freedom to transact, to do so quickly and cheaply and efficiently, and privately. Privacy from the state and from corporations, that’s another really important thing. So zero-knowledge proofs and fully homomorphic encryption, and it’s very critical to have quantum-ready algorithms, that sort of thing.
Jim: Maybe not. We’ll talk about this next. I have a radical alternative perspective on this.
David: Looking forward to hearing that. But anyway, that’s layer two, which is freedom to transact, because that again is power. If you can transact and there’s no possibility for the state or companies to be rent seeking, that’s power to the people.
Layer three is radical transparency. So that is transparency of financing, transparency of algorithms and decisions, for the state or for a publicly traded company. It’s the same reason that we had Sarbanes-Oxley, which I think has been removed. It’s the reason that we have the SEC, the reason that we have Freedom of Information Acts, and those sorts of things. But what if we make it even more sophisticated, where transparency is directly controllable by the people, through consensus about what is transparent? Because knowledge is power. Information asymmetries are one of the primary ways that companies and states maintain power.
Jim: And also make money. You know, Wall Street’s all about manufactured opacity.
David: Right. Exactly. So transparency, generally speaking, gives power to the people. And when you look at nations like Russia, which is super opaque, or at China and Iran, you can see that the nations that are more transparent than those have better democracies and are more productive. So transparency tends to be better for civic society.
Jim: Alright. So this is where I go radical. Let’s go all the way. Let’s make all the ledgers world readable, make all the ledgers mandatorily connected to humans via no more than five links of legal abstractions with the ownership between the legal abstractions fully documented so that every transaction could be tied to a legal entity, which could be tied to another legal entity, et cetera, which is tied to a human. And so we know what humans are involved in everything and to what degree. Now I proposed this back in 2012, and many people were horrified. And I added a modification to allow a certain amount of off-chain money, anonymous money for vice, but no more than $2,000 a month. Right? Which evaporates after thirty days. Right? So you can buy your drugs. You can buy your hookers. If you have a vice habit of over $2,000 a month, you’re a piece of shit anyway, but I’m not even sure that that’s necessary. Let’s start with the thought experiment. Let’s go all the way. Public ledger, everything tied to a human, always trackable back to a human. And why that’s important is because it provides the recourse, right, to your point about power. Alright. You know that these people are up to no good, and you can prove it. Look. Here’s—let’s follow the money. Then the mob comes forth with the pitchforks and torches. So radical truly radical transparency could be possible in this world, and it might be good.
David: Yeah. I’ve gone down that rabbit hole of thinking, and I do think that privacy has a place. Certainly, if someone wields a certain amount of influence and power, if there’s backroom dealings, we want to be able to know about that and not go through the courts for a decade or more. At the same time, I would also say we should just legalize the vices. Right? That’s one place where I agree with neoliberalism. If there’s a market for something, suppressing that market doesn’t work. It’s the bootleggers and Baptists problem. Make the drugs legal, make prostitution legal, and then it’s safer for everyone.
But on a personal level, going back to information is power, if you do have full radical transparency down to the individual level—and this is my primary counterargument—that can become a surveillance state. You end up with thought crimes and those sorts of things where the government might say, and of course it would be government of the people, for the people, by the people, “Well, let’s say we backtrack on gay rights and ban gay marriage again.” Then with everything completely radically transparent, that becomes an information hazard where there are records about me that I don’t want out there. So I think there is a tension between the civic benefits of total radical transparency and the right to privacy.
Jim: And that is legit. But we’ll tie that back to your next one, your level four, which is important. But a final thought on truly radical transparency is they can know all about you if they want to. Right?
David: They can already do that.
Jim: Yeah. They can already do that, and to a very substantial degree. Take a look at what you can get for ten cents from Acxiom, for instance, about you: thousands of data points. And I can call some gray market guys who know people in banks, and I could get your credit card transactions, if I were so inclined, for a few hundred dollars. So they can know all about you, but we can’t know about them. And radical transparency automatically kills the information asymmetry, even though it does have some negative side effects, as you say, an information hazard. Which is why I would say I would never do radical transparency unless we had a sound layer four. So take it away on layer four.
David: That’s a good segue. Layer four is direct programmable democracy. One way to frame it is that we live in a republic, which is an Iron Age idea using agrarian-level technology of representatives to represent our willpower. And in America, we still use paper ballots. This is why I say we’re basically still running on an Iron Age government operating system.
Now, in this future where AI has completely destroyed labor, AI is also smart enough to be your representative and to help be your interface with the government of the future. Speaking of multipolar traps, the moment someone has the levers of power, they become a very large target for manipulation, coercion, bribery, and that sort of thing. So what if you just remove the middlemen?
I don’t mean direct democracy like straight up-and-down voting. We can talk about quadratic voting. We can talk about expressing your desires and needs into a value graph or a moral graph. I know you talked to Ellie quite a while ago at the Meaning Alignment Institute. There are all kinds of ways you can have programmable democracy that, whatever specific mechanism you use, that apparatus—which is built on blockchain and artificial intelligence and software and provable data—carries out the willpower of the citizens directly.
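(An illustrative aside: one of the mechanisms David name-drops, quadratic voting, is easy to sketch. The credit budget and vote counts below are made-up numbers, not anything from the episode.)

```python
# Minimal quadratic-voting sketch: each voter gets a credit budget,
# and casting n votes on one issue costs n**2 credits. Voters can
# express intensity of preference, but buying dominance on a single
# issue gets quadratically expensive.

def vote_cost(votes: int) -> int:
    return votes ** 2

def max_votes(budget: int) -> int:
    # Largest n such that n**2 <= budget.
    n = 0
    while (n + 1) ** 2 <= budget:
        n += 1
    return n

budget = 100                 # assumed per-voter credit budget
print(vote_cost(3))          # 9 credits buys 3 votes on one issue
print(max_votes(budget))     # 10: spending the whole budget buys only 10 votes
```

The quadratic cost is the whole trick: a voter who cares mildly about ten issues can cast one cheap vote on each, while concentrating the entire budget on one issue yields only ten votes, not a hundred.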
Let’s say the people want to create a new office responsible for colonizing Mars. In ten, fifteen, twenty years, we have the Office of Martian Colonization, and it’s a globally based programmable democracy where it says, “Okay, the people have decided it’s going to have a budget of 10 billion dollars a year, and you have this mandate, and we’re going to appoint x, y, and z people as the mission director.” That’s the kind of thing you could theoretically do today through decentralized autonomous organizations.
But if you take that same mentality and say, instead of legislators passing laws for us, we all debate our values directly. And then the AI says, “Okay, based on the constitution that we’ve agreed on and based on what everyone is saying, let’s implement this new law directly.” And then it becomes immediate and automatic. That’s one of the things—the immediacy and the automatic nature of it.
Let’s say you want to change traffic laws. In this hypothetical future with a direct programmable democracy, we say collectively, “You know what? We are done with speed limits.” Everything is a robotaxi in the future, and the speed limit nationally is still 70 miles per hour. And we say collectively, “Let’s circulate a petition. Let’s change the speed limit to 120 or 150, because these new cars are just that much safer and it’s just that much more efficient.” That could be an example of direct programmable democracy where suddenly everyone made the decision that the national maximum speed limit is now something else.
Jim: Yeah, I like this approach a lot. In fact, I’ve written quite a bit about something called liquid democracy, which is a variant on this. And I will say, I’ll give a little bit of a warning, which is if we look at history, the mob can do crazy things in the short run. Would you have wanted our civil liberties to be voteable the day after 9/11? I wouldn’t have.
I read a very good essay called “An Introduction to Liquid Democracy” that’s up on Medium, and I have several other detailed essays on Medium about different aspects of liquid democracy. But one of the things I identified very early is you need to build in viscosity into these systems and reconsideration, and perhaps asymmetry in voting in and voting out. I like the idea of requiring a 60 percent vote for something to occur and a 50 percent plus one vote to cancel it—to give a bias towards inaction or doing nothing.
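(An illustrative aside: the asymmetric thresholds Jim proposes, 60 percent to enact and 50 percent plus one to repeal, can be stated in a few lines. The ballot counts are illustrative.)

```python
# Toy version of "viscosity" via asymmetric thresholds: it takes a
# supermajority to enact a measure but only a simple majority to
# repeal one, biasing the system toward inaction.

def enacts(yes: int, total: int) -> bool:
    # New measures need at least 60% support.
    return yes / total >= 0.60

def repeals(yes: int, total: int) -> bool:
    # Repeal needs only a simple majority: 50% + 1.
    return yes > total // 2

print(enacts(55, 100))   # False: 55% support is not enough to pass
print(repeals(51, 100))  # True: 51 of 100 suffices to undo it
```

The asymmetry means any measure that squeaks through near 60 percent remains easy to reverse, which is one way to keep a post-9/11-style mob surge from locking in.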
If you go back to the Founding Fathers, there was a reason they built this complicated balance of powers because they were pretty jaundiced about the tendency of the mob to do crazy things. And I think those insights about human nature are true. This is where some of these blockchain things, I think, are just idiotic. Every DAO governance thing I’ve looked at, I’ve torn apart within minutes. I go, “This will not work,” or “It’ll be hijacked,” or “This is a false front for a small group of people behind the scenes.”
The engineering of these things is absolutely critical. I’d love for us to see an Institute of Decentralized Governance get established and to try these things at small scale. Something like liquid democracy could work, but the German Green Party actually tried it—that was a disaster. It may have been the details of their implementation, it may have been that it’s a bad fit for human nature. So let’s start it out in ten towns and govern them by liquid democracy. If that works, let’s go to a hundred, then do some counties, then do some states, then do some nation states, then do the world. I’m absolutely with you that Iron Age technologies, or call them Steam Age technologies . . .
David: We’re a steampunk nation. It’s true.
Jim: Yeah. It’s no coincidence that Watt finally perfected his engine in 1776. The year 1776 was the steam engine, Declaration of Independence, and The Wealth of Nations. A big year. So yeah, it’s time to move on from steam power to something better. All right, so that’s level four. Now let’s talk about level five, which I loved actually. That was a really interesting and I think important insight.
David: Yeah. So everything that you’re talking about, which is the governance of the governance. Right? Because you can create a system, but if that system itself is flawed, you need to be able to modify the system. And, you know, in constitutional republics, we do have ways for the system to self-modify through amending the constitution and that sort of thing.
Layer five is called forkable constitutional meta-governance, which is a gigantic mouthful, which basically means with the correct mechanisms and the correct consensus, you can change the way that this new operating system works. You can change the constitution. You can change the bedrock values. You could add something like you said, viscosity, which I actually love that term because it’s so visceral. And it’s very clear what you mean by it, which is like things are going to be gated and there’s going to be a certain level of deliberate slowness to the way that rules are changed.
This is kind of where the world is already. And what I mean by that is, everywhere from Brazil to Switzerland to America, less so in China (they’re very suspicious of these kinds of things), plenty of places all over the world are working on different versions of this meta-governance. And meta-governance is governing the governance. Right?
You know, Elon Musk’s idea was simple straight up-down democracy, which is a bad idea. You get tyranny of the majority; you get mob rule. So then you say, okay, how do you slow that down? How do you change that? Do you adopt a supermajority rule? Do you have cooling-off periods? There are all kinds of options. So those rules about how to conduct business are layer five. That’s the pinnacle of the pyramid of power. And again, this can also be done, and is already done, through code, through algorithms, through those various consensus mechanisms.
And actually, given what you’re talking about, have you read Liquid Reign, or talked to Tim Reutemann, who wrote Liquid Reign?
Jim: No. I do not know that.
David: I definitely recommend it. It’s a work of fiction, but the guy is really smart. And he basically walks you through like the main character got into a car accident and wakes up in I think 2052 after all this has been implemented. And so here’s him playing catch up for thirty years of technological change. But the entire thing is based on liquid democracy and just taking it out.
Jim: Cool. I’ll definitely read that. If it isn’t bad, I’ll have him on my podcast.
David: Yeah. No. It’s a good book. I will say that he should have used an editor. It’s self-published, and there are a few typos throughout, but the ideas are very well researched, and it’s a fully realized world. So anyway, the idea there, in the spirit of liquid democracy, is that you can change the operating system itself. And of course, you want those changes themselves to be slow and metered and deliberative. But at the same time, if there’s an asymmetry emerging, if there’s something that is lopsided, you need to be able to correct that as you go.
Jim: Yep. And also, I like your idea of forking. Right? In our Game B world, we’ve been sketching this out; the book will be coming out one of these days. We see a whole series of self-organizing membranes, each contained within the next, out to an outer membrane that has a very small set of accords that are downwardly inheritable in an object-oriented fashion. And the different layers can have accords that either apply only to their own layer, or apply to their own layer and are inheritable by the layers within them, and the governance mechanisms and all.
But the key thing is there’s not only forking internally, where any membrane can fork, but the top membrane itself can fork. Right? So there could be Game B and Game B Prime. That’s a very important constraint on abuse. Because it’s interesting, and did you talk about this in your essays? You might have; if not, it’s something else I read recently. One of the ways Rome was able to perfect a sort of workable democracy was that periodically the plebes would just leave and go across the river. I forget what they called it. They marched away and just said, “We’re gonna go start Rome Two.” And then the patricians said, “Oh, I don’t wanna empty my own chamber pots. Goddamn it. Let’s give the plebes a fucking vote. Okay, then let’s let the plebes become consuls. Let’s establish the tribune of the plebes.” Right? And so multiple times, the plebes of Rome threatening to fork forced the powers that be to grant them more and more authority. The threat of a fork is in some ways as powerful or more powerful than the actual instance of forking.
David: Yep. Yeah. So this goes to what Balaji talks about with the idea of a network state. I don’t necessarily agree with his model in full, but certainly the rudiments, some of the components are there. And fundamentally, what we’re talking about is just A/B testing governance, where you say, hey, we have the ability to exit. Because right now, under American constitution and French constitution, you don’t really have the ability to make a new state. If you don’t like something, you need to organize enough resources and so on and so forth.
Jim: Alright. Very good. Any final thoughts you want to provide?
David: I remain pretty optimistic. I see solutions. I see a path. I’ve done a tremendous amount of homework, studying everything from history to economics and so on. And, you know, when you look at the concept of path dependency, it shows that you need to go from here to there, rather than saying, okay, let’s imagine the shining city on the hill and just build that. You have to go through incremental, interstitial steps. And I would say that that’s probably one of the key takeaways: just saying, okay, what are the principles that we can follow? I created the pyramid of prosperity and the pyramid of power. And that, I think, is the way to view this courageously and boldly without trying to tear everything down and start from scratch. Some things, you know, you do break. There is creative destruction, which is part and parcel of an innovative culture. We’ve got the innovative culture. That’s the thing that’s really in our corner: as a culture, we are not afraid of disruption. And that will serve us really well, because the disruption is coming, like it or not.
Jim: Alright. Very good. I would underline that, but also add that it’s going to take a realistic, Bismarckian application of power by the solidarity of the people to make something like this happen. Because otherwise, the attractor of techno-feudalism seems awfully obvious.
David: Oh, yeah. I agree. It’s not guaranteed that we will figure it out. We can. There is a path.
Jim: There is a path. I absolutely agree. I want to thank David Shapiro for a very interesting and important body of work in his six posts, and I’m looking forward to reading the book. Maybe we’ll have you back when you publish the book if there’s some new stuff in there. Even if there’s not, we’ll help you sell the book. What the heck? Right? And as always, all the things we talked about will have links on the episode page at jimruttshow.com. So thank you, David Shapiro, for a really excellent episode.
David: Thanks for having me. Good to talk. It was fun.