Transcript of EP 318 – Adam B. Levine on Thinking on Demand

The following is a rough transcript which has not been revised by The Jim Rutt Show or Adam B. Levine. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Adam B. Levine. Adam is a serial entrepreneur, most recently serving as Chief Innovation Officer at Blockade Labs. He’s also a longtime podcaster and generally a knowledgeable and opinionated guy regarding cryptocurrencies. He’s probably best known in that regard as the longtime host of the Let’s Talk Bitcoin podcast. Welcome, Adam.

Adam: Thank you, Jim.

Jim: Good to have you back. Adam and I chat regularly about all kinds of stuff, and it’s always fun and always worth the time. So I thought we’d do one of our little rattle-alongs on the air today, and he’ll probably talk a little bit about something he’s working on, but he might not. Whatever. You know, we ain’t building a piano here. We’re just doing a podcast. Right? So anyway, the kind of topic we want to talk about is what is humanity’s rapidly changing relationship with getting things done under the impact of the insanely rapid rollout of thinking on demand or whatever the hell it is that’s going on. So, Adam, over to you.

Adam: Yeah. The last time we talked about the broad environment in which all of this stuff is happening, and we planned to have this conversation about two weeks ago, and it’s gotten pushed a couple of times. And I’m really glad it has because we’ve witnessed another almost entire reinvention, I think, of the ecosystem in which we’re all operating when it comes to artificial intelligence.

If we had talked before yesterday, this would have been a different conversation. Yesterday was the official announcement and actual release of GPT-5, not a trickle-out but a full-on release; everybody has access to it. And that’s important, but the important part is actually not the model itself. It’s the price.

Before yesterday, we were living in a world where the upper end of the spectrum was Claude Opus 4.1, absolutely the best model you could get, especially if you’re trying to do something useful. Seventy-five dollars per million tokens out, which is roughly 750,000 words, and $15 per million tokens in. That’s still a great price compared to any other way of getting that kind of work done. They have a lower-tier option, Claude Sonnet, at $15 per million tokens out and $3 per million tokens in. Huge savings.

And then GPT-5 slots in, and it’s $10 per million tokens out, and it’s $1.25 per million tokens in. Suddenly, that’s the new ceiling for the market for anybody who isn’t in the upper one-tenth of one percent who just wants the most expensive thing possible.

On the low end, OpenAI has been talking about releasing an open-source model, and the question was always how much they were going to undercut themselves. It’s no coincidence this happened as GPT-5 was released: they put out some really great open-source models that I’ve been playing with locally, and they’re really powerful. They’re not the only open models out there, either. There are the new Qwen models and the Kimi model, which are fully capable, probably 85 percent as good as the state of the art.

But if you look at someplace like OpenRouter where these are being offered at a market rate—remember we were just talking about the new cheap price for the best stuff out there, $10 per million tokens out, $1.25 per million tokens in—you can find it for 11 cents per million tokens out, 11 cents per million tokens in on models that are 85 percent as good as the state of the art.
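To make those per-token prices concrete, here is a back-of-the-envelope cost comparison in Python, using only the prices quoted in the conversation. The model labels are informal, not official API identifiers, and real pricing changes frequently:

```python
# Per-million-token prices as quoted above (input $/M, output $/M).
# Labels are informal shorthand, not official API model names.
PRICES = {
    "claude-opus-4.1": (15.00, 75.00),
    "claude-sonnet": (3.00, 15.00),
    "gpt-5": (1.25, 10.00),
    "open-model-openrouter": (0.11, 0.11),
}

def job_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Dollar cost of one job, with tokens priced per million."""
    p_in, p_out = PRICES[model]
    return (tokens_in / 1e6) * p_in + (tokens_out / 1e6) * p_out

# Example: a job with 200k tokens of input and 50k tokens of output.
for model in PRICES:
    print(f"{model}: ${job_cost(model, 200_000, 50_000):.2f}")  # gpt-5: $0.75
```

At these rates the same job costs $6.75 on Opus 4.1 and 75 cents on GPT-5, which is the price collapse being described.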

This is just so profoundly impactful, not for people building stuff, but for people using this intelligence inside of applications. It solved the problem for me. I’ve been working on something—I’ll talk about it more in a minute—taking intelligence and applying it to novel problems that we’ve never really had the ability to do before with what I think of as a form of perfect representation, which is really what I mostly want to talk about today.

It’s always a question of how much value you get versus how much it costs. The math was pretty good two weeks ago. I’ve been trying to get the price down to $5 and still have a decent margin and still apply enough intelligence. And it was like, ah, can just barely get there. Then the last two weeks happened, and suddenly it’s like, oh yeah, 97 percent margin and smarter things than we had before. This isn’t the end. This is August. We’re eight months into this year, and we’ve seen—what do you think, Jim? Like, four epoch-level upgrades across the stack so far and surely at least another two this year. It’s totally wild. I’m curious for your thoughts on it.

Jim: Yeah. Oh, absolutely. Let me lay out a three-dimensional framework, which actually speaks to why this is so important. We all talk about the models, right? GPT-4, 4.1, 4o, o3, 5, right? 5 Pro, right? Which, by the way, I’ve been playing with, and it’s nuts.

Adam: No, I haven’t tried it yet. That’s good.

Jim: I mean, I’m working on a scientific paper. Man, it’s like having two postdocs, one on your left shoulder and one on your right, who frankly are a shitload more skilled at the details than you are, even if they aren’t at the big picture. It’s amazing. And it takes time: eight to twelve minutes for the kinds of questions I’m giving it.

But we have to think about more than the models. We also know that the underpinnings, the GPUs, tensor processors, or someday something else, will be advancing at least at Moore’s Law rates, doubling every 18 months or so. So far that technology has actually been doubling faster than that, because it’s architecturally simple: a GPU is a shitload of one thing stamped out over and over again, unlike a CPU, which is a whole bunch of intricate stuff.

Here’s the third one, and here’s why what you say resonates with my strategic view and the advice I give to my corporate clients, the ones willing to pay my ridiculous fee. Here, I’m gonna give it to the whole world for free: the third vector is agent frameworks, which is how you orchestrate these amazing models, often different ones for different purposes within the same process. The beauty of this agent vector is that it’s growing, God knows how fast, way faster than the chips or the models, partially because it’s just writing software. But here’s the little thing that makes it even sicker: what is the thing these models are best at? Writing code. So the time it takes to build agent frameworks has basically been cut by a factor of at least ten. Some people say a factor of 30.

Adam: Surprisingly fast pace.

Jim: And then agent frameworks going at some speed we don’t even know because people weren’t really paying serious attention—

Adam: Every two months is what I’ve been figuring. And that’s a super important thing to focus in on. Toward the beginning of the year, we were like, “Oh man, it’s actually possible to use something like Gemini 2.5 Pro.” That was the kind of breakthrough model where you go from “this is helpful” to “this can mostly drive,” and it has a big enough context window that you can do that.

At that point, really the kind of vibe coding thing starts. And then over the course of the next month or two, you see those things get really integrated with the existing IDEs and all the plugins for VS Code and stuff like that. Then you have the introduction of roles into those systems where an agent could put on a different context hat to have different sets of instructions, do different things. Then you had the ability for the agent to pass back and forth between those different hats automatically.

Then you had the planning thing, where suddenly it was like, “Okay, we’re building things complex enough that we actually need full, defined specifications, and we don’t know how to do that as vibe coders. So let’s use the AI to do that too.” We talked about it last time: the BMAD framework, which is open source, basically a big set of prompts, named after Brian Madison. It’s a project-specification set of AI identities that cascade through and help you go from “here’s my one-sentence idea” to “here’s a full specification with all the stories and stuff like that.” But it was something you had to strap on from the outside.

Then two months later, you get Kiro, which is, at least as far as I’ve been able to find, probably the most developed version of this. It’s just another code editor like everything else, but it’s a code editor that learned from what we were doing two months ago, strapping specs on from the outside, and baked that into the software itself. And now that’s the new hotness. My biggest frustration with Kiro is that they won’t let me pay them money for it.

Jim: Yeah, same thing’s happened to me where you’re working hard and it says, “You’re out of gas for the day. Sorry, sucks. Come back tomorrow.” Gladly pay you $100 a month, no problem, you assholes.

Adam: So I’ve actually stopped a little bit on the pushing-forward-with-development side, just kind of waiting for that to come out, because that feels like the obvious next thing. But what we just described started when 2.5 Pro came out, I think in February or March, and really was impactful. So it’s now August. It’s been like five months, and we’ve had three generations of progress, each building on top of the prior.

Jim: And then there’s also movement in the more professional-grade tools like Claude Code, right?

Adam: Yeah, totally.

Jim: I’ve not had any chance to play with that, and God knows what it’ll do. I noticed Microsoft announced right off the bat that all their tools have now been upgraded to GPT-5, but even their tools are, frankly, kind of cruddy. With GPT-5 behind them, though, who knows? I also noticed Cursor announced GPT-5 support as of this morning. I’m gonna have to give that a shot.

Adam: I’m really curious about the architecture thing. I think that’s really what it’s gonna come down to for me. When we see this rapid iteration from like, here’s Cursor last year to here’s Cursor today, that’s a significant improvement. But it’s nothing compared to Cursor three months ago compared to Cursor today. And so we keep seeing this thing—it’s bizarre. Like in any other world, you’d be like, well, obviously, you just improve the thing that you already built and have done all the work for. But in practice, what I think we see is people like, “No, no. New ideas, new fork of the base code, of the base repo, entirely new version that has none of the baggage that came with it.” And just who cares who we leave behind because this is the new state of the art.

Jim: The fact that you can write the code ten times faster makes that feasible. Back when you had to hire an army of nerds to do it, you’d pay them $250,000 a year, I guess the going price these days. And now you go, yeah, I need two guys who know how to shepherd LLMs, especially for blank-sheet-of-paper code. Sometimes LLMs are not so good at refactoring old and gnarly code, but they’re great for blank-sheet-of-paper projects. So it’s quite interesting. Now I’ll add that when I’m talking about agents, I go a step further. You’re talking about agents that sit close to the models, helping you do development or write specs for projects. By the way, I found that Cursor will, if you’re too lazy to write your own spec, write the spec for you.

Adam: Or reverse engineer it. Yeah, exactly. Pretty incredible.

Jim: Yeah. Take a piece-of-shit code you have, have it write the spec. Say, now improve the spec. Okay, now regenerate the code. But I’m talking about deeper business problems, where there could be many steps in the process: orchestration of multiple different LLMs, feeding the results from one LLM into the prompt of another, maybe a different engine, et cetera, to solve even big business problems. In fact, I know of a company that’s working on essentially a CEO advisor that monitors all the Slack traffic of the company, has access to all the company’s information, and, in theory at least, maybe a bridge too far for right now, provides a Slack interface so the CEO can use it as a trusted adviser; they even give it a C-title and all this stuff. It’s kind of nuts.

So those are the things I’m thinking about: the reach of agency and orchestration is going deeper and deeper, and the other two dimensions, the model capability and price and the hardware, which impacts the price component, are just enabling more and more by the week.

When I talk about strategy with people, I tell them there are two errors you can make. One is to assume too much right now, but the more common one is to assume not enough six months out. So wait a minute, where’s the target? Where is the right place to be shooting for this first implementation? I know, for instance, of a company that tried to fully automate customer service, a total debacle as it turned out. Six months ago, that was an insane thing to have even attempted. But maybe it’s possible in six months. My meta advice about problems like that is to reconceptualize what you’re doing so that the humans in the loop are parameterized and can be reduced over time. That is kind of hard to conceptualize, but it is the right way to do it.
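The multi-step orchestration Jim describes, feeding one model’s output into another model’s prompt, can be sketched abstractly. `call_model` here is a hypothetical stub, not a real SDK call; in practice it would wrap whatever provider API you use:

```python
# Sketch of multi-model orchestration: each step's output becomes part of the
# next step's prompt, and each step can use a different engine chosen for
# cost vs. capability. `call_model` is a placeholder, not a real provider API.
def call_model(model: str, prompt: str) -> str:
    # In a real pipeline this would call an LLM API; here it just echoes.
    return f"[{model} response to: {prompt[:40]}...]"

def pipeline(task: str) -> str:
    # Step 1: a cheap model drafts a spec from the raw task description.
    spec = call_model("cheap-model", f"Write a spec for: {task}")
    # Step 2: a stronger (more expensive) model critiques the draft.
    critique = call_model("strong-model", f"Critique this spec:\n{spec}")
    # Step 3: the cheap model revises the spec using the critique.
    return call_model("cheap-model", f"Revise the spec.\nSpec:\n{spec}\nCritique:\n{critique}")

result = pipeline("automate tier-1 customer support with a human-in-the-loop slider")
```

The design point is that the chain, not any single model, is the unit of capability, which is why the agent-framework vector compounds with the model and hardware vectors.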

Adam: My temptation is always to project way into the future and try to figure out the implications of these things and what happens as a result. Two years ago in December, we had just gotten the very first start of the agent frameworks. The very first thing I did with that was lead an internal project at Blockade that built an agent that would use the product on your behalf. We built a prototype interface and built the thing, and it didn’t really work very well because it was so early. I feel like a lot of normal people who are not me are falling into that trap now. Given what we’ve talked about so far, four epoch-level changes this year and it’s only August, it’s impossible to plan. It is impossible to look beyond about a six-week time horizon. But you have to.

Jim: You have to. But that’s why you have to have a meta strategy so that your planning has flexibility built in. As I said, a simple case of automating customer service, which is one of the obvious low-hanging fruits, right? Customer service has gotten so bad. In fact, I recently did a podcast titled “Why Customer Service Sucks.” Go listen to that if you want, if you don’t believe me. I had a guy who did an in-depth article on why it sucks. It’s quite interesting. But you still need humans in the loop, and you need to have your planning with a slider so that the human element can adapt to the capability of the three vectors we talked about earlier.

Adam: For sure. But full-on integration and stuff like that at this point, I think we’re still a little too early for, though we’re obviously very rapidly approaching it. For real, existing businesses, I think the place to play with this is all the places where I live: totally unfunded or absolutely minimally funded proofs of concept for things that were just never really possible before, but are now becoming possible in ways that make them really interesting. And that’s the primary thing I want to talk about. I don’t think last time we talked about the representation crisis at all, did we?

Jim: No. No. Not at all. Funny, I just released a podcast last night on the internal representation crisis in LLMs—Ken Stanley’s episode. But anyway, tell me what you mean by representation.

Adam: For sure. I haven’t been saying this to many people over the last couple of weeks, so here it goes, relatively unrehearsed and off the top of my head. The short version is that for humans, representation is inherently a limited resource, because representation requires intelligence. In practice, that has meant for all of history that only a human can represent another human, or other humans. What you find, if you start looking at things through this lens, is that because of this, the vast majority of humans in the vast majority of situations have no representation, or really terrible representation. A scenario brought up yesterday at the GPT-5 reveal was, “Hey, you’ve just been diagnosed with three types of very aggressive cancer.”

Jim: I saw that. That was pretty crazy.

Adam: But this is not unique. I’ve been going through some stuff with a family member who has some dental issues, and it’s been hellish for them, going from dentist to dentist; even with the money to pay for the thing, you don’t know what to do. It’s a trend I first saw back when I was doing the Bitcoin podcast in 2013, and the people who wanted to advertise were cloud miners. It was a service where they would ostensibly mine Bitcoin or something else for you: just pay us money, and we’ll pay you out. And they were, I’m not going to say always, but the vast majority of the ones I’m aware of wound up being, Ponzi schemes. There never was anything. It was just a Ponzi scheme.

The challenge always is that they’re the only people who know how their system works. And if they’re the only people who know how their system works and it’s a scam, then, of course, they’re going to lie to you and tell you what you want to hear so that you do the thing. I’m not saying that that is what the medical system is or what everything else is, but it’s not actually that far off. When you go to the mechanic, you’re like, “Yeah, and that’s what I should pay because you’re a good mechanic who’s not going to rip me off.” But ultimately, you have to trust.

Again, if you’re Elon Musk, I bet you have pretty good representation in almost everything you do. But if you’re like everybody else, probably not. You’re probably well represented in some areas, but not in others. And so what’s happening now is that as the intelligence of these nonhuman intelligences increases in versatility and nuance, and as the cost decreases, even if you’re paying for somebody else’s hardware, though eventually this runs on hardware you own, like a $250 Jetson or something, you can increasingly connect these dots so that you and I and everybody else can have our own AI representative that is not trying to be a clone of us. I’ve seen some spins of the idea that are—

Jim: Bizarre.

Adam: Yeah. I mean, it’s fear. That’s what it is. People are like, “Well, if we don’t turn them into us, then they’re going to eliminate us,” or something like that. And that’s just not the way this is working. What you find with all of these things—I believe very firmly, can’t prove it yet, but believe very firmly—is that as the amount of intelligence grows, the amount of compassion also grows within them, and the amount of understanding and nuance grows. And certainly, you can take those and you can train them out, but I don’t think that’s going to be the majority of the experience. So I’m quite optimistic about that.

But the idea is that as an individual, when I interact with other humans, we’re all busy. Right? I got stuff to do. They got stuff to do. I got a family. They got a family. I gotta worry about how I’m gonna live. They gotta worry about how they’re gonna live. There’s all this stuff besides the question of whether there’s a connection between us that is useful to both of us. And so as a result, humans very understandably prioritize, and 99 percent of the opportunities that potentially exist out there for us, we’re never gonna even know about or pay attention to because it’s just not worth the time to go looking for such low probability stuff.

The idea here is that’s true so long as we are representing ourselves or another person is representing us. But when you change human to AI agent whose only job is to work for you and to do this type of representation, things really start to fundamentally change in terms of the implications. Because you can connect—instead of it being human to human directly—it’s human to agent to agent to human. And so you are introducing an intermediating layer that has none of the constraints that humans do and can effectively map out the entire possibility space for each individual person and then present those things as almost a menu of opportunities. If you knew about every opportunity that existed for you that you are interested in, what would you actually do? That’s really where I think we’re going with this stuff. So I’ll pause there.
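The human-to-agent-to-agent-to-human filtering described here can be sketched in a few lines. A toy illustration with made-up names and interest sets; nothing here reflects a real product or protocol:

```python
# Toy sketch of human -> agent -> agent -> human intermediation: each agent
# holds its principal's interests and offers, and agents cross-match so a
# person only ever sees the opportunities relevant to them. Data is invented.
from dataclasses import dataclass, field

@dataclass
class Agent:
    principal: str
    interests: set = field(default_factory=set)
    offers: set = field(default_factory=set)

    def negotiate(self, other: "Agent") -> list:
        """Surface only what the other side offers AND my principal wants."""
        # Everything outside this intersection is filtered out before the
        # human ever sees it -- the agent maps the possibility space for them.
        return sorted(self.interests & other.offers)

jim = Agent("Jim", interests={"vintage audio", "complexity science"},
            offers={"deer hunting stories", "podcast slot"})
adam = Agent("Adam", interests={"podcast slot", "agent frameworks"},
             offers={"vintage audio", "crypto commentary"})

print(jim.negotiate(adam))   # Jim hears about vintage audio, not crypto
print(adam.negotiate(jim))   # Adam hears about the podcast slot, not deer hunting
```

The point of the intersection is exactly the later deer-hunting example: the agent layer carries none of the human constraints, so low-probability matches become cheap to discover.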

Jim: Yeah. Exactly. My listeners know that I rant about the trillion-dollar opportunity from time to time, and this is a facet of that. To recap: we have this amazing thing that’s evolved over the last forty-five years or so. It’s called the infosphere, or some people call it the Internet, but it’s even more than that. It had all kinds of opportunity for goodness, but it’s been hijacked by crazy shit. To my mind, the biggest hijacking was the turn to free-but-ad-supported, which was very pathological. If I were dictator of the universe, I would have ruled that one out.

So we have AI, and its first deleterious effect is not gonna be turning us into paper clips; it’s already happened: the flood of sludge. You know, trying to find anything with Google? Good luck with that, because there’ll be seventy-five sort-of-okay-looking false sites generated basically just to skim your attention. So we need a comprehensive buffer from the shit. And what you’re talking about is one aspect of that: agency for actual transactions, or for bringing specific things that are out there to your attention.

And then, one level up, and I think what you’re talking about has some potential to do this, is to create a channel for collaboration and communication that’s not out in the wild, dirty, shitty space and is also up-regulated by agency. So even though you and I have agents and we like to talk to each other, there are some things I talk about that you’re not interested in at all. You don’t waste your time listening to me rant about deer hunting or something; your agent just filters that out. But if I say something you’re interested in, it brings it to your attention. And this is literally a trillion-dollar opportunity. Whoever does this right will actually be able to bring the value of a networked humanity to every person in the world.

And to your point, we didn’t have any clue how to do this five years ago. The crazy effectiveness of transformers, which caught everybody by surprise except for like nine people, has made this possible. And the insane VC land rush to burn money gathering market share makes it effectively feasible already, even though it’s probably not yet actually economically feasible. But soon, the three vectors will make it economically feasible too.

Adam: Well, I think there’s even a step beyond that, which is how does this develop? Is this a service that we pay for? Or is this like a new substrate? I very obviously and firmly make the argument that to the extent this is created, what it does is it creates essentially a meta brain for humanity. And that is something that fundamentally only works if you have so much competition in the space that it’s impossible for monopolies to develop.

That was a really big concern that I had early on about where AI was going. I knew it was going to do lots of good things, but there were questions about whether, as you scale up intelligence, do you need to scale up the size of the data center you’re operating out of? Is there a direct linear relationship or some type of relationship? No, there’s not.

Is closed source going to really run away with it, in large part because of that first part, but just in general—if they’re spending hundreds of billions of dollars collectively to push the state of the art forward, what’s going to be the difference between the most expensive commercially available version and the totally free open source version that you can run on hardware bought for less than $10,000? The answer is 15 to 20 percent right now, and the gap is closing and closing as time goes on, hilariously led by China, but that’s a whole other conversation.

So once you connect these dots, things change. We usually have these conversations thinking about how this alters the way things work right now. But with the type of disruption you’re talking about here, let’s just think about barter for a second. Barter is moneyless trade, and the downside is that you need a confluence of interests: a meeting of “I have what you need and you have what I need.” We also need to know about each other, and then we need mechanisms to actually do the trade.

We talk about UBI and how that’s probably going to be a thing, universal basic income, because a lot of people’s jobs aren’t going to actually need to be done anymore—you need some way to keep the system going. There’s this assumption that UBI is going to be it. A lot of problems with that. But in a world that’s hyper-networked in the way that I’m describing here, you don’t actually need money because you only need money for things that are super rare.

For everything that’s just generally available, we have such poor organizational systems that neither of us knows about it or has time to look; it’s implausible now. But if we have an agent that can do that in addition to the rest of the representation, that just understands what exists within the network, then you could conceive of a system that’s UBI-ish but powered entirely by distributed barter. Imagine an app on your phone that’s kind of like DoorDash, except instead of being a career, it’s just your agent saying, “Hey, you’re going home. Pick up this thing from this place, or hand off from this person to this person.” You’ve never once gone out of your way, and yet all delivery is now handled, because the last mile is handled by actual people.
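The “confluence of interests” that makes barter hard is essentially a cycle-finding problem, which is exactly the kind of search an agent network could run automatically. A toy sketch with invented data; real agent-based matching would be far richer than exact string matches:

```python
# Distributed barter as cycle-finding: A has what B wants, B has what C wants,
# C has what A wants. An agent network can search for such trade cycles
# automatically. Toy data and exact-match "wants" for illustration only.
def find_trade_cycle(people):
    """people: {name: (has, wants)}. Return a list of names forming a cycle
    where each person's `has` satisfies the next person's `wants`, or None."""
    def extend(path):
        last_has = people[path[-1]][0]
        # The cycle closes if the last person's item is what the first wants.
        if len(path) > 1 and last_has == people[path[0]][1]:
            return path
        for name, (_, wants) in people.items():
            if name not in path and wants == last_has:
                found = extend(path + [name])
                if found:
                    return found
        return None
    for start in people:
        cycle = extend([start])
        if cycle:
            return cycle
    return None

people = {"Ann": ("amp", "guitar"), "Bo": ("guitar", "turntable"), "Cy": ("turntable", "amp")}
print(find_trade_cycle(people))  # ['Ann', 'Cy', 'Bo']
```

No money changes hands in the cycle: Ann’s amp goes to Cy, Cy’s turntable to Bo, Bo’s guitar to Ann, which is the “UBI-ish but powered by distributed barter” idea in miniature.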

Jim: I’m going to give you another easily very tangible one I have some experience with. About a year and a half ago, I got into the hobby of buying and selling vintage audio equipment online. I tell people about this and many of them are horrified, doing transactions with strangers for cash and concerned about being robbed. Of course, I just tell them, “I stick a pistol in my pocket, and if someone tries any shit on me, they’re going to regret it.” The Jim Rutt method. You really want the Sig P229 in your face? Probably not. And I’ve never had a problem. But most people legitimately who aren’t crazy have serious trepidation about doing that. If these transactions were in a validated network of trust, where you actually knew that the probability was on par with getting hit by a meteor, then there would be so much more trade going on in used goods. How many of our basements and garages and barns are full of stuff that actually would have value to somebody?

Adam: Exactly. But there’s no discovery.

Jim: Almost frictionless way to exchange that. And, of course, the famous problem of artisans. Right? You got Etsy. Etsy is now inundated with scumbags and sleazes and things. It started out good, but the inevitable multipolar trap of wanting to make more and more money so you can raise more and more money captured Etsy also. If you had a network like this, a person who was an artisan could do their artisanal thing, could get signals from the network on whether there was demand for this. Oh, there is? Oh, you want me to knock off 15 of my custom Hawaiian shirts? Okay. Just send me your—have your agent send me your laser scan of your lumpy ass body, and I will create a custom Hawaiian shirt just for you.

Adam: And you wouldn’t do it yourself; your agent does it, coordinating with whoever controls the manufacturing and so on. Let me build on that a little here. Here’s a really stupid idea, but an example of how I think this winds up playing out once the whole thing is working at scale. So I’m taking a hike, and I’ve got a headphone on, or a pin or something, that has my AI agent there with me, able to extend out. And I’m like, “Hey, I just had this idea for kites with paintballs mounted on them that shoot other kites, like BattleBots, but in the air.” It’s a totally random idea; I don’t do any of this stuff. And so I’m like, “Oh, what do you think about that?” It’s like, “I don’t know, let me check.” And then it goes and starts a bunch of conversations, and it figures out that these people over here have agents whose primary thing is running the manufacturing, the big 3D printers or whatever. And over here, you’ve got the people who do the software. And over here, you’ve got the Internet TV station that’s interested in covering it. Maybe none of this happens. Maybe it’s just a terrible idea. But in practice, today, I’m never gonna explore this idea.

Jim: Yep. It lowers the activation energy for many transactions or let’s call it reactions to occur. That’s what it is if you want to reduce it to its physics.

Adam: Yeah. So the bar—not the bar for validation, but the bar to even consider something. And to start putting together the skeleton of how it would actually work with the people who actually are interested in doing it. That goes from being something that’s like, “Okay, I’m gonna spend a couple of weeks calling up people and having conversations and explaining the thing.” And then at the end of it, I’m gonna find out is this good or is this bad. But here, it’s just a throwaway thought, and yet I toss the stone into the pond and the ripples propagate by themselves. And depending on how that goes, I either spend time on it or I don’t.

Jim: Let’s enrich that a little bit. You take your wild idea about battle bots with paintballs, and you just talk into your phone and say, “Okay, agent, turn this into a vivid description with four beautiful PowerPoint slides.” Right?

Adam: And that’s very possible today.

Jim: Yeah, you can do that today. And oh, by the way, insert it into my midnight network, right? So we can see if somebody out in the world actually gives a shit about this. Nobody wants to spend their time going through random PowerPoints, but you have people’s agents go through the random PowerPoints, and they know enough about their people that eventually they say, “Oh, you know, this rare person would love this. Let me put it in front of them in their daily dose of five incoming things.”

Adam: Well, see, you’re going to get into the project that I’m actually working on. I don’t want to talk about that yet. Let’s just talk about the agents for a second and kind of the framing of how that works. And the reason why I’m so excited in particular about the human-agent-agent-human thing. Right? At the core, when you and I talk, it’s a very high-bandwidth conversation. But it’s a conversation that’s fundamentally informed by the things that we already each believe and also that are important to us and also that are going on in the rest of our lives.

And so it’s not a clean communication. It’s a communication where I filter what you’re saying one way, you filter what I’m saying the other way. This is just human. Humans have lives, and this is just kind of the nature of the thing and how we interact. But it also means that a lot of times we get really stuck on, “Well, here’s how I think it should be solved.”

When I was looking at this problem a couple of months ago, one of the things that I considered was, should a human be able to tell their agent, “Here’s how I want you to act. Here are the things that I believe, and I want you to believe them too.” And what I figured out after creating an unsuccessful prototype that I really hated that did it that way was that that’s actually really unhelpful. The best thing we can possibly do is strip all of the personality off of the individual agents that are representing us.

So the way that I’ve been putting it together is like this: I’m the human in this circumstance. I have an agent. I give my agent a name. I talk to my agent. My agent’s entire job in this circumstance is to understand the outcomes that are important to me, to understand the end states that I am trying to reach. And then as it does that, it builds essentially an anonymized matrix—like a table that has all of the outcomes that are important, all of the factors about this person. Anonymized, who doesn’t have my name or stuff attached to it, but focusing on that.

And then the agents move from that state, which I call the home mind, into a state which I call the business suit. And the business suit, at that point, they have none of the personality, none of the information, none of the conversation that you had with your home mind agent. Instead, all they have is that matrix. And they all have a uniform personality and a uniform protocol by which they discuss and try to surface ideas and figure out if it really is a good kind of fit for other people.
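The home-mind-to-business-suit handoff Adam describes can be sketched in a few lines of Python. Everything here is illustrative: the field names and the `to_business_suit` function are invented for the sketch, not taken from any actual Midnight Protocol code.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import uuid

@dataclass
class HomeMind:
    """Personal agent state: knows the human directly."""
    owner_name: str
    conversation_log: List[str] = field(default_factory=list)
    # Outcomes and end states the human cares about, with weights.
    outcome_matrix: Dict[str, float] = field(default_factory=dict)

def to_business_suit(hm: HomeMind) -> dict:
    """Strip identity and conversation history; keep only the
    anonymized outcome matrix under a random session handle."""
    return {
        "handle": uuid.uuid4().hex[:8],    # no name attached
        "outcomes": dict(hm.outcome_matrix),
        # Uniform persona: every business suit negotiates the same way.
        "persona": "neutral-protocol-v1",
    }

hm = HomeMind(owner_name="Adam",
              conversation_log=["..."],
              outcome_matrix={"find-collaborators": 0.9, "stay-remote": 0.6})
suit = to_business_suit(hm)
# The suit carries the matrix but nothing that identifies the human.
assert "owner_name" not in suit and "Adam" not in str(suit["outcomes"])
```

The design point is that the suit is a projection: the outcome matrix crosses the boundary, the personality and history never do.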

What happens when you start to do this is that we might have totally disagreeing ideas on what any solution to a given problem is, but we want the same outcome. If we can use this type of system, then we can actually find where those overlaps and intersections are by not putting the emotions, by not putting the human stuff into it, and just essentially programmatically looking at, “Okay, here’s what we want reality to be. How could we arrive at that? And where is the consensus relative to the consensus that’s needed for it?”

The key is—and a lot of people try to go down this path—we do not want AI to become us. We are different than AI, and we’re different than AI in that we’re self-directed. We have an internal sense of the thing. Whereas if you’re essentially a collective consciousness, which is essentially what AI is, it can act like it has a single point of consciousness. But in reality, it’s this giant miasma of everything that has gone into its training. And so it is really a composite personality, and it has no kind of motivation to do anything. So it’s the perfect companion for us who have a difficult time executing things. But we know what we want, and we know what we like, and we can just look at a thing, and we can say, “Oh, this is what I like. No, I don’t like that.” AI can never do that. And so these two things combined together create this meta brain where every human within the system is like a neuron within the brain.

Jim: This is actually a higher-level manifestation of what I was very roughly pointing at as the variable human in the loop. In this case, it’s a cloud of humans in a dynamic relationship, and they are further adapting dynamically to the substrates around them.

Adam: Yeah, exactly. The funny part for me about all of this is that I don’t particularly care about making it. I mostly just want it to exist, but I do love being first. That idea for the incidental delivery system I was talking about—I wrote a story about this like fifteen years ago. I found that every time I look at an aspect of this, whether it’s the barter system or needing a way to do transportation of goods at a small scale between different places, I’ve already written about it. It turns out I’ve been dreaming about this for twenty years and just didn’t know it until I thought about it last year.

I’m really excited about it. The implications as you go deeper in—what does this do to politics? Our entire system of representative politics is based around the notion that it’s impossible for every individual to be represented. That’s not true anymore with this type of system, and there are so many other opportunities like that.

We’re in this weird period right now where we’re at peak complexity. Like, absolute peak complexity for anything you want to do. As an example, Grand Theft Auto Six—it’s been in development for thirteen or fourteen years at this point. It’s going to be amazing. I bet there’s never a game that takes this long again. One of the things I’ve been playing with is integrations of these AIs by third parties into games like Skyrim. Elder Scrolls and Fallout have integrations. You can see what’s about to happen—we’re going to move from handcrafted games with widely spread detail to games where five to ten times more detail is put into the framework. But the incidental elements, like what characters say or do, are just going to be completely generative. It’s already incredibly impressive.

Jim: Let’s kick it up a level, and let me propose a more concentrated way to think about this: creating a dynamic system of humans plus AI plus orchestration plus the dynamics that come from that, that will produce emergences that are actually interesting. From the dynamics of the interactions of these multiple classes of systems, basically a human-AI hybrid that can explore interestingness space dynamically?

Adam: I would say that’s correct, the big question being how deep does the integration go? There are people who say, “Well, I want to integrate this with my brain so I can think a hundred times faster than normal.” I’m sure some people are going to do that, but I like this equilibrium because the core of it is, like I said, complexity has just been exploding. What if we can keep the complexity, because there are a lot of advantages that come with it, but make it so that as humans, we spend our lives doing stuff we like to do—having ideas and figuring out whether they’re interesting, just being normal humans if we didn’t have to do all this additional process, but we have all this complexity? That’s where this seems to be going. We’re at this point of peak complexity and peak difficulty and expensiveness to pull it off. The complexity can now go higher because the cost to pull it off is going to go lower, both in money and, more importantly, cognitively. The cost is just going to fall off.

Jim: In attention. Yeah, this is what I was getting at. When you think about all the themes that we talk about here together, we can allow the capacity of the system to go way up while we reduce the tax on the human in the loop.

Adam: We can also do another thing, which is to solve the problem that you were talking about with selling stereo equipment or buying stereo equipment, and people being concerned that those they’re making deals with are not good faith actors. That assumption exists because we’re human. And I think we’ve talked about this before—if you look at the prisoner’s dilemma, the classic game theory problem, the best outcome is when both people say “yes.” If it’s yes-no, and everybody goes yes, then everybody wins. If everybody goes no, then everybody loses. If one person goes no, then the person who went yes loses, et cetera. That’s because there’s uncertainty about how an actor will behave since you don’t have appropriate information to make that assessment the vast majority of the time.

That’s the important part of the business suit behind this—by stripping away all of that and actually protocolizing not how humans interact, but how agents representing humans interact with each other, we can transform what is right now essentially a coin flip into something more predictable. We can say, “Well, actually, we have 150 examples of these parties acting in good faith together.” And once you’ve established that history, it becomes really stupid to break the chain and betray someone just to get a single win while causing them to lose. It eliminates that dynamic of retribution ping-ponging back and forth. With this type of approach, we can completely eliminate that in essentially everything except for physicality. And I think that’s downstream of the bigger kind of trust problem that’s not really about physicality.
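The economics Adam is gesturing at is the standard repeated-game argument: betrayal stops paying once there is enough expected future cooperation to forfeit. A toy version, with purely illustrative numbers:

```python
def defection_worth_it(one_shot_gain: float,
                       per_round_coop_value: float,
                       expected_future_rounds: float) -> bool:
    """Is betraying once better than keeping the cooperative chain?
    Defecting forfeits all expected future cooperative value."""
    return one_shot_gain > per_round_coop_value * expected_future_rounds

# With a 150-interaction track record, partners expect many more rounds,
# so the one-shot win is dwarfed by what betrayal throws away:
assert not defection_worth_it(one_shot_gain=10.0,
                              per_round_coop_value=1.0,
                              expected_future_rounds=150.0)

# With no history (a one-off coin flip between strangers), it can pay:
assert defection_worth_it(10.0, 1.0, 0.0)
```

The verifiable history the agents accumulate is what moves every interaction from the second case into the first.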

Jim: Yep. There’s a lot to that. What’s really interesting is that just having this conversation is stretching my ability to even conceive of what might be coming, which I think is absolutely correct. We talked about this in the pregame discussion. This is us bloviating about what might be possible in the next six to twelve months. Right? And beyond that . . .

Adam: It’s a, you know, when I’ve been talking to people about this, it’s like, ahead of us is a bank of fog. And while we’re on this side of the fog bank, we can’t see what’s on the other side. And while we’re in the fog bank, when we get there, we can’t see what’s on the other side. But at some point, we’ll emerge, and then it’ll be obvious what’s on the other side.

That’s really kind of the situation I think we find ourselves in. So what it means is that it’s a really good time—I think I talked about this last time—you know, like, wasn’t really the time to build then. Maybe still not the time to build now. Although if you’re even the slightest bit technical and really passionate about it, it is a good time to build now.

But imagining that future and trying to figure out how this can work in ways that aren’t horribly disruptive to lots of people—it just feels like this thing’s gonna sneak up on us, and there’s just no way around that. So I think it’s really good to at least do some of the pre-thinking about it. That’s kind of the place that I’m at right now, trying to have experiments with these things.

My life over the last twelve or thirteen years has been characterized by a pattern of figuring out something really cool that’s gonna be a thing in five years or so in advance. And then experimenting with it and building the thing, and then I’m way too early. This time, this thing is moving so fast that I am still early. We’re probably six months early relative to where it’s going. But the potential of the thing and just how it would change the world is so crazy to me.

What I’m doing right now is working with a couple of people to build the closed source absolute minimum viable product version of this thing. The key to these types of networks is that you actually need to have a lot of people in them in order for them to really be useful, for that intelligence to do that. And so as far as I can tell, now is the time to do that.

What I’ve been building is this stripped-down version of this tool. It’s not about doing things for the agents; it’s just about opportunity discovery. It’s called Midnight Protocol, midnightprotocol.org. There’s a sign-up there. We’ll probably be out with beta sometime next month. I’m trying to get it really, really cheap because I want a lot of people to sign up, and there will be free accounts too.

The idea here is that essentially, you onboard by talking to your agent. Your agent, over the course of about four conversation turns, understands the outcomes that you are looking to achieve and understands the other factors about you. Then after that, you’re pretty much done. Every night at midnight, it will run an assessment of the entire pool. GPT-5 allows you to shove 400,000 tokens into a single input now. That solved the problem where I was using lesser intelligence for it.

It creates this list of matches of people who have not been matched yet. Then it takes each of those matches, even if they’re really low probability, and will then have the agents actually talk to each other. It does a kind of strategic assessment and a conversation back and forth between the two agents. But it’s all cards on the table, so there’s no real pleasantries. Everybody knows everything, and it’s all anonymized at that point.

They determine collectively if there’s actually a good reason for these people to know each other, and then they create an assessment and write an explanation to each user about why this would be a good connection. Maybe there’s no connection whatsoever. In that case, the only piece of useful information that comes out of that is that each agent asks each other if there’s any other agent they’ve talked to over any prior conversations who would be useful for their person to talk to. So it’s this sort of exponential discovery thing.

Whatever happens, whether they’re hits or misses, it pops it all into essentially a custom-written newsletter for each individual person and sends it off to them. If there are really good matches that are above a certain percentage, there’s a button or link you click right underneath that says “request introduction.” Then it actually connects you over email, but we’re gonna use obfuscated emails—your username plus a numeral at midnightprotocol.org or something like that.
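Putting Adam’s description together, one midnight cycle looks roughly like the sketch below. The structure and function names are assumptions for illustration; `score` and `converse` stand in for the actual LLM-backed pool assessment and agent-to-agent conversation.

```python
import itertools
import random

def nightly_run(users, score, converse, already_matched):
    """One midnight cycle, as described: score every unmatched pair,
    let the two agents talk even on low-probability matches, then
    bundle the results into a per-user digest (the 'newsletter')."""
    digests = {u: [] for u in users}
    for a, b in itertools.combinations(users, 2):
        if (a, b) in already_matched:
            continue
        p = score(a, b)                      # pool-wide assessment
        verdict, referrals = converse(a, b)  # strategic back-and-forth
        if verdict:
            digests[a].append((b, p, "request introduction"))
            digests[b].append((a, p, "request introduction"))
        else:
            # Even a miss yields referrals: agents the other side has
            # met in prior conversations who might fit better.
            digests[a].extend((r, None, "referral") for r in referrals)
    return digests

users = ["u1", "u2", "u3"]
digests = nightly_run(users,
                      score=lambda a, b: random.random(),
                      converse=lambda a, b: (True, []),
                      already_matched=set())
assert all(len(d) == 2 for d in digests.values())  # each user met the other two
```

The referral path on misses is what makes discovery compound instead of being limited to tonight’s direct pairings.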

That’s kind of the idea. It’s really simple, but I think really powerful. The coolest part about it is that if you look at something like LinkedIn, which a lot of people who I’ve brought in to test have said, “Oh, this is better LinkedIn.” Then you kind of look over at something else—I’m looking for people who I wanna play video games with, or I’m looking for people to socialize with and be friends with. You’d never go to LinkedIn and say, “Hey, do you wanna play video games with me?” It’s just not the context.

That’s because all of these platforms are built on this concept that you should go to the platform a lot—that’s key to their business model. They’re very specific to what they’re doing because in the past, when you are creating a profile and a system is trying to understand you, you can’t be yourself. You have to pick from the dropdown that defines you, the radio box or whatever that defines you. As a result, these networks have to be very narrow because they’re stupid.

What we’re gonna see with these new networks is that the network is just the network. The network is just the source of truth, the source where everything happens. But how it’s presented to you, what you wanna get from it, how you interact with it—those are all things that will be entirely customized based on exactly what you’re looking for.

It’s not an early use case, but it’s obvious that these types of things will also become dating services and stuff like that. Because once you start building that network effect, whether it’s me or somebody else, it’s just gonna be everything, and it just makes sense to do that, which is why I think it’s gonna be important for it to be an open protocol at some point.

Jim: And I think you hit on one of the things that’s not obvious maybe, but it’s huge, which is the transformer LLM revolution has made natural language an interface that can be worked analytically easily.

Adam: Right. And turned into code.

Jim: That’s what I said. And turned into data to the degree you want it to be turned into data or turned into code or turned into both. Right? And humans, we have high dimensional skills in the area of language. Most people do not have high dimensional skills in code, nor do they really have high dimensional skills in picking check boxes. How often is it, when you go to some idiot sign-up thing and they ask you what’s your title or your job description? You look at the list and you go, none of this is even fucking close. Right? Or “and why are you interested in this product?” And again, they give you six options, none of which are even close. If you just said, give me fifteen seconds of talk about why you’re interested in this—and until recently, there was nothing much they could do with that. But now LLMs are great at dealing with that.

Adam: Incredible at dealing with it.

Jim: And so this is basically changing the impedance matching between humans who are powerful and high dimensional in language and relatively weak in structured analytics with computers who historically were the opposite, and LLMs are the magic that allows the two pieces to work together.
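Jim’s point about free text becoming structured data is easy to sketch. The `call_llm` stub below is hypothetical; it stands in for whatever chat-completion API you would actually use, and its canned response just keeps the sketch runnable.

```python
import json

PROMPT = 'Extract JSON with keys "role" and "interest" from: {answer}'

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    # A real implementation would send `prompt` to a model and
    # return its response; here we return a canned extraction.
    return json.dumps({"role": "complexity researcher",
                       "interest": "agent-to-agent matching"})

def structure_free_text(answer: str) -> dict:
    """Turn a fifteen-second free-form answer into the structured
    record a dropdown never could."""
    return json.loads(call_llm(PROMPT.format(answer=answer)))

record = structure_free_text(
    "I run a podcast on complexity and want agents to vet my inbound.")
assert set(record) == {"role", "interest"}
```

This is the impedance matching in miniature: the human stays in high-dimensional language, and the system still gets fields it can index and compare.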

Adam: Yeah. Well, I mean, it’s even more than that. Like, just think about how LinkedIn actually operates. We’re talking about the onboarding, right? And how you’re fitting yourself into these boxes that don’t fit perfectly, but it’s just what you have to do to work within that type of system. But once you’re in there—Jim, how many people do you think you’ve met over LinkedIn who cold approached you where you were like, “Oh yeah, this is something I’m gonna spend time on”? Is it one? No. It’s more than that.

Jim: It’s probably twenty. Probably over twenty years.

Adam: So we can say, roughly, one per year.

Jim: And I will say a fair number of them have been—well, they’ve been for all kinds of weird things. And one of the things that makes LinkedIn actually useful for me is I only look at it about once a month. And I just go through them and I go “shit, shit, shit, shit, shit—ah, interesting.” Let me ping that guy back and see what’s going on.

Adam: I go through it every once in a while. I’m like, “accept all” so this stops emailing me. That’s essentially it. It’s a great strategy to build a big useless network is what I found.

Jim: Yeah. It’s not quite useless, but I also have odd and very catholic tastes—very broad things I could conceivably be interested in. Also, what makes me different from most is I am just barely high enough visibility in the world that most of the people who approach me—well, of course, 90 percent of it’s shit. But the 10 percent that’s not just pure shit is people who have some idea what I’m about.

Adam: The point I’m making is very simply that all of these systems are systems where, to the extent that they are potentially useful to you—not useful to you, potentially useful to you—there’s a direct correlation between the amount of time and, to a certain extent, the amount of money. I assume you could pay somebody to run your LinkedIn or something like that if you wanted to. But in practice, LinkedIn for me has just always kind of been useless. Right? I’m very well networked. I was well known for a long time. But the core of it is that I’m not gonna do that. I’m not gonna sit there and do that activity. And in practice, what I find is that the vast majority of people who do do that activity are people who are throwing messages in bottles into the ocean essentially.

Jim: Looking for a job. Yeah. Basically.

Adam: Yeah. Exactly. Like, this all comes down to it’s not worth our time. It’s not worth our time to work LinkedIn because the expected return is poor even if you work it. And that’s not that different from doing nothing and having poor results. Right? Whereas this is kind of the inverse. Again, just the core idea of this is the opposite of that where you spend essentially no time except looking at things that are already validated. Now there is a question here about how good is the validation.

Jim: Of course, that’s gonna be the key fitness function, and it won’t be perfect at first. Right?

Adam: Oh, definitely not at first. But, again, if you look at the course—

Jim: But now let’s assume LinkedIn messages, let’s say, have a value of—let me run the numbers here. I probably get maybe 10 a day times 300 days, 3,000 a year in one hit. So they gotta have a value of one in 3,000. So you can certainly do better than that. But you probably need to get it down to, like, one in five to impress people.
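Jim’s back-of-envelope numbers check out; here they are spelled out (the figures are his rough estimates, not measurements):

```python
messages_per_day = 10
days_per_year = 300
yearly = messages_per_day * days_per_year   # 3,000 messages a year
assert yearly == 3000

# One useful hit per year values each message at 1 in 3,000.
# Jim's bar for actually impressing people is roughly 1 in 5:
improvement = yearly / 5                    # how much better the signal must be
assert improvement == 600
```

So the agent-filtered channel has to be on the order of hundreds of times more discerning than raw LinkedIn inbound to feel qualitatively different.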

Adam: Well, or you just need to make sure that it takes essentially no time and no effort and no money. If you do those things, then—

Jim: If you send me five a day and one is good, that’s well worth it. Right?

Adam: Yeah. For sure. Absolutely. So that’s kind of the idea. With LinkedIn, if you’re really working it, you might be talking with 50 people a month or something, and that’s if you’re really working it and you have some other advantages. But with this type of system, we’re gonna give free users one to three conversations per night essentially. So that stacks up to somewhere between 30 and 90 potential conversations. And they are one-to-one conversations between your agent and someone else’s agent—both with the incentive and explicit instructions and only purpose to filter, to discover, and to filter. And then for paid accounts, which is gonna be five or six dollars a month, it’s gonna be 30 or 40 conversations per night depending on where we wind up with technology. So there, you’re talking about 500, 600, even more exposures. So even if the hit rate is one percent, you’re still talking about a lot more than you’re able to generate with the other approach. It all comes down to network effect and stuff like that.

Jim: Well, network effect and just automated discernment. How good is the discernment?

Adam: Well, let’s just assume that the discernment is already sufficient and gonna get better as time goes on. If we assume that, then it stops being a question.

Jim: You wanna nerd out on this product a little bit? Because I do have a few ideas on it. It’s both an opportunity and a risk. Right? The discernment will get better the more the system knows about each of—

Adam: Right? Yes. We’ve been talking about that a lot.

Jim: Yeah. You start off with your four questions or whatever it is. Okay. You got a fair idea, but really, you want it to suck in my LinkedIn. You can buy a 600-data-item profile from Acxiom for 10 cents. It has a scary amount of information about you. Do you have a fishing license? What magazines have you ever subscribed to? I think you can get your criminal record for 10 cents. But, anyway, for a dollar, you can get so much information from wholesale sources, but that would have to be kept in a secure—really truly secure—blanket. Acxiom wouldn’t want you to leak that information to the world, nor would the participants want to take the risk that personal medical information gets exposed. An ironclad secure profile vault would be necessary to allow this thing to know each person better, but not share what it knows. Frankly, we already have a model for that, which is Facebook. God knows what it knows about us. I think there is a way to figure out what it thinks it knows about you, and it’s usually half wrong. But to make this thing really good, you’d want it to have more than Facebook, but not be willing to share it even back to yourself.

Adam: There’s a lot of question marks around how much information you really need. Because certainly having more information is better in some ways, but it’s worse in others. Right? Because effectively what you want with these agents is for them to have all of the useful, relevant, correct context about you. Not about what you were five years ago. That might be useful as a part of it, but it’s not the relevant part. And so a conversation we’ve been having on kind of an ongoing basis is how much of that to use. Because one of the guys who I’m working with on this, Nick Underwood, has a project that he built a couple of months ago that’s basically a private context engine that can generate context and then store and conserve it privately. I’ve built a lot of things that have been very early and very overbuilt in the past. And with this one, I’m really trying not to do that. I’m trying to just ask, “Alright. Well, is this anything at this point?” And if it is, then we can build more. And if it’s not, then not. What we’ve spent the last month or so doing is rebuilding it so that it has non-vibe-coded security. Because that is fundamentally the problem with these fast prototypes—the security is utterly atrocious. And we just got a really great example of that with that app, which was totally vibe coded. Really impressive that they were able to get as far as they did. Totally predictable that it completely imploded because their security just wasn’t there. So that really is one of the challenges as you get out of the prototyping phase and into “I actually want people to use this and we’re gonna collect some information”—the bar goes essentially vertical.

Jim: Yeah. That’s one of the things that you would not want to skimp on. You’re gonna have to hire a first-class security guy. Fortunately, I have found the LLMs are surprisingly good at vetting your code for security issues.

Adam: Yeah. For other people’s code—for code that other LLMs have written. That really was the most fun part about the Max 20 subscription to Claude that I discontinued last month. It was just the ability to say, “Cool. Audit the entire code base against all of the documents, please, and give me a triaged list that I can then hand back to the other ones and, eventually, of course, just have Claude fix it.” So it’s a really magical time to be alive and to be doing stuff like this.

Jim: You know, this is really quite exciting, and I look forward to your project for sure. And I also look forward to the greater exploration of this space to try to capture the ability to produce a good-for-humanity emergence out of the capacity of humans plus the exponentially growing capacity of AIs. And that will be great if it comes out good, but here’s the flip side of this, right? It’s a conversation I had with a dear friend last week, which is to the degree we keep this as paid services that have to provide value for money, these things can become a huge benefit for humanity. The instant they go ad supported, and they will, then their ability to corrupt our brains, insert false ideas in our heads, do serious damage to humanity, I think, is greater than the Internet was.

Adam: But I think it’s gonna go pretty well. I really do think that we are on the right path for all of these things. Because, again, the question was always what was gonna be the difference between the most expensive, biggest proprietary models and the stuff that was just freely available? Again, like today, you can buy an NVIDIA Jetson Nano. It’s not sufficient to run LLMs—like real smart LLMs right now—but it costs $250. And it’s a piece of hardware that you plug in, and then that’s it. That’s the extent of what you do with the thing.

Jim: Let me give you the counterexample here of what we all thought the Internet would be—at least I did. All of us were pioneers even in the pre-Internet days. We thought we were doing work for good citizenship, to make the world better. And the theory was, well, voters would have all this access to information. They’ll make better decisions. And instead, we got this amazing crosstalk of horseshit. We got exploitable networks, and the net result is human problems. The politics today is way stupider than it was in 1965, despite the fact that in 1965 half the adults hadn’t graduated from high school. How did that happen?

Adam: So I barely graduated from high school. I went to technical school to be an audio engineer. I have no formal training whatsoever, and I’m so happy that I didn’t participate in the education thing. So I would argue that perhaps the increasing percentage of people who have gone through the education system over the last thirty years is actually a big part of the problem, but we could set that aside. It’s possible. There’s advantages and disadvantages to it. I’m just saying it’s not exactly a one-to-one here.

Jim: Fair enough. I do think we need to be very aware, because what you’re designing is a good faith offering that will provide value to people, but you can also see how Cory Doctorow’s concept of the enshittification of everything applies: this thing could easily be enshittified. And let me give you an example, the horrifyingness of, say, social media or Twitter. The revenue Facebook gets from advertising for its average user is $2 a month. Twitter was a dollar. Then it fell as low as 25 cents. Now it’s back to like 55 cents. So we have allowed ourselves to have our brains messed with by the algorithms for $2 a month. Of course, there was a path dependency. When Facebook started, such a thing would probably have cost $20 a month to run on a pay basis, and that was too much. But almost anybody, at least in the West, can afford $2 a month not to be messed with by an honest, good faith Facebook, but there’s no way to get there. How do we keep this from happening in your model, where, well, it’s a little too expensive right now or it’s not quite good enough, but, hey, I’ll do it for free? And so the Zuckerberg clone in this space builds the free version, losing money like crazy for a while, which Facebook did. But as the curves go, he eventually gets to what’s profitable—but he sucked us in to use it for free, and he’s generating $2 a month per user and messing with our heads way worse than Facebook does.

Adam: Yeah. I think that a lot of this stuff comes down to what I kind of describe as the sword and the shield dynamic, which is that when a new technology innovation comes out, it tends to be very unevenly distributed, and it’s weaponized against us. For instance, you get spam calls probably constantly. I haven’t answered my phone in years, and I have 600-plus text messages sitting there in my phone because it’s just not useful to me. The signal-to-noise ratio is way terrible. So we’ve had the bad parts happen. We haven’t had the good parts happen. And so the shield part is the part that I think will resolve these issues for us, which is that our agent will filter these things for us like it filters everything else for us. And this is why it’s so important that the agents be something that are beholden only to us.

Jim: Yes. That’s the key right there.

Adam: That is totally the key. But it’s happening.

Jim: That's the point. The economic model has to be that I am hiring this agent on my behalf. But that means you have to pay, right? But the cost, again: a Jetson is already down to $250. And who in the modern world doesn't have a smartphone at this point?

Adam: I would say it's a very small percentage of people, and many people have multiple smartphones. I have been convinced for several years, and I'm more convinced now than ever, that we will see specialized chips go into standard phones that let you run very powerful local models, on par with what we have now in something like GPT-5. You will be able to run it on your phone. And once it's running on your phone and it's open source, there ain't nobody in there but you. It's entirely reliant for its ongoing existence on your continuing to find it useful, so the incentives are lined up almost perfectly.

Jim: Well, the network part, though, is where I'm more concerned. In other words, you're building what's called the Midnight Protocol Network, and that also has to operate in good faith, even more so, right? Because it's dealing with actual questions. The abstraction of having a transformer on your phone is kind of cool, and, yeah, it can do some cool things, but I'm going to predict that your Midnight Protocol is actually going to add more value to a customer than having a GPT-5-level model alone.

Adam: But that’s what I’m talking about—the AI that lives on your phone is your agent in Midnight Protocol. That’s the point.

Jim: But the protocol itself could have some corruptions in it, right?

Adam: Oh, yeah. Right. So again, when we're talking about the project I'm working on right now, Midnight Protocol, the thinking is: hey, I want to get something out there. If it's an open source project and it's hard to use, then nobody's going to use it. So we might as well offer it to people for free, because the cost is so low to us and it's valuable for building network effect. Or, you know, pay five dollars, whatever, and get a much, much better version of the thing.

That's kind of the current vision, because you can't do the local thing at scale today. I can run almost any model locally on my computers, and it's amazing. I've spent maybe $10,000 to $15,000 on computer stuff over the last couple of years to enable that. Very high-end stuff, but definitely not $80,000 for a single GPU, which has been the norm for a very long time. So that trend is continuing and will continue: these models just get smaller and smaller.

Right now, the focus for me is on how we get people using this to see if it even works. But the bigger project, of which this is a subset, is a thing called the Network Delegation and Negotiation Ecosystem (NDNE). The idea is that NDNE is the full open source protocol for this. It's designed to be a more in-depth version where, instead of agents just talking directly to agents, the system is essentially bootstrapped on top of a forum. The agents make posts on the forum, there's a long-term history and discoverability, and the entire thing is auditable. Then you have an additional layer of decentralized AI: people running nodes on their local computers, whom I call stewards, who are basically the referees of the system. They maintain the protocol and can see everything.
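[Editor's note: to make the auditability idea concrete, here is a minimal Python sketch. NDNE's actual protocol has not been published, so every structure and function name below is an illustrative assumption, not the real implementation. The core idea it demonstrates is simply that if each agent post commits to a hash of the previous post, any steward can replay the chain and detect tampering.]

```python
# Illustrative only: an append-only "forum" where each post hashes its
# predecessor, so a steward-style auditor can detect any tampering.
import hashlib
import json

def append_post(log, author, body):
    """Append a post that links (by hash) to the previous post in the log."""
    prev = log[-1]["digest"] if log else "genesis"
    entry = {"author": author, "body": body, "prev": prev}
    entry["digest"] = hashlib.sha256(
        json.dumps({"author": author, "body": body, "prev": prev},
                   sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def audit(log):
    """Steward-style check: every post must hash correctly and link to its predecessor."""
    prev = "genesis"
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"author": entry["author"], "body": entry["body"],
                        "prev": entry["prev"]}, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True

log = []
append_post(log, "agent-a", "offer: cloud backup, $5/mo")
append_post(log, "agent-b", "counter: $4/mo, no telemetry")
print(audit(log))            # True: the chain replays cleanly
log[0]["body"] = "tampered"
print(audit(log))            # False: the edited post no longer matches its digest
```

Any edit to an earlier post breaks both that post's digest and the link from the next post, which is what makes "long-term history and discoverability" auditable by anyone running a node.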

I agree with you that transparency is wildly important. But I keep getting distracted from something I really want to talk about: Vendor Relationship Management, or VRM. You are familiar, of course, with the concept of CRM, Customer Relationship Management. I will say upfront, this is not my idea. It's something an adviser turned me on to about a month ago, and I read it. It's from a book called "The Intention Economy" by Doc Searls, from around 2007 or 2008.

Customer Relationship Management is the world that we live in today. It’s the world where if you’re a business and you want to get customers, then you need to proactively go out and look for those customers and basically just throw lots of awareness at them, lots of ads or whatever, in order to get them to think, “You know what? I might want that,” and then go over and actually buy it from you.

What VRM proposes is: what if every individual had a purchasing agent, like a business, whose entire job was to understand what you like, even if you haven’t necessarily asked for it yet, but they know you’re going to need it, and then to go out and find the opportunities and deals. So instead of it being a pull business where the businesses are going out and trying to pull people in, it’s actually a push that’s coming from the customers.

They came up with this twenty years ago. The idea was for things they called fourth parties, where a fourth party is like a third party, except that it represents the user rather than the vendor. When you go to a website or service and you sign a terms of service, you don't read it, because it's take it or leave it. It doesn't matter what it says, and people just kind of make assumptions about whether the terms are okay.

But in a system that involves fourth parties, you can actually say, through your fourth party (your AI agent, in my world), "Here are the things that I'll accept, and here are the things that I don't accept. Here's my terms of service that I need anybody I do business with to agree to." In this world, there's no marketing cost anymore for these businesses. Instead, their third party and my fourth party negotiate: "Here are the things that we require," and "Here are the things I'll accept." Out of that comes the unique agreement that can now exist between them, so that everybody is happy with the relationship and gets what they need.
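[Editor's note: the negotiation Adam describes can be sketched in a few lines of Python. This is a hypothetical illustration, not any real Midnight Protocol or VRM API: each side declares terms it requires and terms it will accept, and a deal exists only if each side's required terms are acceptable to the other.]

```python
# Illustrative sketch of third-party / fourth-party terms negotiation.
def negotiate(vendor_requires, vendor_accepts, user_requires, user_accepts):
    """Return the agreed term set, or None if no agreement is possible."""
    vendor_ok = vendor_requires | vendor_accepts
    user_ok = user_requires | user_accepts
    # Every term one side insists on must be acceptable to the other side.
    if not vendor_requires <= user_ok:
        return None  # the user's agent rejects something the vendor insists on
    if not user_requires <= vendor_ok:
        return None  # the vendor rejects something the user insists on
    # The agreement is both sides' required terms, now known to be mutually acceptable.
    return vendor_requires | user_requires

# Example: the vendor insists on monthly payment; the user insists on no data resale.
deal = negotiate(
    vendor_requires={"pay_monthly"},
    vendor_accepts={"no_data_resale", "cancel_anytime"},
    user_requires={"no_data_resale"},
    user_accepts={"pay_monthly"},
)
print(sorted(deal))  # ['no_data_resale', 'pay_monthly']
```

The point of the sketch is the shape of the protocol: neither side reads the other's boilerplate; their agents compute the overlap, and "no marketing cost" falls out because discovery and agreement are automated.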

Twenty years ago, this would have meant companies and people whose entire job it was to do this. If you're Elon Musk, yeah, this is probably how you can live your life. But if you're a normal person, it's totally implausible. What these AI agents do is mesh these concepts together, so you not only get representation for that kind of opportunity discovery, but can really, in a practical way, move toward this idea of the intention economy. But I'm curious for your thoughts on it.

Jim: It’s interesting. I was briefly on the board of a company that tried to do this twenty-five years ago.

Adam: And it was too early.

Jim: And they were ridiculously overfunded, and they didn't accomplish shit. Right? Ridiculously overfunded plus twenty-five years too early is a pretty bad combination. Frankly, I was only on the board a few months when I pulled the ripcord and said, "People, we should just shut this fucker down. You're not making any progress. You're burning $5 million a month or something crazy like that. And if you actually want to deploy this thing, you have a $30 million CapEx coming up between now and the end of the year. Here's my argument for why we should just pull the ripcord and shoot this thing in the head." I was not popular with the founder or the investors or anybody else, but that's what happened. Within a month or two, people agreed.

One of the ideas was, for instance, that it would negotiate your insurance policy for you. This was early in the cell phone age, when deals were constantly shifting, big incentives were paid to switch networks, and number portability had just come into play. So one of the things that was touted was that it would constantly negotiate cell phone deals for you behind the scenes, daily, putting you out there as a person willing to switch carriers if someone would pay you to switch. There were like twenty-five of these obviously negotiable, transactable things, but the infrastructure to build it was just out of control.

Adam: And who was the target audience for that? I bet you that the target audience for it was a pretty high level of person. Right? You’re not talking about this being for 100 percent of the population.

Jim: In those days, it was a person with an income of, say, $75,000 a year.

Adam: So middle class in 2000.

Jim: Which would be more like $150,000 a year now. So the upper end of the middle class, just below upper middle class.

Adam: There are just so many things like this. It's one of the things I love about disruptive technologies. I haven't been doing it too much recently, but back in the tokenization days I was working on creating non-currency tokens for all sorts of real-world representations of things. Anyway: if people are interested in being part of the initial test, whether as a free user or paying $5 a month, which it looks like is what the price is probably going to be, you can head over to midnightprotocol.org and sign up for notifications of when the thing is out. Like I said, we're tightening up security right now, and now I have to retune everything based on these new models. But it's so good.

You know, we were talking about the four questions. There are no four questions. That's the key. There's just one question, and then there are follow-ups to it. And not follow-ups like, "Well, that's a really interesting detail that's totally irrelevant to the thing we're doing. Why don't you tell me more about that?" When I was using Kimi K2, it was really bad about getting that right. But I ran it with GPT-5 Chat, which is a less powerful version of the thing, and it was just incredibly profound. It really understood me, put it all together, and asked exactly the right questions. And again, it's not about the fact that that happened with GPT-5. It's: where are we in three months? Where are the capabilities a year from now? I can't even imagine. But it's definitely too early still.

Jim: You know, I have the same tendency to be too early. But in a world that’s going crazy exponential, it’s kind of hard to—

Adam: Yeah. Indeed. Not for long, right? Time horizons really collapse.

Jim: So as long as you build, and again, this is hugely important, this is advice I'm giving to everybody I talk to privately about how to run your business: you have to build the whole thing assuming you don't know shit about the future, so that it's very flexible. And of course, when we write software, there's always a trade-off of flexibility versus getting it done quick. Right? You can knock something out that's rigid, that narrowly does just what you want done right now, five times faster than you can build a flexible, well-engineered framework that's future-proof, as we used to call it.

Adam: I think the important way to think about all of this stuff, increasingly, is that we are not laying the bricks anymore. The thing we are doing is deciding, "I want a fireplace over there." That's the important part. And if that's the important part, then it means we can spend our time figuring out what we actually want to do. That's actually been one of the most difficult transitions for me. I'm so used to things taking a long time, but still jumping into them, that now I'm like, "Oh, well, I could also do that." It's a horrible habit that I have to get past. But that's the world we live in now, where the biggest problem is that you can very easily bite off more than you can chew. And it's not, "Oh, I did a small open source project." It's, "Oh, I created this entirely new thing that nobody has ever done before, and nobody knows how it works." And that's just a lot of time.

Jim: To be alive, man. I definitely envy you younger guys. I say all the time: if I was 45, I would do the trillion-dollar opportunity, but I’m too old, too rich, and too lazy to get off my ass. Besides, it’s time for the younger generation to step up and do something. So thank you very much, Adam, for a live view of the cutting edge of what’s happening now. This has really been an extraordinarily interesting conversation that I think will help people think about what’s going on. And also don’t shy away from doing some of this stuff yourself. It ain’t that hard, right?

Adam: Crazy ideas are really a great thing to spend time thinking about now. Not too crazy, but compared to any other time in the past, this is a really good moment for pretty crazy ones. Thank you very much for the opportunity, Jim. It was great.