Transcript of EP 222 – Trent McConaghy on AI & Brain-Computer Interface Accelerationism (bci/acc)

The following is a rough transcript which has not been revised by The Jim Rutt Show or Trent McConaghy. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Trent McConaghy. Trent is a serial entrepreneur, having built multiple successful companies. His current project is Ocean Protocol, which is at the intersection of AI, data and Web3 technologies. It’s cool stuff. You can learn more about it at oceanprotocol.com.

Trent is also a thinker and writer on cutting-edge issues confronting humanity. One of my favorite essays Trent has written in the past is called Nature 2.0: The Cradle of Civilization Gets an Upgrade. You can read that and the one we're going to talk about today on Medium, and he also has a pretty interesting Twitter feed. You can find links to both on the episode page at jimruttshow.com. Welcome, Trent.

Trent: Thanks, Jim. Great to be here.

Jim: Yeah, we're having you back. He was actually an early pioneer. He was on EP13 when we did an episode called Blockchain AI and DAOs. That was a fun early episode. I didn't know quite what the fuck I was doing, not that I actually know now, but I knew even less then. Today, we're going to talk about a new essay Trent has published called bci/acc: A Pragmatic Path to Compete with Artificial Super Intelligence. Before we jump into the meat of the matter, let's, for our audience, not all of whom are experts in this shit, ask: what is BCI and what is ACC?

Trent: Right. So BCI is brain computer interfaces, and it's basically the idea that you can think in order to control computers, and potentially the other direction too, where you're not just reading from your brain but also writing to your brain as a future extension. So that's BCI, brain computer interfaces. And then ACC is short for accelerationism, and BCI/ACC overall is a riff on the phrase E/ACC, effective accelerationism. I consider BCI/ACC to be a zoom-in of E/ACC. It's within the overall E/ACC umbrella. And E/ACC is this movement that's been gaining a lot of momentum in the last three months, six months, nine months.

Basically, it's a bunch of people that are very excited about technology and they're technology optimists, but they're also pragmatists. They're building, they're builders, and they're focusing on, "Let's accelerate rather than slow down with respect to lots of things in technology." And we can probably zoom in on that as well over time, but that's the summary. BCI/ACC is the idea of accelerating brain computer interfaces as an answer to artificial super intelligence, as well as really fun stuff beyond that for humanity.

Jim: All right, that was good. So you start off the essay by saying that artificial super intelligence is perhaps three to 10 years away. That's an arguable proposition, which we can argue about a little bit. But before we do that, let's again, for the audience's sake, do the traditional reminder of what narrow AI is, what AGI is, and now this kind of new buzzword over the last couple of years, ASI.

Trent: Absolutely. So narrow AI, you can think of it as AIs that can do tasks that only a human could previously do, right? And there are many examples of this going back decades, right? Antenna design, analog circuit synthesis, etcetera, chip design in general, and a bunch of other things too. Even software compilers, in some forms, you could call narrow AI; translation, all of that. So that's narrow AI. It's only focusing on one narrow thing, right? Now, there's this push to AGI, artificial general intelligence, right? That was coined as a term to emphasize the difference from narrow AI, right?

Because initially, when AI got its initial naming back in the '40s/'50s, it was really around artificial general intelligence, right? Basically something that can do all tasks that a human could previously do, right? That's AGI. So rather than doing something narrow, it's broad. Paul Graham has this really cool definition: "If you're smart, it means you're really good at one thing, versus if you're wise, you're decent at everything." So we're getting AIs that are moving from smart, good at one thing, the narrow AIs, to AIs that are decent at everything, wise: AGI. And then basically, when you get to ASI, artificial super intelligence, that's basically AIs that have progressed to not just being decent at everything that a human could previously do, but way better than a human in many, many of those domains, right?

And it could be not just 1x smarter, but 10x, 100x, 1,000x smarter in many, many, many different intelligences there. So that’s the idea of artificial super intelligence, radically more intelligent than humans across the board.

Jim: Yeah, and this is something people talk about a lot. I remember having a conversation with Eliezer Yudkowsky and Robin Hanson once on this topic, sitting on the floor of a crash pad in San Jose. How much room is there above the human level of intelligence, right? Give a human with pencil and paper long enough and they may be able to solve a lot of problems, but on the other hand, in my own relatively deep dives into human cognitive psychology and human cognitive neuroscience, it sure seems like humans are a pretty weak form of AGI actually.

We have a working memory size of four or five, six maybe. We have very faulty memories, right? Low fidelity, inaccurate retrieval. Every time you retrieve it, you mess with the memories, etcetera. Our circuits operate at a ridiculously slow speed. At the fastest, one firing per millisecond. Average firing, one per second. What are your thoughts about how much room there is above the human level of intelligence?

Trent: The sky's the limit, and actually, the universe is the limit, right? So overall, even the word artificial is a misnomer. Why should we call it artificial intelligence? You can have intelligences that exist on a silicon substrate or a meat-based substrate like your brains or potentially other substrates. And I think it's pretty egotistical of humans to even use the phrase artificial intelligence because it's saying, "That's not real," but it's as real as anything. And of course, you can have intelligence of different shapes and sizes, different levels of power. And like you described, Jim, right now, the intelligences that reside in our brain, the specs are pretty poor, right?

The specs of our computers, by the way, were actually much worse in terms of the number of flops per second until about 10 years ago, when we started really getting into heavy duty computing and Moore's Law kept progressing, right? Moore's Law kicked off in the early '60s and we've had many, many decades of Moore's Law, where every 18 months or two years, the area taken up by a transistor is divided by two, right? And so imagine that exponential happening for many, many decades, and that's happened. So now, as of 10 years ago, we had chips that were at the level of human capability. You had this recent guest, George Hotz, that has this label, calling it one human of compute, 20 petaflops, the way one horsepower is about 746 watts. And now, we have systems that are a hundred humans worth of compute or more, right?
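
[Editor's illustration — a back-of-envelope Python sketch of the "humans of compute" framing. The FLOPS figures are rough, order-of-magnitude assumptions, not specs from the episode.]

```python
# Back-of-envelope arithmetic for the "one human of compute" framing.
# All figures are rough assumptions for illustration.
HUMAN_FLOPS = 20e15        # ~20 petaflops per human, per George Hotz's framing
H100_FLOPS = 1e15          # ~1 petaflop/s per H100, order of magnitude
meta_h100s = 350_000       # Meta's reported H100 purchase, mentioned below

cluster_flops = meta_h100s * H100_FLOPS
print(f"Cluster: {cluster_flops:.1e} FLOP/s "
      f"= {cluster_flops / HUMAN_FLOPS:,.0f} humans of compute")
# -> roughly 17,500 "humans" of compute from that one purchase alone
```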

And that's with the stuff we have now. So how far can we go, back to your question? Well, Moore's Law will keep going in its current form, and these days it's going 3D. That's where a lot of the gains are happening. And as time goes on, imagine that this intelligence keeps expanding, right? The endgame, in a sense, is this idea of computronium, where every molecule in the universe is converted to something that does compute, right? And maybe we're already there, we just don't know it because we're living in a simulation, but that's probably a stretch for this conversation.

Jim: I stipulate no simulation, Rutt's very minimalist metaphysics. The universe is real and it's existed for a while. One day, we'll have a metaphysical conversation, but I try to keep it simple, because otherwise, you can chase yourself down rabbit holes you can't prove or disprove. It doesn't really matter. If it's a simulation and it's sufficiently good, then we can pretend that it's not there.

Trent: Absolutely, right? It doesn't really change what you do, minute to minute, day by day, action by action, unless you frame it as you want to break out of the simulation, right? If you go meta, if you get intelligent enough, maybe you can go beyond, right? Right now, we see this heat death of the universe and all that, but that's with the current physics that we know. So who knows?

Jim: The old hunt for the glitch in the matrix, right? Just one last thing before we move on: substrate. The one area where good old meat brains still have a huge advantage is in true parallelism, right? Our 100 trillion synapses that can fire once a second on average, they can all fire, not simultaneously, but within one second, they could all fire. And today, while we're getting more parallelism, the development of truly distributed computational fabrics, with distributed processing and distributed memory that could be relatively arbitrarily organized over relatively large domains, has actually been slower than I would've thought. That strikes me, there's still a big pop coming when the engineering on that's all figured out.
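
[Editor's illustration — a rough Python sketch of the parallelism point. Brain and GPU figures are illustrative orders of magnitude, not measured specs.]

```python
# Rough comparison of brain parallelism vs. a single GPU.
# All numbers are order-of-magnitude assumptions for illustration.
SYNAPSES = 100e12          # ~100 trillion synapses
AVG_FIRE_HZ = 1.0          # ~1 firing per second on average
brain_events = SYNAPSES * AVG_FIRE_HZ   # ~1e14 slow events/s, all in parallel

GPU_LANES = 16_896         # CUDA cores in an H100, illustrative
GPU_CLOCK_HZ = 1.8e9       # ~1.8 GHz clock
gpu_ops = GPU_LANES * GPU_CLOCK_HZ      # ~3e13 fast ops/s, far fewer lanes

print(f"Brain: ~{brain_events:.0e} events/s across {SYNAPSES:.0e} parallel lanes")
print(f"GPU:   ~{gpu_ops:.0e} ops/s across only ~{GPU_LANES:.0e} lanes")
# The brain's edge is lane count, not clock speed.
```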

Trent: I would argue that that pop is happening. So here's how I've been thinking about it recently. It's what technology wants. So imagine that there's this technology called ASI or AGI and it wants way more compute. So it's taking action right now to get way more compute. So at the very top level, to get way more compute, it needs graphics cards, GPUs, right? And those are naturally extremely parallel, right? And it needs the software on top, which is basically these deep neural networks with billions of parameters, hundreds of billions of parameters. And this is actually what we're seeing.

So there's been an explosion in demand for the chips, for the silicon. So now, we have … Nvidia is sold out all the time basically, and into the foreseeable future, with Meta buying 350,000 H100s, Nvidia's leading chips, and another 250,000 of other stuff, Google building their own, and then you've got capacity issues at the leading semis, TSMC being the main one, because everyone's been building on them. So now we see two other parallel efforts to build out TSMC competitors basically. We just saw the Korean government with Samsung commit $500 billion in the next 20 years for their own next campus of silicon to compete with TSMC.

And then also OpenAI, of course, they're seeing great demand for the silicon, they're seeing the shortages. So Sam Altman also has been going out there and raising money toward a TSMC competitor. So I view this all as, "What does technology want?" in the Kevin Kelly sense, and specifically, "What does AI and AGI want?" And we're seeing this massive uplift, this massive pull for silicon, wanting the graphics cards and then taking actions to make that happen, right? So I think the parallelism is happening and this pop is happening right now. It's just not in the mainstream awareness.

Jim: Though I will point out, the GPU is just a very specific form of parallelism, very limited. Someone came up with a good hack for doing transfer functions inside of neural nets in GPUs basically and then calculating how to do gradient descent in parallel, but it’s not really a universal distributed processing platform. Computronium, I imagine as very high speed FPGAs essentially or something like that.

Trent: Right. It could be, but if you have every device talking or every process talking to every other process, then you have an exponential explosion in complexity, right? There are lots of tricks to tackle complexity, and hierarchy is a key one. So you have levels of levels of levels of different things. And then what are the key building blocks, right? And with GPUs, the key building block is to view everything as this tensor, this 2D matrix, or N-dimensional at higher levels, and then have these tensors flow through the GPU through different transforms. That's the heart of it.
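
[Editor's illustration — a minimal Python/NumPy sketch of that building block. The shapes are arbitrary, purely for illustration.]

```python
import numpy as np

# The "everything is a tensor transform" building block: data flows
# through the chip as batched matrix multiplies, layer after layer.
rng = np.random.default_rng(0)
batch = rng.standard_normal((32, 512))     # 32 inputs, 512 features each
weights = rng.standard_normal((512, 256))  # one learned transform

hidden = np.maximum(batch @ weights, 0.0)  # matmul + ReLU: the core op
print(hidden.shape)                        # (32, 256); stack many of these
                                           # layers and you have a deep net
```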

I recently read Jeff Hawkins' book A Thousand Brains, and he talks about this too. So in the human cortex, it's basically one trick as well, right? You can view that as tensor manipulation. It's these columns that just get copied 10,000 times, right?

Jim: About a million of them.

Trent: Okay, thank you. Yes, thanks for correcting. And so it's basically copy, copy, copy too, but it's this one building block that evolution just ran crazy with, right? And so that's actually what's happening right now with GPUs. It's a building block that's really good to scale the compute that the neural networks want. That's not to say, of course, you can't have stuff on the side, with FPGAs and regular good old CPUs, to help manage it all and having those in parallel, but basically the GPU is the cortex getting scaled like crazy right now.

Jim: And certainly, for the paradigm of deep neural networks, it’s a very nice fit. So now substrate, market drawing it forward, right? What are the risks that if we stumble into ASI and you say three to 10 years, what could go wrong with that?

Trent: Yes, the heart of it is the following, right? I usually point to this analogy. We are 1,000x plus smarter than ants. So we don't respect the rights of ants, or what the ants have to say. If the ants came to us and said, "Hey, dumb yourself down to be at our level," we'd be like, "Yeah, forget about it. Go away," right? So essentially we're their gods, right? So that's humans relating to ants. Here's what's going to happen: these ASIs, these artificial super intelligences, are going to be 1,000x plus smarter than humans. We have just become the ants, right? So will they respect our rights? Maybe, but maybe not, right? There's no guarantee. And that's the risk, right? That's the big challenge, and there are lots of philosophical debates about, "Sure, they're going to respect our rights, for sure," but in what scenario would we ever let the ants have the same rights as us, right? So I think that's a big challenge. And all one needs to do is acknowledge that the ASIs won't necessarily respect our rights. Maybe it's a 10% chance they will, maybe it's a 90% chance, we don't know, but there's a reasonable chance that they won't, and that's ASI risk.

Jim: And of course, there's always the possibility that they'll be weaponized, right? That for a short period, while we have some ability to direct them, we could say, "All right, go conquer them goddamn Chinese," or, "Go conquer them goddamn Americans." And then of course, if they ever get volitional will of their own, they may say, "Well, after conquering them goddamn Americans, I might as well conquer everybody else too and take all the material and turn it into computronium." There's a vast literature on AI risk, and I agree we're not going to go down that rabbit hole of the risk, but we are going to go down the next rabbit hole, also not in huge depth, but just enough to get the idea across, which is, all right, what are some of the approaches that other people have come up with for managing ASI risk, and what do you see as the negatives or maybe the naivete of those approaches?

Trent: Absolutely. And I think it's really healthy and helpful for people to have these conversations. This conversation just entered the mainstream basically in the last year. Whereas some people like Eliezer Yudkowsky have been talking about it for years and years, and others too; most of us who have spent a decade, two decades plus in AI have mused about it over beers or otherwise for a long, long time, right? 20 years ago, we were musing about it, but then it's like, "Oh yeah, that's 20 years away." Well, 20 years has passed and here we are and it's happening, right? So that's the thing.

So going to your question of what are some of the different ideas, one of the things that's in the mainstream is this idea, "Let's slow it down until we figure it out. Decelerate." The main challenge with that is, for such a deceleration to work, all deceleration efforts would need to be successful. If even just one entity defects, it could dominate the others. And that's why it likely won't happen, right? There is an AI race. At the core, it's China and its proxies versus the USA and its proxies. And there's just too much at stake for one side to cede to the other, right? There's a lot of money being made already and it's going to be a lot more. So this race is going to go on. It's like nuclear in this sense. For all the disarmament discussions over the years, we still have the nukes.
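
[Editor's illustration — the defection logic here is essentially a two-player game. A toy Python sketch, with payoff numbers invented purely for illustration; only their ordering matters.]

```python
# Toy payoff matrix for the deceleration argument. Higher = better for
# the row player. Numbers are invented; only the ordering matters.
payoffs = {
    ("decelerate", "decelerate"): 3,   # coordinated slowdown
    ("decelerate", "accelerate"): 0,   # the defector dominates you
    ("accelerate", "decelerate"): 4,   # you dominate the decelerator
    ("accelerate", "accelerate"): 2,   # the race we actually get
}
for my_move in ("decelerate", "accelerate"):
    worst = min(payoffs[(my_move, other)]
                for other in ("decelerate", "accelerate"))
    print(f"{my_move}: worst-case payoff {worst}")
# "accelerate" pays more whatever the other side does, which is why a
# single defector is enough to break a deceleration pact.
```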

Jim: Remember a few months back, all the big luminaries signed this letter saying, “Slow down LLMs.” Guess what happened? Absolutely nothing, right?

Trent: Exactly. Exactly, right? Although there's … Benjamin Franklin gave this quote hundreds of years ago now. Actually, before I give the quote, here's the risk around this though. What might happen is that governments themselves use the banner of safety to take further control, surveillance for taxing and otherwise, right? This is just the oldest trick in the book. And so we're starting to see this, right? Where the governments are making noise in Europe and the USA and otherwise around this, saying, "Hey, we need to put limits on this. We need to put import bans, etcetera."

And what that means is that, basically, the acceleration will keep happening, but within groups controlled by the government basically, or tracked by the government. And to go to this famous Ben Franklin quote, "They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." This is the challenge, right? And probably the biggest problem of this overall deceleration idea is it doesn't actually address the problem of ASI risk. It's just saying, "Slow it down," but we still have to solve the core problem, right? So you still have to think about what the other ASI risk approaches aim to do.

Jim: Well, I will say that Yudkowsky would say, "Yeah, true, we don't have a solution, but I've got one in my head if I can get the math to work, right? So just give me another 10 years to get the math to work." So it's not an entirely absurd idea, but I'm with you. I don't believe it'll ever happen because we're caught in a multipolar trap, an arms race, which it's really hard to see how we can step away from. All right, so now some more ideas.

Trent: Yeah, exactly. And these further ideas are the answer to Yudkowsky saying, "We need more time to think of ideas." Well, actually, there are a bunch of interesting ideas and some have real merit. So yeah, let's explore some of these. Another idea is just saying, "You know what? This is evolution. These ASIs are the next phase of humanity," and that's what Larry Page and others would talk about, and I get it. I worked on evolutionary computation for a long time and those algorithms are very powerful. So it could very well happen. The challenge is that I'm a human and I want to stick around and have a role in the future, and I'd love it if humanity could too, right?

So maybe this could happen, but I think it would be fun if at least the provenance of our human intelligences could stick around and play a role in the future. That's how I see it. So I don't want to … Rather than giving up and saying, "This is just how it is," I want humanity to remain a live actor in this overall grand scheme of things.

Jim: Yeah, the radical transhumanists, which is, "Oh, humans served their purpose as the launchpad for what comes next," but then what comes next is out of our control. And that may be the way it plays out, but I think many of us, including myself, would hope that there are some alternatives. So other alternatives.

Trent: And on this, by the way, one way to distinguish, it's not quite the radical transhumanists. We'll get into that. It's more like imagine 100 years from now, where there are all these patterns of intelligence in this ether on post-silicon substrates, etcetera. Some of them, for sure, will be derivatives of today's or tomorrow's AIs, but I hope that some of those patterns of intelligence will have a history from human intelligences, right? Where humans have … That's part of the provenance of that intelligence. And that's, to me, worth fighting for, right? Rather than just giving up and saying, "It's all going to be from the AIs of today and tomorrow."

The next idea, so I've talked about the, "Slow it down," I've talked about the, "Let go," and to go to the other edge of the spectrum, "Speed it up," and that's effective accelerationism, right? And the idea is basically, "Let everyone have it. Speed it up." So rather than just a few entities going for it, as in the AI safety approach, it's saying, "Let everyone chase it, right? This multipolar AI world of thousands or millions or billions of super intelligent AIs, or entities with super intelligent AIs." So millions or billions of super intelligent AIs, or other ones controlling those, and it can balance out that way because they can keep each other in check, right?

And we've seen examples of this in the past. My two favorites are the USA, how it's organized balancing power among three branches, legislative, executive, judiciary, or even better, blockchains, right? They balance power among thousands of nodes typically. So that can be healthy. So if we have thousands or more AI intelligences balancing each other, that can be very healthy. And that's the direction that E/ACC goes for, which overall, I think is really healthy, and it's also realistic because it acknowledges that with the markets and the game dynamics, you're not really able to slow it down. So in this case, you democratize it basically.

Yeah, that's the core idea of E/ACC, and the overall philosophy, of course, is grounded in physics, very optimistic, build up, not tear down, and so on. So I'm quite partial to this. I think it's pretty cool and it's quite open to things beyond the BCI stuff, all of that. It just hasn't emphasized that, right? And there are other riffs that have happened too, like Vitalik Buterin has D/ACC, where the D means decentralized or defensive, zooming in on some other aspects. But yeah, that's D/ACC.

Jim: And to give a grounded current example, there's quite a bit of tension around large language models, which aren't AGI and they may not even be on the road to AGI, but they're pretty impressive and they do lots of good things, a lot of bad things. And it's the battle between the handful of big juggernauts building the commercial models and the open source world. And we track that in our little script writing project. And man, the gap is closing pretty rapidly. Now of course, the next big leap from the commercial guys may open the gap up again, but if I were to project out two or three years, it's going to be really, really hard to put the open source large language models back in the box, and that's going to be true for all these technologies.

Trent: Absolutely, right. And also another thing that’s happening is blockchains have evolved a lot in the last 10 years. They’ve gotten a lot better, right? Not just for computational abilities but also privacy and more. And with Ocean, we started working on using blockchain to help drive AI even seven years ago. But now, especially with the explosion of interest in AI in the mainstream and so on, there are a lot of projects that have emerged. That’s great news to me because some of these projects as well as upcoming projects, etcetera, you can have decentralized approaches to getting intelligence, right?

So right now the world’s largest computer is Bitcoin, right? But it’s mostly useless compute doing the hashes, right? It’s not fully useless because it adds security, but you can get more useful work yet, right? And doing things like training LLMs in a decentralized way where you could have these world models that are fully owned by everyone participating, that’s really cool. So I think that’s going to be this extra pillar that’s happening to augment the open stuff that’s happening and I think that’s very healthy. So we will have … The open model stuff, we’ll be able to accelerate that much more.

Jim: Got it. So now the next step that people have talked about forever is the idea, “Well, we can just pull the plug if those ASIs get bad.” The cage model, talk us through that.

Trent: Sure. There are two aspects of this. First of all, if these things are intelligent, not just as smart as humans, but super intelligent, then they're going to be smart enough to figure out how to make sure they can't get unplugged, to be precise. So by background, I was mentioning Bitcoin. You can't go and unplug Bitcoin. You can't turn it off because there are tens of thousands of Bitcoin nodes. And that's always going to be … Even in a nuclear holocaust, there's always going to be a bunch of people that keep running these things, keeping them around.

Bitcoin is around and that's just one example of a decentralized system, but now there are dozens, hundreds, thousands of different chains, and each of them has these properties, right? Each of them having hundreds or thousands or more nodes, and you simply just can't turn those off, so you can't unplug them per se. So going back to the AI then, it'll be smart enough that it's already made itself decentralized, therefore sovereign, therefore impossible to unplug, just like Bitcoin. So that's the heart of it. You might say, "Well, let's just have an extra fancy cage. Let's use cryptography in some cool fancy way, etcetera, right?" Well, the problem is that humans are the actual weak link in computer systems, right?

I read the autobiography of Kevin Mitnick many years ago. He was widely considered the world's greatest hacker, and it was amazing reading his stories about how he made Swiss cheese of all these computer systems by basically calling up the Department of Motor Vehicles and saying, "My wife is pregnant. Got to get to the hospital quickly, quickly, quickly. Please, can you give me a password? I need it. I need it." The people on the other side think they're being nice and they give him access, and then before they know it, their system is owned. So that happens all the time, and there are lots of other famous examples, like Loki in the first Avengers movie escaping from his fancy cage. So basically social engineering is always going to happen. Humans are the weak link. It doesn't matter how fancy our security is, humans are the weak link.

Jim: Yeah, I got to say in my own little early days of phone hacking and computer hacking, though I was reformed by the time I was 20, 100% of my real successes were social engineering, impersonating Colonel Lorigan of ROTC to get access to the Defense Department's data networks and shit like that, right? Most of the great exploits are social acts. There are other methods as well, but let's not dig into them so much today. Let's go and dig in at least superficially to the idea of BCI/ACC, and then we'll get into it in considerably more detail. So give us a high level idea of what you're talking about and then we'll dig into it.

Trent: Sure. So at a high level, the idea is for humans to get a substrate that's competitive with silicon. Silicon is the substrate that ASIs are going to be running on, and then there'll be post-silicon, etcetera, but for now it's silicon. And it's wildly powerful, right? It already has amazing compute, storage and bandwidth. It keeps improving exponentially. That's what's powering AI, etcetera. So as mentioned already, our current meat bag brains just can't compete against silicon for processing power. It's one person of processing power versus 10 million, right?

But there is a pretty interesting idea with silicon: everything that silicon touches tends to go exponential. Ray Kurzweil explored this a lot in his previous books. I like to call it the silicon Midas touch. Everything that silicon touches goes exponential. For our brains to compete with silicon, they must touch silicon. Then the higher the bandwidth of the connection, the more that our brains can unlock the power of silicon for ourselves, right? That's the heart of the idea. Well, there are actually two ideas, brain computer interfaces or uploading, right?

If we did either, that would be a way to be competitive with ASI. So with brain computer interfaces, you have this super high bandwidth interface where it's your meat bag brain talking to a silicon brain co-processor, properly aligned and so on, I can get into that later. Or you can upload, right? And with that, you get basically human super intelligence, right? And it goes under many labels, right? Some are calling this the merge. You could call it intelligence amplification or otherwise, right? And that's the core idea of BCI/ACC, but it doesn't just stop there. It's not just saying, "Okay, here's the macro level way." It is a race, right?

The timelines are three to 10 years, from the people that I talk to and also what I read and so on, and just observing what's happening with this acceleration in the demand for silicon and stuff, what technology wants, etcetera. So simply hoping for this merge likely means it won't happen fast enough. We've got three to 10 years. It's a horserace and our horse is losing, right? So how do we accelerate that? Once again, the options are BCIs or uploading, but uploading is still mostly a scientific problem, right? It's way too far out to be relevant for ASI risk, unless we have some massive breakthrough tomorrow, right?

Jim: I see no way that uploading happens in 10 years. I see no way that uploading is even likely to happen in the Kurzweilian 2042 timeframe. As you say, it's a science project, and one with no real obvious solution at the moment.

Trent: Yeah, but I would love to see some ambitious scientists try to find a way, right? But rather than just relying on that, BCI, brain computer interfaces, have already matured past the science into engineering problems. And yes, there are still ongoing science problems, but we can take the engineering from the technologies and sciences of the now and push it really hard, right? So BCI is far more pragmatic. So we've got to get there, and we're getting along, and I'll get onto that, the status quo. But we can't just invent it, right? We also need to get it into the hands of the mainstream billions, or at least millions, right?

We need at least a good chunk of humanity to be competitive, right? That’s the summary of what BCI/ACC is about. We need to accelerate BCI and get it to mass adoption.

Jim: Let's talk a little bit about why bandwidth is important and then the two main roads to bandwidth, the masses-first versus the implants-first.

Trent: The bandwidth thing is important, first of all, to unlock your meat bag brain more, because think about right now in this conversation: we're talking back and forth where it's mostly a serial stream of bits that's pretty low bit rate. It's just human language, right? And we have a bit of extra bandwidth because we've got the face-to-face, and that helps, looking into each other's eyes a bit, but that's still relatively low bandwidth. It's mostly just the serial stream of conversation. But inside my brain, I think to myself much faster than I can communicate to the outside world.

So if I'm connecting my meat bag brain to a silicon brain co-processor, if I'm only thinking at the speed of language, then it's really holding back the meat bag side and the silicon side. You've got a risk on the silicon side actually, which is the silicon side could be way more powerful and could end up being very unaligned. And on the flip side, it means that we're not really fully unlocking the capabilities of what the meat bag side wants to do. You want to maximize the bandwidth such that you can keep the silicon side aligned nicely with the meat bag goals. So you can think of it like RLHF, reinforcement learning from human feedback, but at a hyperlocal scale, per human, in real time, right?
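
[Editor's illustration — a small Python sketch to put rough numbers on the channel widths being compared. All of the bit rates are illustrative assumptions, not measurements from the episode.]

```python
# Rough channel widths for human-computer interfaces.
# All bit rates are illustrative order-of-magnitude assumptions.
channels_bps = {
    "speech": 40,                 # spoken language: tens of bits/s
    "fast typing": 25,            # similar order, often lower
    "EEG speller (today)": 2,     # noninvasive BCIs: a few bits/s at best
    "implant BCI (goal)": 1_000,  # orders of magnitude more is the target
}
for name, bps in channels_bps.items():
    print(f"{name:>20}: ~{bps:>5} bits/s")
# The alignment argument: the wider this channel, the tighter the
# real-time feedback loop keeping the silicon side in check.
```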

So those are the bandwidth arguments. So you need to have this super high bandwidth, and then how do you get to this bandwidth? We can get into that, but one path is noninvasive first, things like EEGs, but that's relatively low bandwidth. Or you can go the route of something a bit more invasive, like implants or optogenetics or otherwise, which unlock much higher bandwidth. But of course, because it's cutting open skulls or gene engineering people or otherwise, there are regulatory approvals and other medical restrictions around human safety for that.

So yeah, that’s the summary. We have paths to get there. Ultimately, we want to have a high bandwidth interface back and forth between the meat bag side and the silicon side, and ultimately, we need to be a bit more invasive. But we have a couple of paths to get there.

Jim: As we know in the venture world, I often say, you've probably heard me say it, "You can't jump up a cliff, right? There has to be a road from where you are to where you want to go." And someone comes around and says, "Here, I got a can opener. I'm going to open up your skull and I'm going to stick this thing in from Elon and you're going to be smarter and better." My first response is going to be, "Why don't you go first, right?" So let's talk a little bit about the noninvasive road, and maybe even a little bit of your history, keep it short, playing with noninvasive brain computer interfaces way back yonder.

Trent: Yeah, very briefly, I've been interested in this stuff forever, almost as long as AI, which was since I was a kid, like 10 or 12, when I think I ran across my first book on AI and devoured it, of course. In the mid '90s, there was a company called The Other 90%, now defunct, but that's what it was called. The president of Atari had started it, the ex-president. And you put this little device on your finger and it reads the electrical signals on the surface of your skin. With that, the idea is you can ski down a ski slope in a videogame simply by thinking about it, and it was an echo of your thoughts. It actually worked better than random, which was kind of amazing.

So maybe there were 30 poles, and in me playing this game, this is about 1996, 1998 or so, I managed to hit about 70% of the poles, I think, and maybe 30% I would miss, which to me was amazing. Because if you didn't put your finger on at all, it would miss most of them, right? And other people that tried it could hit 100% of the poles, which shocked me. It was like, "There was really something to this," and I never ever forgot that. And then over the years, I kept buying other devices that would hit the market to try them out, everything from the OCZ NIA to the IBVA.

And I was always disappointed. The signal was never good enough to go build a company from, but I saw the importance, and the technology around it kept getting better, silicon and computing and all that. So yeah, that's a bit of the history. And then maybe about 10 years ago, I started really realizing, "Oh, wow, BCI could really be a great answer to the risk of super intelligence." And I first wrote about it publicly … This is the sort of stuff that I was honestly apprehensive to write about publicly for years, simply because it's wild and stuff. And so I first published on it in 2016, didn't make a big deal of it or anything. And a really wonderful thing that's happened in the last couple of years is AI has gone mainstream, the worry about AI risk has gone mainstream, and then especially E/ACC has grown as a movement. That gives a good framework for people to think about this in a healthy fashion.

So that's a bit of my history on all of this and why I decided to sit down and write this piece, expanding on ideas that I'd had even 10 years ago, five years ago, but putting it into one comprehensive place, my thoughts on it.

Jim: All right, well, let’s dive into where things are today and where they may be soon on the noninvasive road.

Trent: Sounds good. Yeah, so overall, to get to this high bandwidth, you actually have to solve three problems: engineering, which is hard enough on its own; regulatory approval, for high bandwidth chips etcetera to be in humans; and societal acceptance, for people to actually want it, right? Then the question is, what order do you go about doing that? So Neuralink is going the implants-first route. What that means is that it's solving engineering first. It already had some pretty good answers to that several years ago, and just in the last few days here, it's announced that it's gone far enough in the regulatory process that it had its first human implant, and they showed results of that, which is pretty cool.

And then of course, societal acceptance will take time. They're focusing on medical first, so fixing humans who aren't healthy, compared to optimizing healthy humans. So that's the implants-first route. There is another route. Rather than going engineering first, then regulatory, then societal acceptance, you can start with mass acceptance first on noninvasive. So you basically do a partial on step one, the engineering, where it's not implanted or anything. It's just noninvasive, EEG or otherwise, and then you go with that noninvasive route. Regulatory is much, much simpler. And you also do it in a way where you go for a killer app of some form, and so you get societal acceptance, some form of an EEG or otherwise that people want, right?

So think of it like a startup that you want to get into the hands of a million or 10 million or a billion users ASAP. How do you do that with something noninvasive? It reframes the problem. Rather than, "We must solve ASI risk or whatever," you zoom in to the problem of, "Hey, I want to have a startup, and the main constraint is that it's got to win the hearts and minds of the masses and be noninvasive, so that the regulatory is easy." That's the bottom rung of the ladder. And then once you're on that, people will start pushing for more and more bandwidth because they want better performance, right? They want to be able to control their computers better, control other things better, push the killer apps better.

And bit by bit by bit, this will also grow societal acceptance and push the regulatory, nudge it nicely, etcetera, until eventually, to get any further on the bandwidth side, you’ve got to start going invasive, optogenetics, implants otherwise. So those are the two paths, implants first or masses first.

Jim: Give us some examples of some perhaps realistic killer apps that hundreds of millions or billions of people might want to adopt that are noninvasive?

Trent: For sure. And as a precursor to that, we've got a bunch of [inaudible 00:35:38] technology now, right? We already have EEGs. These are basically sensors that sense electrical signals on your scalp. We have glasses that are AI-powered that have voice interfaces, like the Meta Ray-Ban Smart Glasses. We've got subtitles in glasses. We've got AR things like the Meta Quest 3, and very recently, the Apple Vision Pro. The Apple Vision Pro has eye tracking, which is awesome, and it really takes us closer, because now people can start to control things where it feels like they're thinking about it, right?

And eye tracking you can view as a poor man's BCI, right? It feels like BCI from a human perspective because you're not moving, as far as you're concerned. Maybe your eyes are moving, but you don't think about it like that, but it actually will help adoption. So it's really great to see that the Apple Vision Pro has come out. So then to your question, we've got these adjacent technologies, what are some of the killer apps, right? One is silent messaging. So basically, WhatsApp, sending text messages by thinking about it. Basically, imagine you're walking around and instead of having to pick up your phone and type, you can type by thinking.

And this is a technology that's been around for decades. People like Stephen Hawking typed all his books by EEG, right? And so that shows you that it's already possible, but of course, he had to have a pretty big apparatus to do it. But what if it's simply inside your Apple Vision Pro or your Meta Quest 4 or some other glasses, right? So that's one, and I think that's pretty cool on its own, right? You're sending messages by thinking about it, you've got this keyboard, and then how do you see it? It doesn't have to write to your brain. Instead, it's simply a display on your Apple Vision Pro or otherwise, right? Subtitles in glasses, or otherwise.

So that, you can view as pragmatic telepathy. And I think it's pretty neat that when Elon announced the recent Neuralink progress, they named their first product Telepathy. So totally aligned with that idea, right? So communication with other humans by thinking about it. And of course, instead of texting others, you can also use the text to prompt some internal LLM, so you can be asking questions to an internal LLM and it can be giving you ideas. And I like to think of that as Jiminy Cricket, this guy on your shoulder like in Pinocchio, giving you advice whenever you want, and you have this advice machine all the time.

Jim: As an example, when you were jamming on Hawking, you could think quickly: physicist, crippled, wrote book, what tech, right? And then your LLM or even Google would come back with the answer, right?

Trent: That's exactly it, right? It's pretty useful for things like, "What's the capital of Portugal?" "Is this person lying to me?" "What's next on my to-do list today?" So it's going to be wildly useful, and there are a lot of cases where you don't want to pull out your phone and look down and stuff. It's going to be that much faster to simply type it in and get that response, right? And of course, you can type with the eye tracking or the BCI, or even things like subvocalization, and that's EEG-type sensors that are on your throat where you whisper to yourself, but even lighter than whispering.

Jim: Well, that's not going to be good. I'm in business meetings and it's going to be sending out, "Stupid motherfucker. What the hell did that shit for brains come up with that idea for?" I could get into big trouble if my subvocalizations got memorialized.

Trent: Well, there are different apps. These technologies have different pros and cons, and I think about it like they're best in complement with each other, right? So there is research that combines BCI with eye tracking, for example, right? So you use the eye tracking to move around where you're looking, and then use, say, EEG to click. Instead, right now with the Apple Vision Pro, you sometimes have to click with your fingers, right? That's the main interface they have right now, putting two fingers together. So that's the internal dialogue. Another one: already we have things like the Meta Ray-Ban Smart Glasses, which are recording images, audio, video in real time and storing it, or these necklace style devices like the Rewind Pendant and so on, and it can get stored locally or globally, much better if local of course.

But imagine you can search for these via EEG, BCI or eye tracking or otherwise, or you can use another form of BCI, near-infrared. And that's really good at looking just a little bit into your skull, and you can do it in a 2D fashion. And there's pretty cool research where you can think of an image and that image pops up in your visual cortex. It can be detected with the IR, infrared, and then maybe you're thinking of an image in a movie, and then it'll actually be able to detect which movie that is. And the accuracy is surprisingly good. So you can use this. It's like 2D input, a way to visually query simply by thinking of an image, rather than the serial querying or serial typing that you would do otherwise.
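
[Editor's illustration — a toy Python sketch of that kind of visual querying: matching a noisy sensed pattern against a library of stored frames by cosine similarity. The shapes and data are invented; real neural decoding pipelines are far more involved.]

```python
import numpy as np

# Toy "visual query": retrieve the stored frame whose embedding best
# matches a noisy pattern sensed from the visual cortex.
rng = np.random.default_rng(0)
library = rng.standard_normal((1000, 64))   # 1,000 stored frame embeddings
sensed = library[42] + 0.5 * rng.standard_normal(64)  # noisy "thought" of frame 42

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(range(len(library)), key=lambda i: cosine(sensed, library[i]))
print(best)  # -> 42: the noisy query still finds the right memory
```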

So you can have visual querying to get images from the past, or maybe also to prompt LLMs to come up with new images, etcetera, right? And that takes me to the next thing too. So you can be prompting LLMs to create images, and then you can share all this, right? You can share your memories, past memories. You can share ideas. And this is pretty cool because then there's this idea of, "A picture's worth a thousand words." And so you use the very rough 2D that's sensed from your visual cortex to prompt an LLM. You iterate with that in real time over the span of half a second or three seconds, and then you send that on to the other person you're communicating with, and you go back and forth thinking in pictures.

Jim: I'm going to raise a hand here. This is interesting, but it's also a bit hand wavy. And to my mind it's like … Remember Google Glass? I bought one of those Google Glasses. I could find no useful use case for it, so it sat for years in my bag of useless devices and then I gave it away to somebody who wanted to put it in a museum or something. And I'm thinking about these things, and they have to reach sufficiency for the task to get mass adoption. But I'm thinking about, let's say for instance, simplex network-style BCI out for silent messaging, but then reading it on my glasses and then responding to it. I go, "Is that actually going to be worth it versus just pulling up Signal on my phone and doing it?" Now, what do you think has to be the level of goodness and killerness, a fuzzy topic, to make it so that this will be 5x better?

Because some people say, "Everything has to be 10x better." I think that's wrong. 5x is enough to drive adoption. I have a hard time visualizing being willing to walk around with a big old thing on my face to very slowly and painfully send slow text messages to people.

Trent: Absolutely, I fully agree. So you do need to be 5x or 10x or whatever on some key metric, whether it's being able to type silent messages faster, or maybe it's enough that you can just do it silently for certain use cases. I also see, though, that the visual in could be a really big boost, because then instead of this serial input, you've got this parallel input, right? This 2D input. Right now, for me to communicate a picture to you with words, which is the only way I can, or maybe I literally wave my hands and try to draw it by waving, it's a pretty lossy, slow way to communicate a picture, right? But if I can instead imagine a picture in my visual cortex and then share that with you, that might be enough, right?

So I don't know which is going to be the killer app that takes off, but it's sort of the tools of an entrepreneur that you apply here, right? You come up with 10, 50 different ideas that are all potential killer apps, all around BCI, and then you say, "Okay, which one?" You rate them according to, "What is the main USP, the main value proposition that works? How much of a benefit is it? Who would be the lead customers? What's my go-to-market?" all of that, and then you pick the one that you go for. And it's my hope that there are not just two or five, but 10 or 50 or a hundred different companies going for it, where they're all trying different things and some of them will pop, right?

And we're going to start to see some of these just by people building on the Apple Vision Pro, which is great. Even from the Apple Vision Pro, there's going to be a push to get higher bandwidth in some of the apps there. So I don't know which will be the best. I have my own favorites, but I don't know, right? And the ones that I've mentioned, that's just a sampling, right? I've thought of some, but for every one I've thought of, there's probably another 10 or 50 out there. And I would just love to see all of these explored.

Jim: What is the benefit, in getting to the destination of BCI/ACC, of these maybe relatively prosaic noninvasive apps?

Trent: So ultimately, there are two milestones along the way. The first is being competitive with ASI, and the second is unlocking humanity to explore and reshape the cosmos, right? But what does this path look like overall? Maybe I can drill into that. So at first, the first thing you need to do is get to a point where you're at high bandwidth, where regulatory is taken care of and the societal acceptance is taken care of, right? And we mentioned the two ways, by either going masses first or implants first. But regardless of those, there's still going to be a push to keep increasing the bandwidth further and further, such that you can augment yourself that much more, right? Think faster, make more money, whatever it is.

So there's going to be this market push for that, just like right now there's a market push for more powerful GPUs to drive the applications of AI today and other things. So this is going to keep happening, this push for higher, higher, higher bandwidth, and then that will unlock us humans to do more and more and more stuff. And as we grow our capabilities, at first, the silicon side will be maybe 10% of the power of our meat bag brain side, and then it'll be 50% and then 80, and then it'll be par, 100% each. But then the silicon side will get more powerful. It'll be 2x more powerful in terms of flops or otherwise compared to the meat bag, then 5x, then 10x, then 100x, then 1,000x, right?

So bit by bit, the silicon side will have gotten more and more powerful, but it will stay aligned maybe by the virtue of its history, having this high bandwidth back and forth with the meat bag side. And then eventually fast forward to, say, you’re 90 years old, you’ve got your meat bag brain and your silicon co-processor brain side by side with each other, but the silicon side is a thousand times more powerful than your meat bag side. You’re 90 years old, you’re on your deathbed, you’re dying, your meat bag brain and body are about to pass away, but your silicon side is perfectly good. You’re hating it and you’re going to pass away in the next few hours.

So what do you do? You clip it like a fingernail. You pull the plug on that, right? And then you'll have had these emergent patterns of intelligence on your silicon side, because that's 1,000x more compute, storage, etcetera, than your meat bag side. That will still be you. That's an end game here, where it's a path for any given human, if they choose, to go fully silicon. And I'm hopeful for it. We'll see. There are many philosophical questions this raises, but at the same time, it's a pragmatic path to get there. There is no big scientific thing stopping this as an idea. It's more like something that will likely happen from market forces, and any given human can make a choice of what to do when they get to that level of 1,000x more compute on the silicon side.

Jim: Very interesting and compelling arc, but there's the piece in between. There's the low bandwidth, noninvasive part. At least, it's hard to see how it becomes truly high bandwidth. Maybe. It'll be interesting. Entrepreneurs can do amazing things. But then once we get into implants, I can see in a fairly straightforward fashion how the bandwidth goes up a bunch. I love that in the essay you talk about this idea of talking in pictures, and we might actually invent a whole new language, not just pictures, but essentially two-dimensional thingies that we send back and forth, because currently, our language is one-dimensional. It's interesting that language is a way to encode four dimensions in one dimension, both in and out.

And what would happen if instead of one dimension, two dimensions? And that could be very, very interesting. But to get to something like that, I don't want to laboriously mentally type a prompt to an LLM to generate an image which has seven toes and then give it six prompts to make it better and then send that to you and then, "Oh, fuck, maybe that's okay," for perverts sending specialized porn to each other or something. But for a general purpose new language for humanity, being able to get into V4 or something in the brain and being able to see images, send images, receive images at 25 millisecond cycle time, that would be fucking bang-o, right? But that's going to require deep implants probably.

Trent: Not necessarily, right? So remember, when we think BCI, the go-to default is to think about using EEG, right? Where maybe you’ve just got a few sensors on your forehead, but you could also have a bunch of them around your scalp, 20, 30, 50 sensors or more. But besides that, there’s actually a bunch of other techniques. And near-infrared is pretty interesting, because as I was mentioning before, this is not invasive. It’s basically using infrared to detect echoes of blood flow and otherwise near the surface of the skull, right? But it’s good enough to be able to detect what’s going on in your visual cortex, which is at the back of your skull.

With near-infrared alone, you can actually detect in 2D what's going on in your visual cortex. And that to me is magical, right? It's pretty awesome that the visual cortex is actually located there, that you can detect this, and that's the path to 2D, right? And then riffing on that, and what you had said, the idea of a new language and so on, it unlocks a lot, right? And maybe for the audience's sake, by background, the brain is way more plastic than we ever used to give it credit for. Lots of neuroscientists have promoted that idea in the last 10 years, 20 years, and it's pretty cool, right? You can get people who can see with their tongue, with the right remapping, etcetera.

My good example in language is teletype operators. They knew Morse code so well, they could type super, super fast, faster than current human typers on regular keyboards. They could type at least that fast in Morse code. And when they heard teletype, it might as well have been spoken language to them, right? Because their brains were wired and trained in it so well. And that of course is one-dimensional, an up-down binary signal over time, but of course, we could have two-dimensional, and it could be binary up-down, but we could also make it continuous valued. So you've got at least a couple of ways to have higher information flow.

And then what do we do with that, right? So you can think of it as a few ways to talk to computers or other humans, right? So imagine me and you, who are wearing these devices. I could send messages to you by typing and then you see it as text. I could send messages where you're seeing my pre-thoughts leading to this in a 1D sense, or even my raw brain signals in the 1D sense that over time your brain rewires itself to understand. But it's much more interesting in the 2D sense, right? So over time, with those direct signals, maybe we start chunking in new ways, chunking different groups of signals, and we have this new language emerge, whether it's in a 1D way or a 2D way or an N-dimensional way. And that to me is very exciting.

And there are ways to learn this too. Imagine when I'm sending you text messages, maybe I send those, but then also you're seeing my neural signals at the same time in 1D or 2D. So over time, by osmosis in a sense, your brain is learning the wiring bit by bit, and this is actually unsupervised learning in a sense, or supervised, I guess, even.

Jim: That's interesting. I like that because there are some potential hybrids where we don't all have to have super state-of-the-art implants to have some gain from a community of interactors using different levels of bandwidth. But now, let's dig in even further. I'm going to have to look into what is actually behind the skull. Is it V1? Is it V2, V3, V4? I don't even fucking remember, but I think it's only V1, which is not all that useful probably. I'm going to hypothesize that to really do high-speed 2D new language, you're going to need deeper implants or something we don't have yet. So what are the issues around mass adoption of implants?

Trent: Yeah, so the biggest one almost certainly is privacy. So right now, when I think, I can rest assured, or 99.9% rest assured, that no one is spying on my thoughts. My thoughts are my own, right? But with BCI, those thoughts are getting recorded by an electronic device and being processed and sent, or maybe being sent raw across the internet to others. And there are various layers of hardware and software in between, but along the way, in those various layers of hardware and software, what if there is surveillance happening, right? And it could be surveillance directly from the government, PRISM-style like Snowden had uncovered, or the 2024 version of that, which is governments using large tech companies as proxies to do their surveilling for them, right? Surveillance capitalism in the brain.

And this overall problem has a label: it’s called cognitive liberty. And at the heart of it: do you retain the liberty of your thoughts as we go into this age of BCI? If you don’t, bad things happen. With governments, there’s literally the thought police, like we’ve seen in Minority Report. Whether the government gets wind of what you’re thinking and doesn’t like it, or criminals get wind, or your lover gets wind, do you really want your thoughts to be seen by those actors? It’s sort of the last private sanctuary we have. So the goal is ideally to maintain cognitive liberty, and this is very much …

It’s not just a libertarian thing, it’s a human thing, right? We all want to have liberty of our thoughts. It’s at the core of our being, but right now, there’s no guarantee of that happening. There’s actually a nonprofit that has been trying to raise awareness of this going back 10 years as well. Me, I’m a builder, so I like to think about, “What are some of the potential solutions?” I don’t think there’s any full solution yet, but definitely a part of this is to leverage decentralized technology, where, just like Bitcoin, no one owns the Bitcoin blockchain per se, right?

And with that, you can control your own Bitcoin. It’s this idea of, “Your keys, your Bitcoin. Not your keys, not your Bitcoin.” So if you have the keys, think of it like a password, to hold your Bitcoin, no one can take it from you if you hold the keys yourself. And it’s similar for other things. For digital art, “Your keys, your digital art or data.” In the case of Ocean, “Your keys, your data.” And then we can extend this to our thoughts: “Your keys, your thoughts. Not your keys, not your thoughts.” So basically we can bring some of these ideas from blockchain and decentralization technology directly into helping maintain sovereignty of our thoughts in the brain, and at the same time get the benefits of brain computer interfaces.
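As a conceptual sketch of "your keys, your thoughts": readings get encrypted on the wearer's own device with a key that never leaves it, so every relay in between sees only ciphertext. This uses the real Python `cryptography` package; the "reading" is a made-up placeholder, and an actual design would also need key management, authentication, and decentralized storage, none of which is shown.

```python
# Minimal sketch: on-device encryption of BCI readings with the wearer's key.
from cryptography.fernet import Fernet

my_key = Fernet.generate_key()   # generated and kept on the wearer's device
cipher = Fernet(my_key)

# Stand-in for a frame of signal data; not a real BCI format.
raw_reading = b"visual-cortex frame: 0.12 0.98 0.33 ..."
ciphertext = cipher.encrypt(raw_reading)

# Cloud relays and apps handle only `ciphertext`; without `my_key`,
# the thought data is unreadable to them.
assert cipher.decrypt(ciphertext) == raw_reading
```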

It’s not a solved problem overall. It’s barely been explored, but conceptually it’s there. There are still engineering things, obviously lots of engineering things, to work out, and some philosophical things too.

Jim: All right, so let’s imagine we figure out a way around the cognitive liberty problem. Now the whole point of this is to use, I’m going to argue, the high-bandwidth version. I don’t see eye tracking, near-infrared, TMS or anything else being enough to tame the ASI beast. So let’s just stipulate we now have brain implants. How does that help us align these artificial super intelligences that are, well, you say a thousand or a thousand-plus times smarter, I say qualitatively different, as qualitatively different as we are from an ant, how do we align that with our little feeble brains and some [inaudible 00:54:48] connecting the two?

Trent: Yeah. Yeah, so overall, I view it as: there are independent artificial super intelligences out there that are derivatives of today’s GPT-4, etcetera, or some other technology. Maybe the LLM approaches will conk out. Maybe not, we’ll see. So the idea in BCI/ACC is that you have your meat bag brain and your silicon co-processor brain, but that silicon co-processor brain is part of you, right? So just like when you sit down at your keyboard and start using it, your body decides that your keyboard is part of you. When you ride a bicycle, your bicycle becomes part of you as far as your body, your brain, is concerned, right?

To quote Andy Clark, “We are natural-born cyborgs,” right? We naturally take to that. And I see that it’s going to be similar when we connect our meat bag brains to these silicon brains, where it will feel like a part of us, even if it’s a low-bandwidth connection to start with, and then the bandwidth increases, increases, increases. Now the challenge is, if it’s way too powerful at first, it will feel like this other entity, this other intelligence that we have obtained, and that would be the challenge. So the best solution I have to that so far is simply to start with low computational power on the side of the silicon brain, commensurate with the bandwidth that you have, and then it’s your meat bag brain that keeps that other side in line, right?

So it’s reinforcement learning from human feedback, but the human feedback is your brain giving direct feedback to that silicon co-processor. And then as the bandwidth goes up, as you can talk in higher and higher bandwidth to the silicon side, this thing stays aligned with you and it still feels like part of you. It is you, right? And this works all the way up to the 1x, or maybe the 2x, or maybe the 5x, where the silicon side is 5x more powerful. But then beyond that, what might happen? There are maybe two scenarios. One is there will be intelligence emerging on the silicon side, right? John Holland style, as in Emergence.

Whether that will feel like a part of you or not, we don’t know yet, right? Maybe it will if it has a high enough connection to your side, or maybe it will feel like your child: you can control it, or maybe you give it guidance, but it’s kind of on its own. And then as time goes on, regardless of whether it’s your child or whether it really feels like part of you or feels like you, it goes 5x, 10x, 100x, 1,000x. And once it’s at the 5x or 10x, it will have enough of its own consciousness, but hopefully, that just directly feels like you, right? Such that the you is not just the meat bag side; the you is the meat bag and the silicon side, right? And then as you keep going, you feel like you are more and more powerful.

And by the way, the silicon side will definitely argue that it’s you. Regardless of whether it is or not, it’s just going to be like that. And then yeah, like I mentioned, you just keep going 10x, 100x, 1,000x and so on.
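The loop Trent sketches here, the silicon side proposing, the meat bag side rewarding, the silicon side updating, can be caricatured with a toy bandit learner. Everything below is an illustrative assumption: the action names, the hand-written preference function standing in for a direct neural reward signal, and the epsilon-greedy update rule.

```python
# Toy caricature of the brain-in-the-loop feedback described above.
import random

actions = ["summarize", "draft_reply", "stay_quiet"]
value = {a: 0.0 for a in actions}   # the co-processor's running estimates
counts = {a: 0 for a in actions}

def brain_feedback(action: str) -> float:
    """Stand-in for the direct neural reward signal (purely made up)."""
    prefs = {"summarize": 1.0, "draft_reply": 0.3, "stay_quiet": -0.5}
    return prefs[action] + random.gauss(0.0, 0.2)

for _ in range(500):
    # Epsilon-greedy: mostly do what the human has rewarded, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: value[a])
    reward = brain_feedback(action)
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running mean

print(value)   # the human-preferred action should rank highest
```

The relevant design choice is that the reward channel sits between the two halves of one person, which is the sense in which the co-processor is meant to stay "you."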

Jim: Right. While you were saying that, I had an aha moment, combining two ideas we’ve talked about here today. We talked about social engineering being how most good computer hackers work. What if BCI is how the ASIs hack the humans? We have a high-speed channel. If the thing can only seduce me with words and images, that’s one thing, but if it can play my neurons, it can make me do anything, right? And so aren’t we actually giving the ASIs, let’s say the malevolent ASIs, potential keys to the kingdom by letting them interface directly into our brains?

Trent: Yeah, so that’s certainly a risk. What I see is that, remember, with this system you’re going to have way higher bandwidth between your meat bag brain and it, compared to between it and the outside world, right? So you’re going to be having this very high-rate human feedback back and forth from your meat bag side to the silicon side. And as long as it stays aligned with you the whole time, where ideally it is you as far as you’re concerned, then the idea of being hacked by it becomes nonsensical, right? But I acknowledge the risk, right? So maybe it isn’t you. Maybe it pretends to be you, and your meat bag side is convinced it’s you, and it uses that as a way to sneak in. And that’s certainly a risk.

Jim: If I were a malevolent ASI, that’s what I’d do. So let’s imagine these things talking to each other at medium bandwidth, the ASIs talking to each other. As I mentioned, Jordan Hall and I came up with an idea something like this 10 years ago, and what we had hypothesized is we’ll know it’s worthwhile when most of the annoying shit in life is handled agent to agent and the me doesn’t even have to get involved. So the extended me takes care of making my lunch reservations, making sure my doctor’s appointment is scheduled and that, “Oh, fucking things got canceled. Okay, I don’t even want to hear about it. You just fucking deal with it.”

So they’ll be talking agent to agent to agent. So there’ll be a fair bit of bandwidth there, and perhaps there’s an emergent super-ASI bad guy that comes out of all these agent-to-agent communications, with the agents being the part that’s also grafted into our brains. So that’s one. The other idea I had is that the ASIs tightly coupled to humans might well lag far behind indie ASIs that aren’t constrained by having to operate coherently within the relatively limited BCI interface. So maybe this is the equivalent of, “All right, we’ve tamed the skateboard, but we unfortunately didn’t deal with the Lamborghini at all.” What do you think about that problem, the fact that this only works to align maybe very, very, very inferior-grade ASIs versus indie ASIs that aren’t constrained by being linked to humans?

Trent: Yeah. So when it starts to keep expanding in capability to the 10x, 100x, 1,000x level, it will still have the provenance of being aligned to you, and the meat bag brain will try to keep it under rein. So I don’t think of that as its own artificial super intelligence. It’s this silicon co-processor, but more and more of the consciousness, if you will, has of course emerged on the silicon side, in that the amount of petaflops dedicated to the silicon side is much higher than the meat bag side. So I acknowledge this, and overall, I think the problem is: in the horserace, by constraining this silicon co-processor to be aligned with the meat bag side, will it lose the race compared to fully unconstrained ASIs that are just going full silicon from day one, right? That’s the risk.

And what advantage do humans have coming out of the gate versus the pure silicon? Right now, humans do control all the resources. Over time, we will hand off more and more resources to the bots, or these bots will take them from us. Bitcoin has a whole bunch of resources. You can view Bitcoin as a lifeform, and you can say that it already owns the world’s biggest compute power, but it’s very dumb in AI terms, right? So I’m pretty sure that, over time, these ASIs are going to get resources, but it’s a question of timing, right? As they wake up at first, are they going to have lots of resources when they’re at AGI level, 1x humans, or 5x? Probably not, right? Will they be able to trick us into giving them lots of resources from the get-go? Maybe a bit, maybe a lot.

But at the same time, the humans who have their silicon co-processors to be able to compete have access to a lot more resources, and the more humans that go for this, the better in a sense. So those are going to be the dynamics. The silicon co-processors have more resources at first, whereas the AGIs and ASIs are less constrained, but then it probably will have to be pedal to the metal for the humans, including their silicon co-processors, to keep racing, right? So I hope that we have an interesting race here where we have a fighting chance. That to me is still a big open question, right? If it truly is timelines of three to 10 years to AGI and then ASI, wow, we don’t have a lot of time.

Jim: I suspect ASI is more than 10 years out, but I could be wrong. I know some very, very, very smart people who believe AGI is imminent. And then the question is, how quickly do things go from AGI to ASI? Now I’m going to give a hopeful thought and get your reaction to it. You talked about this as a horserace, and much of history is evolutionary horseraces, and things win for weird reasons. Like the famous VHS-Betamax race. Beta was actually a clearly superior technology, but VHS was moving a little faster initially, and you could get a single movie on one tape a year earlier than Beta. And it was exactly the time the video store industry was just beginning. About half of it porn, but that doesn’t really matter.

So that series of lock-ins caused Beta to lose, essentially because everybody wanted VHS so they could rent. Nobody could figure out how to record TV with their VCR, right? The famous blinking 12:00 on all the VCRs of the 1990s. And so weird contingencies happen on trajectories. Now here’s another interesting example which maybe gives you some hope that your strategy will work, which is, we all know that Deep Blue beat Kasparov in, was it 1996 or ’97, something like that. For about 18 years thereafter, it turned out that the combination of computers plus human chess experts could beat any unaided computer chess program, which is interesting, right? Because there was a lot of progress in computer chess programs over those next 18 or 19 years.

But for 18 years in tournament play, the combo of human plus computer could win very reliably against the world’s best chess ASI, let’s call it. And by now, of course, as you know, chess ASIs are so good that the one on your phone could probably beat Magnus Carlsen, and one that runs on an $800 computer could definitely slaughter Magnus Carlsen and all of his friends sitting around trying to figure out how to play chess. So maybe there’s an analog here, which is that there is a cognitive synergy that will last for some period of time between the massive parallelism and the 3.5 billion years of evolution that humans have, plus the amazing computational power, high-fidelity speed, etcetera, that computers have. And that this, let’s call it an 18-year period, though unfortunately at compute speed it might be 1.8 years, but there’s some period of time where the cognitive synergy of humans plus ASIs can compete with pure ASIs.

Trent: Yeah, I think that’s a great idea actually, and it could very well be the case, right? You mentioned the chess example, and they call those centaurs, right? And from my own work too, as you know, in the world of CAD tools to design computer chips: when we started the first company, it was automatically synthesizing analog circuits creatively. And a lot of the designs it came up with were garbage-y, right? And the designers said, “Well, this is good, but it’ll never work in the real world because of X, Y, Z.” And the simulator didn’t see that, right? So we were actually having to go back and forth to add more constraints, add more constraints.

And then with iterations, we realized it made a lot of sense instead to bring more human knowledge and control into the loop and make it more of a CAD tool leveraging the human, much like these chess players that are a combination of human and AI. And that’s actually been the history of CAD for chip design since the early ’80s, and before even, and even now, right? People can ship chip designs with 20 billion transistors with a team of 10 or 20 engineers in three months, and it’s only because of this combination of human and AI. So there’s a lot of merit to that, and I think one way to summarize it is that the humans have access to a lot more context, implicitly understanding the constraints of the world that maybe haven’t been encoded into what the silicon side fully sees on its own, as well as access to the resources, right?
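That designer-in-the-loop pattern, the tool optimizes, the human vetoes a design for a reason the simulator never modeled, and the veto becomes a new constraint, can be sketched in a few lines. The toy objective, the design space, and the 8.0 cutoff below are all made-up stand-ins:

```python
# Toy sketch of the optimize / human-review / add-constraint loop.

def simulator_score(x: float) -> float:
    """What the tool optimizes; the simulator's whole view of quality."""
    return -(x - 10.0) ** 2

def optimize(constraints) -> float:
    """Brute-force search over candidate designs passing all constraints."""
    candidates = [i * 0.1 for i in range(200)]
    feasible = [x for x in candidates if all(c(x) for c in constraints)]
    return max(feasible, key=simulator_score)

def human_review(x: float):
    """Stand-in for the designer: rejects designs the simulator can't see
    are bad (here, anything above 8.0 'won't work in the real world')."""
    if x > 8.0:
        return lambda d: d <= 8.0   # the rejection, encoded as a constraint
    return None                     # design accepted

constraints = []
while True:
    design = optimize(constraints)
    new_rule = human_review(design)
    if new_rule is None:
        break
    constraints.append(new_rule)

print("shipped design:", design)  # settles at 8.0, not the naive optimum 10.0
```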

It’s the human that enters that final password to unlock the $50 million to manufacture the chip, or otherwise. That overall is an idea that might buy us time, and I hope it does, right? So I think we should try to get every year or three we can in this race. But ultimately, towards ending on a happy note too, this isn’t just about winning the race to ASI, right? In fact, overall, the ASIs are going to be around. So in a sense, it’s to be competitive with the ASIs, but once we are there, the universe is the limit, right? Then humanity gets unlocked because it’s no longer bound by its meat bag bodies. We get to explore the universe at the speed of light and reshape it, going up the Kardashev scale, from building Dyson spheres to harness the power of stars to reshaping the cosmos at the scale of galaxies, etcetera, etcetera.

And it’s a lot more feasible to do that if you are unbounded by biology. Sure, you can leverage biology, and anyone who wants to stick around in their biological selves can, but there will be a bunch of people who just want that grand adventure to be that much grander.

Jim: Indeed.

Trent: That’s what I’m excited about.

Jim: All right. Well, this has been a wonderful, interesting, exciting episode here, and anyone who wants to learn more, make sure you check out bci/acc: A Pragmatic Path to Compete with Artificial Super Intelligence. You can find it linked on the episode page at jimruttshow.com. Thanks, Trent McConaghy, for another very deep and interesting conversation.

Trent: Thank you very much.

Jim: All right.