The following is a rough transcript which has not been revised by The Jim Rutt Show, Ben Goertzel, or Trent McConaghy. Please check with us before using any quotations from this transcript. Thank you.
Jim: Our guests today are Ben Goertzel and Trent McConaghy. Ben is one of the world’s leading authorities in artificial general intelligence, also known as AGI. Indeed, Ben is the one who coined the phrase. He’s also the instigator of the OpenCog project, which is an AGI open source software project, and SingularityNet, a decentralized network for developing and deploying AI services. He’s also CEO of the Artificial Superintelligence Alliance, which we’ll be talking about today.
Trent is a serial entrepreneur, having built multiple successful companies. His current major project is Ocean Protocol, which is at the intersection of AI, data, and Web3 technologies. It’s cool stuff. His most recent project is Ocean Predictoor, T-O-O-R, a crowdsourced prediction feed, at the moment focused on near-term crypto futures pricing. Trent is also a thinker and writer on cutting-edge issues confronting humanity. One of my favorite essays of his is titled Nature 2.0: The Cradle of Civilization Gets an Upgrade. He is also passionate about upgrading human capacity via human-computer interfaces. Welcome guys.
Ben: Hey, good to be here.
Trent: Hello, ditto. Great to be back.
Jim: Yeah, both are, indeed, returning guests. Trent’s appeared twice previously: on EP 222, where we did talk about human-computer interfaces, a very interesting conversation, and way back yonder on EP 13. Ben has appeared many times, including EP 217, EP 211, Currents 072, Currents 025, and he was my third guest, when I didn’t know what the fuck I was doing, in EP 3. As always, you can find links to all the above, including their various projects, on the episode page at JimRuttShow.com. So with that, on with the show.
Today, we’re going to talk about the big crypto merger of Ben’s SingularityNet AGIX token and Trent’s Ocean Protocol OCEAN token, along with Fetch, a third crypto project, to create a new token, the Artificial Superintelligence Alliance token. I should mention to my listeners, full disclosure: I’ve been a long-term holder of the tokens of both AGIX and Ocean. I got both of them at or near the ICO, and I have purchased none of either in the last year. So I have a vested interest, but not a current pump and dump.
So guys, I did a little research on other crypto mergers, and I could only turn up a couple of small ones, but this looks like by far the biggest. Is that true?
Ben: Seems to be the case. Yeah.
Jim: And the current market cap of the combined tokens is something on the order of $4.2 billion. Does that sound right?
Ben: Yeah, changes every day, but that’s the general order, right?
Jim: Very cool. The other thing I noticed in the news is that there’s a proposal to bring in two more tokens, NuNet and CUDOS, C-U-D-O-S, and subsequently a vote to bring in at least one of the two. Tell me a little bit about what you’re thinking about the future. Do you see this as an ongoing roll-up kind of merger, where you’ll bring in more and more projects?
Ben: Yeah, I mean, not willy-nilly, but in general, our motivation for pulling together SingularityNet, Ocean, and Fetch into a larger, tokenomically combined heterogeneous network, I mean, this motivation doesn’t stop with these three networks that we initially combined. I think there’s substantial synergy between Fetch, SingularityNet, and Ocean in terms of our visions and aspirations and our software. And the same holds for a variety of other projects.
So I mean, we don’t need to expand more to fulfill our mission, necessarily. But on the other hand, I could look at a substantial list of existing projects with listed utility tokens which would complement what we’re currently doing along ASI lines. And CUDOS was one example, where they’re strong on the decentralized hardware provisioning side, which complements the things that Fetch, Ocean, and SingularityNet are doing. So I think there’s certainly a significant number of other promising potential targets to merge into the ASI Alliance, and it may well happen.
I would also say the crypto world is erratic, insane, and full of surprises. So we are not going to make any exact promises about what will happen. But, yeah, there’s a lot of utility tokens, right? And I think it’s been an ongoing discovery process on the part of the crypto community. What are all these utility tokens? Do you really need them? How should cross-chain compatibility work? And part of the motivation for doing these merges is saying, “Well, seeing how things are actually panning out in terms of the software we’re building, the utilization of this software, the concepts of products we’re making, maybe we’d be better off to have fewer tokens in order to better drive forward the decentralized AI software we want to build.”
Trent: And maybe just to add to that: SingularityNet, Ocean, and Fetch joining forces at the token level allows us to punch at a higher weight class from a financial perspective, be more visible, be open to things like ETFs, et cetera, that are out there. So punch at a higher weight class financially, but also punch at a higher weight class with respect to technology and scale.
We’ve seen these hyperscalers and fast-growing startups in centralized AI land leveraging scale to get ahead. And so by drawing on these tools, where you can merge tokens and then align incentives, we can actually start to get the benefits of scale in decentralized land as well. So yeah, punching at a higher weight class, both financially and product-wise, is a lot of the big reason to do the merge.
Ben: I think the interesting, or one of the many interesting things organizationally, is we’re trying to get the advantages of merging together into a larger entity without some of the disadvantages that often come from regular corporate mergers. So we’re not merging SingularityNet Foundation, Fetch Foundation, and Ocean Protocol Foundation into a single mega crypto foundation with a single management hierarchy over all of them or something. We’re remaining separate organizations with a common crypto token used in all of our products.
And there’s certainly an added incentive for cooperation because, for example, Trent and I have known, respected, and liked each other for a long time. We’ve talked about doing collaborative work among our different products, and not much of significance happened, just because we’re busy doing our own shit in each case. So having a common token, I mean, it gives an added incentive to actually take the time and effort to do things together, which is significant.
On the other hand, it is also significant that we’re not trying to put me or Trent in charge of each other’s team or something, or any of us in charge of Fetch’s team or Fetch in charge of our teams. I mean, we’re trying to keep the agility and creativity that comes from having smaller organizations, but get some of the added oomph that comes from being a larger organization.
And, of course, you’ll never be able to balance these two opposing factors a hundred percent perfectly, but it’s interesting to see how the different ways that crypto entities are structured give you additional possibilities for sort of obsoleting some of the dilemmas you have in ordinary corporations when they do M&A stuff.
Jim: Yeah, I’m glad you brought that up. I had it in my notes to bring up later. When I first read the early draft of your white paper, I actually made the mistake of assuming that you were talking about a standard corporate merger, and you guys corrected me on that and I said, “Whoa, this is quite interesting.” It’s sort of analogous to supposing three companies, instead of merging, just combined their stocks together into a single stock. And I go, “Hmm.”
Ben: You could of course do that. You could say, we’re going to put all of our stocks into some holding company and then just let that holding company do some shared marketing. That sort of thing happens in Asia more often than in the West, actually. But then you have a lot of other cultural and economic factors that make things very different than the crypto space.
So I would say we are sort of breaking new ground organizationally, and in terms of financial dynamics as a footnote on the path to trying to break much more important new ground by creating AGI and superintelligence. I would add that, while I’d thought about doing mergers with SingularityNet at some points in the past, and probably Trent had also, it was Humayun from Fetch who provided the big oomph to make this happen. I mean, we’d known him for a while, and we’d been talking about doing various joint projects with Fetch, and they also hadn’t quite happened. It was Humayun who brought up maybe we should do this tokenomic merger.
And I think it had been bouncing around in my head and that of others for a while. Much of the reason we hadn’t pursued it is it seemed like a fucking pain in the ass. And it has been in many ways. At this point I can say it was worth it, and it’s going to become much, much more worth it. But the reason we didn’t pursue this before wasn’t because we didn’t have the idea that you could get more synergy and scale by merging together with some other projects out there that had quite similar aspirations and complementary code bases and teams. It was just that mergers in the regular corporate world are annoying enough even when there’s a concrete playbook that everyone can follow.
Now, doing this for the first time, you just have to deal with a lot of complexity and confusion. I’m sure a couple of years from now you’ll have a whole bunch of M&A activity in the crypto world, once we have paved the way and shown people that it works and doesn’t lead to some kind of disaster.
Jim: Or it does, and then, there’ll be a warning.
Ben: It’s working so far, though. I think we’ve gotten through the scariest part, which was just how do the communities of the different projects react? And the reactions were wild, crazy, and all over the place, as you expect from crypto, but it’s sort of settled down and, I mean, hasn’t led to an amazing financial windfall or something yet, but nor has it led to a terrible disaster.
So I think, by now, it’s looking pretty clear that disaster did not happen, and now what’s still to be determined is can we use this to move more rapidly and emphatically toward dramatic success than we would have otherwise? And, I mean, obviously, I’m optimistic, but empirically we’re going to find out during the next year or two.
Trent: In terms of mental models, we were talking about existing M&A approaches of existing companies and stuff. And one thing Ben had mentioned: in the past, he had thought about doing M&As, and so had I, with the SingularityNets of the world and many other crypto orgs that we know well. And the thing that held us back was we were mostly thinking about the traditional mental model of merge the organizations and merge the stock/token or whatever you have depending on the industry. And that’s very heavy.
And I think the novel thing here is that the baseline is just merging the tokens, and then everything on top of that in terms of alignment of incentives or leveraging synergies is all a bonus. It’s all gravy. But even from that baseline of just merging the tokens, you already get the shared liquidity and everything around market making, and you’re much higher up the list, we’re a top-thirty token, and all of that. Then you get all this gravy, this bonus, the more synergies you identify, et cetera.
And then the question is, “Okay, what does this structure look like? What are similar parallels in the corporate world in the past to think about?” And one way to think about it is: not a conglomerate, more like a vertically integrated company with independent business units. Think of GE as a conglomerate: it’s got, whatever, 10, 20, 30 different business units with, in most cases, few synergies among the different business units. So it’s just sort of more size, but not really much shared benefit from one thing to the next to the next, right?
Compare that to Apple, it’s fully vertically integrated, but it doesn’t really have separate business units. At the end of the day, it’s selling phones and some other services, I guess, too, but mostly not separate business units. Microsoft has separate business units. Each unit has its own profit and loss, but it’s not quite fully integrated. And there’s probably better examples yet.
But overall, the way to think about ASI Alliance, Artificial Superintelligence Alliance is it’s kind of like a vertically integrated organization where each sub-organization has its own profit and loss. The sub-organizations along the stack right now are Fetch, SingularityNet and Ocean, and we’re adding more. CUDOS vote just came through. And within that stack, in terms of the vertically integrated, we’re focusing on the AI stack. What does it take in order to take AI to scale? That’s the heart of it.
And so at the very top level, you have the last-mile applications, whether it’s querying agents to do jobs like booking a plane, or whether it’s trying to do time series prediction for the weather or for crypto, or whether it’s, you know, chatbots, AGI-style, really souped-up chatbots and beyond, or whether it’s robotics. And then one level down, you’ve got the different infrastructure around that in terms of software, the development of the algorithms for AGI, and the token incentives. And then one level below that is the crypto networks themselves that house the transactions. One level below that, you’ve got the decentralized compute and storage. And then below that, you’ve got silicon and energy and all that really low-level infrastructure.
So we already have some nice complementary offerings at the higher levels of the stack, in terms of applications, in terms of software infrastructure around data sharing, around agents and intelligence. At the level of the chains we’re not as fleshed out, and at the level of decentralized compute we’re not as fleshed out, but this is where adding more and more of the projects comes in, such as CUDOS and more.
So this is kind of how I see it overall. It’s basically taking an AI stack and fleshing it out to scale, top to bottom. And ultimately with these top level goals, things like going for artificial general intelligence and beyond, which is ASI, artificial super intelligence, as well as the more mundane, day-to-day stuff of today. So from the today to the tomorrow, leveraging the benefits of scale.
Jim: All right, that makes a lot of sense. I’ll know you guys have arrived when, like one of these other recent projects, you fire up your own nuclear power plant. One of the tech outfits is in some partnership to restart Three Mile Island because of how much power is necessary. So that’d be the bottom of your stack. Though I want Ben to come up with a matter/anti-matter machine. You have to find an anti-matter mine someplace, right? Of course, that’s impossible.
Anyway, one final technical thing before we move on to more substance, is that currently, best I can tell, the merged token is still trading as Fetch, FET. So anyone who wants to track what this is all about, check FET. I assume, and I saw in part two of this project, that eventually a new name will come out.
Ben: We’re intending to change the ticker symbol to ASI. Yeah, that was always the original intention. And it’s just a matter of lining up all the third-party organizations in the crypto ecosystem, like centralized exchanges and such, to agree to it. If you look on Binance and other exchanges, they have these perpetuals and other futures on FET, and they’ll have to stop and restart all these futures contracts.
So there’s a bit of accounting mess involved in getting all the centralized exchanges and such to change the ticker symbol. But yeah, it will happen well before the singularity in the emergence of super intelligence. If you look at Matic, which was a major Ethereum layer two project, they changed the name to Polygon, and there was a bit of a delay before the ticker was changed to POL from Matic. They’ve finally gotten it done. I think our change will be much faster than theirs was, actually. But for now, look at FET, which we can think of as an acronym for fucking excellent technologies or whatever you like. And yeah, sometime before too long it’ll be ASI ticker.
Jim: Yeah, it’s interesting: the crypto industry made a mistake that the financial industry managed to dodge, probably by accident, which is they provided one level too few of indirection. In stocks and bonds, there’s a thing called a CUSIP number, which is the actual identifier. And things like symbols and names are just attached to that. So it’s actually easy to change tickers and names, because the actual transactions are done on the CUSIP number.
But I’m sure people didn’t think about that when they were devising these crypto networks, and so they lacked that level of indirection to make that change sort of totally trivial.
Ben: Yeah, I mean, it’s chaos actually. There is no central arbiter of ticker symbols for crypto projects, nor is there a rational, participatory, democratic process for it. So people just use whatever ticker symbol they want in their smart contracts, then they tell centralized exchanges to use it. And you can list things on DEXs.
You could have a situation where I list the shit token on one DEX and someone else makes another token named shit and lists it on another DEX, and then people would have to compare some code numbers to see if they’re the same thing. I mean, it’s a bit of a mess now. There’s no major problem case like that that actually exists right now, so it’s sort of working, but it’s indicative of the general wild-westitude of the crypto world, which has pluses and minuses.
Trent: I would say it doesn’t have one level of indirection, it has 10 levels of indirection. Technology really isn’t a constraint; it’s more like it has too few constraints, right? The real constraint is the contracts, the ones that last three months, six months, and more, among the perpetual traders and such. Technology-wise, this could all be done in a day. That’s just how it is. But it’s not just technology.
Ben: I mean, changing the ticker in the smart contracts is not a big code change. Altering the code doesn’t take long. So it’s really about having centralized entities involved in the ecosystem, which may not be so much the case a few years from now, but it’s the reality now.
Jim: All right, this is enough about the mechanics and the tokens. As Ben knows, I have only a certain amount of tolerance for talk about tokens. It’s a topic that eventually causes my eyes to glaze over. So let’s move on a little bit to the substance. What is the reason this merger was done, other than tokenomics, visibility, top-thirty, all that stuff? And I know both of you guys are passionate about artificial general intelligence and ASI.
So let’s move our focus a little bit to there, and maybe it’d be useful for one of you to redefine for our audience what AGI means and what ASI means, et cetera.
Ben: Sure. So neither of these is an extremely well-defined term just as intelligence itself is not an extremely well-defined term. But qualitatively, what we mean by an AGI is a software or hardware system, which can generalize, so take leaps of speculation and imagination beyond its knowledge and its programming. And when we talk about the human level AGI, we mean a system that can generalize and take leaps beyond its programming and knowledge, roughly, at least as well as human beings can do.
And clearly, current LLMs, for example, are very smart at some things, but they don’t do that, right? I’ve been playing a bunch, like many people, with the new o1 model from OpenAI. It’s awesome. But if you ask it to do programming in an obscure programming language, where there’s not a lot of examples of that language on the internet, it does very, very badly. And it’s quite hard to get it to generalize from existing languages to a new one. And people are not so much like that. If any human had mastered all those programming languages that o1 has, that human would not have such a hard time pivoting to a slightly different, new, weird language, right?
Trent: Paul Graham has a wonderful essay on smart versus wise people, and the idea of a smart person is you’re really, really good at one or two topics, and you might fall flat on your face in a lot of other stuff. Whereas a wise person is not shitty at anything. They always figure out a way to get themselves through.
So traditional AIs, narrow AIs, are smart. They’re really good at one or two things, often way, way better than humans. One way to think about AGI is that it’s the first time we’ll have AIs that are actually wise, that are not shitty at anything that humans are okay at. So that’s one possible way of thinking about it.
Ben: Yeah. Well, I think I could quibble a lot on that because I think wisdom is a different thing than generality of intelligence. But on the other hand, let me go on to superintelligence before we dig into wisdom, which is-
Jim: No, before you do that, I’m going to … because we have to, because we’re talking about AI, I’ve got to get your guys’ individual forecasts on when we’ll get to AGI. I’ll let Trent go first on that one.
Trent: Three to six years. My rational, logical, linear mind thinks six years or 10 years. But my exponential mind, I’m watching Nvidia’s announcements of 50x more efficient compute, seeing the race for Microsoft with nuclear power, and Amazon with nuclear power, and all this other stuff. There’s so much money to be made, and that money is just piling into AI, and there’s that chance that just scale with just money will take us there. We don’t know for sure, but there’s a chance. So three to six years is my guess for AGI.
Ben: Yeah, I would say basically the same. And we haven’t discussed this before, but I would say the same thing. Getting there in one to two years seems out of sync with the sort of product roadmaps of organizations out there. And I think for it to take 10 or 15 years, there would have to be some really big, weird obstacle. You need quantum mechanics to get to AGI or something, which seems quite… Or World War III or something hits and just pauses tech development. You can’t rule those things out.
But it seems like if our current understanding of intelligence and computing is roughly right and the world economy and society keeps going vaguely the way it is now, which includes a bunch of warfare and mayhem and stupid shit, if things keep going vaguely as they are now, I would say three to six years is probably it. And if I was going to make a bet on a prediction market, I would just go with Ray Kurzweil’s 2029 because he plotted a bunch of curves and why not? And that’s five years from now, which is roughly within my range anyway.
If I look at what we’re doing in OpenCog project, I could say we could get there in two years. We’re probably not going to get there next year, but we could get there in two years. But then things often take longer than I think. On the other hand, with OpenCog Hyperon so far, most things have gone roughly on the schedule I’ve projected rather than taking a lot longer.
So I think there’s three lines of evidence you could look at, which I mean one is Kurzweil and just looking at exponential progress on the amount of processing power and all sorts of allied things in the industry. The other is just naively what the AI systems are doing now. The LLMs are not AGI, but what they can do is quite awesome compared to what AI systems could do before.
And then looking at specific projects like my OpenCog Hyperon project, I can see a concrete pathway that looks like it could get there. So there’s a bunch of converging lines of evidence, none of which is a proof. But it’s also interesting, as Trent says, that big tech companies in the US and China, and in Russia for that matter, although that’s somewhat sidelined by political issues now, are run by CEOs who are tech-savvy and also think AGI could come in this timeframe, and are making investments commensurate with this intuition, which is interesting and quite different from previous times in human history.
Now, the timeline to superintelligence after you get human-level AGI is a whole other thing. Ray Kurzweil thought we’d get to human-level AGI in 2029 and then to superhuman superintelligence in 2045. And I’ve never thought that was likely; it seemed to me it will be much faster. Once you have a human-level AGI, it can be your scientist, your mathematician, your hardware engineer. It’s not necessarily that within five seconds or five days it will upgrade to a massively superhuman level.
But it seems like purely through software improvements, even without hardware improvements, a human-level AGI should be able to upgrade itself to a significantly superhuman AGI just by doing better AI development than humans are able to do. And then you get into the new hardware and the new factories that AI will design, which could take a couple of years to roll out in practice.
But it seems like it’s months to years to superintelligence from human-level AGI, unless that human-level AGI chooses not to advance that fast for reasons that will be more clear to it than to us. There, you’re into obviously an even greater level of speculation than when you’re talking about the path from here to human-level AGI.
Trent: It’s funny, just like my prediction was in line with yours before, or yours was in line with mine, vice versa here too. You said months to years, and I also see one to three years after AGI is ASI. And the baseline reasoning is we’re seeing about a 10x increase in AI capacity, from a, you know, FLOPS perspective, an intelligence perspective or so, 10x per year for the last five to seven years.
And so that’s one thing, and it’s partly from Moore’s law and hardware and data and compute, and partly from algorithm improvements, roughly equal contributions or so. And that’s going to keep continuing. Like I mentioned before, there’s so much money to be made, and power to at least not be lost; that’s the reason that China and the USA are going all out at a geopolitical level.
And then there’s this rule of thumb in tech, if something is 10x better in some way, it often shows up in a qualitatively better way. So if something is 50% better, it’s just quantitative. It’s just like, okay, my battery lasts another hour or whatever. But if you’ve got 10x better, it’s often qualitatively better, some qualitative change. So if we go 10x more compute slash intelligence compared to AGI, that might constitute super intelligence.
But for sure if we do a thousand x. And so 10x is one year, a thousand x is three years; that’s why I say one to three years. So it’s 10x to a thousand x more compute slash intelligence compared to AGI. And that will definitely take us to a superintelligence where it’s just running circles around every human. That’s how I see it.
Jim: Interesting. The Rutt view is that the takeoff from AGI to ASI indeed ought to be quick, but for a completely different reason: I’ve spent a lot of time studying human cognition and cognitive science, and humans aren’t very smart. We are just barely over the general intelligence line.
The examples I like to give are our working memory size is seven plus or minus two, which is actually not quite true, but let’s call it seven plus or minus two. Five is George Bush and nine is Einstein. But there’s absolutely no reason I can think of that the working memory size for an artificial intelligence can’t be a thousand or 10,000 or a million where instead of having to page bits and pieces of information in and out of your brain to try to understand something, it shouldn’t be that hard to build an AI that can suck in a whole book and actually have the whole book in its mind simultaneously.
And in fact, Gemini sort of does that now. Of course, it doesn’t do it with total generality, but it has some ability to bring in a whole book and do various things with it. And our memories are not high fidelity. They’re very low fidelity, and every time you retrieve them you mess with them. So you combine just those two fixes, much bigger working memory size and high-fidelity memory, and there’s huge amounts of room above the bar that Mother Nature barely staggered over with humans.
So on the other side, I’m less sure about AGI, and I was thinking about that this morning: what kind of sharp example could I give about how far we may be from AGI? And that is self-driving cars, something I’m quite interested in; I’ve had several leading people on the podcast about it. And I stopped and realized how much huffing and puffing and money is being spent to get a self-driving car, which still is nowhere near general. It still requires ultra-detailed mapping and some people back at the central office to intercept and take over every five minutes and all this stuff.
And I compared that to a pimply, IQ-90 16-year-old, who can learn to drive in a week and after a year, with maybe a thousand road miles of experience, is a pretty good driver. That is a long way from spending tens of billions of dollars, millions of road miles, hundreds of billions of simulation miles, to get to something that still requires a back-office person to intercept the decision every five minutes. So maybe true generality is going to be harder to get to than we think.
Ben: Yeah, you can’t definitively refute that perspective except by succeeding and creating the AGI. But I think that self-driving cars could be difficult for a couple of reasons that are interesting and relevant. One is just that this is a domain where safety is super important, so that the amount of experimental learning you can do is much less than in some other domains.
I’ve found with various AI applications I’ve worked on that domains where you have to be very, very careful are just super annoying from an AGI R&D point of view. We have these humanoid robots you’re familiar with: the Sophia robot, then the Grace robot, which is Sophia’s little sister doing elder care, and the Desdemona robot, which is Sophia’s little sister that recites crazy poetry and sings in a rock band.
And you can guess which one, Sophia, Grace or Desdemona, is more fun for experimenting with radical stuff because saying weird shit and waving your arms around like crazy, falling down on stage in a rock performance context is often a feature, not a bug as long as it doesn’t destroy the robot.
Whereas for elder care, for medical stuff, you have to be so careful not to say, well, grandma, maybe your life isn’t worth living anymore. I mean, the robot has to be… And it can’t recommend, well, maybe you should take some extra pills. So you have to constrain very much what can be done. And the result is that the rate of improvement of your intelligence is constrained.
And I think self-driving vehicles are a bit like that, because the training data you need to gather is from driving actually out on the road where people are. And you can only gather that very slowly, because you have to be so safe. So that’s one issue there. And if you have general intelligence worthy of the name, you would presume that you could teach it in domains which in the early stages are not so safety-critical, and then it can generalize to the other ones.
For the same reason, with little kids in a preschool, you’re creating an environment for them where they can do a lot of random stuff and it’s hard for them to hurt themselves badly. So the early stages of learning to be a general intelligence seem to require some sort of freedom to experiment.
But the other thing with self-driving cars is that we don’t know how AGI-hard driving a car at the human level is. The notion of AGI-hard is an informal one, meaning it’s not really tractable to make a software system do something without making that software system basically a human-level general intelligence. We’ve done very badly at predicting which things are not AGI-hard. We thought chess was, we thought Go was, we thought holding a human conversation was; none of them were. Now, maybe in the end nothing is.
Maybe there’s some specialized system that can do any particular thing that people do without having general intelligence, but it’s not quite clear. So it could be that for human-level driving, there’s a long tail of situations where the most tractable way to deal with it is just make the thing of general intelligence already.
So there’s a couple interesting things with that example that you raised. But of course, there could be hidden rocks that we’re not anticipating. And my reasons for thinking there aren’t really come down to technical particulars of cognitive architecture and AI learning and reasoning algorithms and such, which we dug into a bit in some of our previous podcasts.
I would also wonder, and I haven’t looked at this lately, what Kurzweilian curve plots suggest about self-driving capability and its exponential advance, because clearly, the capability of self-driving cars is advancing exponentially. You remember the initial DARPA Grand Challenge for driving? It wasn’t that long ago, 2004 or 2005.
And before that, people thought self-driving cars might be a hundred years off. Whereas now pessimists about self-driving are saying, “Well, Elon is full of shit, this is 10 years off.” So it’s easy now to say, okay, well, it’s 10 years off rather than two years off, these guys are full of shit. But we have to remember, 15 years ago most pundits were saying this stuff was infinity years off.
Jim: There’s still the low-data learning problem, because again, it’s hundreds of millions of driven miles and hundreds of billions, maybe trillions now, of simulator miles. And keep in mind an IQ 90 16-year-old picks it up in a week.
Ben: This gets into the weeds of AGI, because one of the things I think we get from OpenCog Hyperon, and the sort of AI method we’re applying there, where we put evolutionary learning and probabilistic reasoning together with deep neural nets in a common metagraph-based framework, one of the things we get from that in many contexts is learning based on less data.
And this is really valuable in some bioinformatics stuff that we’ve done, we just don’t have that much medical data. And it’s a bit subtle because LLMs can do few shot learning too, which can be amazing, but they can do few shot learning only within contexts where they’ve already done learning based on a huge amount of data.
And personally, I think you need a different sort of AI paradigm to be able to do medium-shot learning in a domain where you don’t have a lot of exposure. And certainly, the amount of data that Tesla or Chevrolet or whoever has now regarding self-driving cars should be enough for an AGI system worthy of the name to learn self-driving.
But that’s not what the companies are doing now. Tesla is taking a strictly deep neural net based approach. So there’s no one really trying with a broader variety of AI tools on these data sets at this moment because of the herd animal nature of the commercial AI field.
Trent: Yeah, I’d say there’s a couple of things, though. One would be that neural networks these days have won the AI field. You can say the connectionists beat out the symbolists, for now. But adjacent to the connectionists, we have, more broadly, soft computing, which includes evolutionary computation, fuzzy logic, all this stuff, and arguably even machine learning where it’s softer, like you were mentioning with OpenCog and otherwise.
And a lot of those approaches don’t need much data because they’re actually focused more on optimizing and generating things rather than trying to build a model as an input-output mapping of some form. When you’re doing optimization, one thing you can do is adaptive-learning-style optimization. You take a few samples, you build a model, and then from that model, you figure out your next few samples to take.
And that’s active sampling, but biased toward optimality. You want to optimize, but you also want to explore. Then you simulate, you get more data via active sampling, you build a new model, and you keep iterating, iterating, iterating. And with a small number of samples, you can train something extremely well. And this was the path we took in building all these chip design tools in the past, where we could get the accuracy of a billion Monte Carlo samples with only 10,000 samples, 10,000 simulations in just the right places.
And similar in philosophy to that are things like the reinforcement learning of AlphaGo, where it’s this Go player that learned to train itself, and they’re starting to apply that to self-driving cars too for creating synthetic data. It’s all similar. So simulators in the loop can really help here as the sort of thing in between.
And this is also used for robots building models of themselves. And you often get 10x, hundred x, even thousand x efficiency improvements when you do this sort of intermediate step of active sampling, simulator in the loop, all of that. And I can see that this is going to be a path overall.
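The active-sampling loop Trent describes can be sketched in a few lines. This is a minimal toy, with invented pieces: a stand-in “expensive simulator,” a crude inverse-distance-weighted surrogate model, and distance-to-nearest-sample as the uncertainty proxy. Real tools use far stronger surrogates (Gaussian processes and the like), but the structure is the same: sample a little, model, pick the next sample by trading off predicted quality against exploration, repeat.

```python
def simulate(x):
    """Stand-in for one expensive simulation run (e.g., a circuit sim)."""
    return (x - 3.0) ** 2 + 1.0

def surrogate(x, samples):
    """Cheap model of the simulator: inverse-distance-weighted average."""
    num = den = 0.0
    for xs, ys in samples:
        w = 1.0 / (abs(x - xs) + 1e-9)
        num += w * ys
        den += w
    return num / den

def uncertainty(x, samples):
    """Crude uncertainty proxy: distance to the nearest known sample."""
    return min(abs(x - xs) for xs, _ in samples)

def active_minimize(candidates, n_init=3, budget=12, explore=0.5):
    # Start with a few real simulations, then iterate: model, pick, simulate.
    samples = [(x, simulate(x)) for x in candidates[:n_init]]
    for _ in range(budget - n_init):
        # Score = predicted value minus an exploration bonus.
        nxt = min(candidates,
                  key=lambda x: surrogate(x, samples) - explore * uncertainty(x, samples))
        samples.append((nxt, simulate(nxt)))
    return min(samples, key=lambda s: s[1])

candidates = [i * 0.25 for i in range(41)]  # grid over [0, 10]
best_x, best_y = active_minimize(candidates)
```

With a budget of only 12 simulations over a 41-point grid, the loop homes in on the low region of the function, which is the point: far fewer expensive runs than exhaustive or Monte Carlo sampling.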
So I’m not as concerned about the data bottleneck personally. And everything you brought up before, Ben, and Jim too, in terms of where the AGI bottlenecks could be: that could very well be the case, though I think we all think it’s probably a lower chance. We should also note, though, that there’s this sort of implicit assumption that we can only get to superintelligence if we do AGI first. But AGI assumes human-shaped intelligence, whereas when you talk about superintelligence, it actually opens up the space of possible intelligences much more broadly.
So human intelligence is just one tiny point in this vast space of possible intelligences, all with at least comparable petaFLOPS or whatever, if you want. So we could have vastly alien minds that have way more compute power than us, that are way smarter than us in their own way. One I talk about a lot is: imagine you have a weather prediction system for the whole planet, every square kilometer on the planet, that’s radically more powerful than any of the AIs of today, but it’s benign.
All it’s good at is modeling the dynamics of the world. You could call it super intelligent. It’s narrow in some ways, but maybe it’s super intelligent in other ways. Or to take other examples, more alien-ish species that you might come across in sci-fi novels that people don’t even portray in movies or TV because they’re too weird.
So there’s a chance, I don’t know if it’s a good chance, but there’s a chance that we arrive at super intelligences like that before we arrive at a super intelligence that’s more human-shaped.
Ben: I would agree with that. I think there are two main reasons to try to go through human-like general intelligence before you get to superintelligence. One is, if we want a superintelligence that connects with us empathically and is compassionate toward us, having it grow out of a human-level intelligence seems to provide at least a clear potential route to that. Being human-like, you can instrument it to have empathy toward us in vaguely the same way that people, on their good days, have empathy toward each other.
And for a fundamentally non-human-like entity to have a different species of empathy and compassion toward us is quite possible. We’re just blundering into a bunch of uncertainties that are more uncertain and more vague and weird.
The other issue is we have a decent understanding of human-like intelligence. I mean, we have cognitive science and a bunch of allied disciplines, so we have something to go on there. When you’re talking about fundamentally non-human forms of general intelligence, there’s just less to use for guidance.
On the other hand, the plus of pursuing fundamentally non-human forms of general intelligence would be that the hardware we have now is not very much like the human brain. So when you’re trying to implement human-like general intelligence on current compute hardware, you’re pushing a square peg into a round hole.
And then in OpenCog, we’re trying to reconcile that by making a high-level cognitive architecture that’s somewhat similar to the human mind’s cognitive architecture, but filling in the various roles identified by that cognitive architecture with algorithms that are well-suited to current computing hardware.
And so you can try to split the difference and harmonize things. But on the other hand, certainly the best way to achieve a given level of general intelligence, whatever that means, on current computing hardware is going to deviate a lot from the human mind, which after all evolved to run on all this wetware.
And we also should come back to the point that there may be many different kinds of AGI popping up around the same time, some more human-like, some less. And in this context, whether it’s weeks, months, or years from AGI to superintelligence makes a significant difference, because if it’s weeks, then the first system to achieve human-level AGI will become the first superintelligence; there’s not going to be time in those few weeks for other paradigms to come up with competing or cooperating AGIs.
But if it’s two years from human-level AGI to super intelligence, during those two years, you may see a bunch of other human-level and maybe radically non-human-like AGIs pop up. And then the super intelligence will come out of the cooperation and or competition of these different human-level AGIs. So there’s a whole bunch of fascinating and in some ways, unnerving uncertainty when one starts to think about that hypothetical future phase of AI development.
Trent: And if it’s unnerving to us, imagine how hard it is for everyone else.
Ben: I’m one of the least easily unnerved people on the planet.
Trent: Exactly.
Ben: And I know about this topic.
Jim: Yeah. It’s interesting. I think this discussion actually shows us that the AGI to ASI metaphor may actually be quite misleading. I was listening to Ben, I thought to myself, all right, so let’s imagine we have a series of super intelligences that we would think of as narrow, but in very impactful areas.
Like let’s say we have an ASI that is not an AGI, but it’s super in some domains. But those domains are beating the stock market, manufacturing drones at low cost and coordinating swarms of drones in a way that can defeat any conventional army.
Trent: What if I told you that already exists, Jim? How do you think we had 50 years’ worth of Moore’s law? This has been the beating heart, the backbone of all human progress for the last 50 years. And at the heart of it has been AI.
When people talk about digital compilers, it’s actually taking human specifications and converting them into silicon, and instructions to manufacture silicon, and so on. And this has been going on for decades. And I was right at the heart of this, writing AI software to drive the next generation of silicon.
And that’s one place, but for sure, all these other examples you give too, for sure, but it’s actually had a big impact on silicon. And we’ve mentioned Kurzweil several times. He often points to silicon as a sort of silicon Midas touch: anything that touches silicon gets its own exponential.
Trent: Maybe the root cause is that AI is able to help catalyze it, right? And vice versa, AI can catalyze the silicon, silicon catalyzes the AI. So yeah, I’m in full agreement with you, and we already have an example case that’s been driving the backbone of humanity for decades.
Jim: And it may well be that even if it can beat the stock market, build itself an army, and wage war, it still can’t learn to drive in a week of instruction by its grandpa. So it’s not really an AGI, but it is nonetheless a scary motherfucker.
Trent: We’re arguing over whether it is an AGI or not, and meanwhile it’s taking all of our money and our resources and starting to enslave us. Who knows, right? Hopefully not that part, though.
Jim: Yeah, it turns out AGI may not actually be relevant to this story, but it might, because I’ve always taken the argument that it’s an existence proof, right? And so far there is no other existence proof of something that can learn to drive a car like an IQ 90, pimply 16-year-old can. And if something could, with that level of data, it would be one of the things that would allow us to say it was an AGI. So anyway, let’s now move from this very interesting conceptual big-picture thing to some more focused things. We’re talking here about a merger of three tokens. I think it would be useful for the audience to understand what the three organizations actually do, in relatively short form. Let’s start with Fetch, since that’s neither of yours. Who wants to volunteer to say what Fetch is and what it does?
Ben: I mean, Fetch is a decentralized AI agents network. And of course the term agent is so ambiguous in the software world as to almost be devoid of meaning. But I mean roughly an agent as they mean it, it’s a software process that interacts with an end user using some sort of intelligence. And I mean, Fetch is a platform built on the Cosmos blockchain that lets developers create their own software agents and put them online on any machine that they like.
And then when an end user wants a software agent for some reason, they can put out a probe into the Fetch decentralized network and find the agents there that may do the thing that they want. And there’s the notion of a multi-agent system there that the aspiration is you have a whole bunch of different agents running on Fetch and the agents are cooperating with each other and then interacting with each other. So you’re getting some emergent intelligence among those agents, which I would say that emergence part is not yet realized by the collection of agents that are on there, but it was part of Humayun Sheikh’s vision when he started it.
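The probe-the-network pattern Ben describes, agents registering what they can do and clients discovering matching agents, can be sketched as a toy registry. This is purely illustrative: the class, names, and capability tags are all invented, and the real Fetch and SingularityNET discovery mechanisms are decentralized rather than one in-memory registry.

```python
class Registry:
    """Toy agent registry: agents advertise capability tags, clients probe for matches."""

    def __init__(self):
        self.agents = []  # list of (name, capability tags, handler)

    def register(self, name, capabilities, handler):
        self.agents.append((name, set(capabilities), handler))

    def probe(self, needed):
        """Return (name, handler) for every agent advertising all needed capabilities."""
        needed = set(needed)
        return [(n, h) for n, caps, h in self.agents if needed <= caps]

net = Registry()
# Two toy agents put themselves online with self-described capabilities.
net.register("summarizer-1", ["text", "summarize"], lambda x: x[:10])
net.register("translator-1", ["text", "translate"], lambda x: x.upper())

# A client probes the network for what it needs and invokes the first match.
matches = net.probe(["text", "summarize"])
name, handler = matches[0]
result = handler("a long document about decentralized AI")
```

The multi-agent, emergent-intelligence aspiration Ben mentions would come from agents probing for and invoking each other, not just end users doing so.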
And Humayun, I guess, got his start in the AI business world by being one of the seed investors in DeepMind. Before it was sold to Google, he met the DeepMind guys in Cambridge, because he lives near Cambridge, and I think he got his feet wet in AI investing in that era. And then it occurred to him that AI didn’t all have to be corporate, like DeepMind became when they were acquired by Google, but could be more self-organizing and decentralized. And yeah, that’s the crux of Fetch, which is quite similar at that level of description to SingularityNET, really, which is part of what led to this merger.
I’d say on a software level, there are many detailed differences between the SingularityNET platform and the Fetch platform. But at the high, aspirational level, SingularityNET, again, is intended to let anyone put an AI process or service online on any machine they want, and someone who wants an AI service can put a probe into the network and find those AIs that say they can do what’s wanted, right? And we similarly had the aspiration that there should be a nonlinear, self-organizing, multi-agent dynamic. And similarly, we’ve made some simple examples of that, but haven’t really gotten emergent multi-agent intelligence demonstrated on any amazing level.
I think we’ve been more sophisticated on the computer science side than Fetch was. I mean, we have a nice dependently typed functional language that AIs can use to describe what they do to other AIs in the network. And we’ve done more cross-chain stuff among Ethereum and Cardano. We’ve launched our own ledgerless blockchain called HyperCycle. But SingularityNET has also put a lot of work into just AGI R&D. We’ve been the main entity building out the OpenCog Hyperon AGI platform, and we’ve tried to architect OpenCog Hyperon, the new version of the OpenCog AGI platform, so that it can conveniently be rolled out on our decentralized infrastructure.
So probably the biggest difference between SingularityNET and Fetch is that both have built these decentralized agent platforms running on blockchains, but SingularityNET has then tried to go both above and below that level in different ways, right? So we’ve tried to build an AGI system that can leverage this decentralized network, rather than just saying we’ve built it and they will come and build AIs on top of it. We’ve also gone below that level in developing HyperCycle, which is a ledgerless blockchain, and NuNet, which is a decentralized orchestration layer, right? And now both Fetch and SNET have been trying to go below that level in terms of working on hardware provisioning and setting up server farms and so forth, which leads into QDOS.
Jim: One last thing: there’s a visual on your Artificial Superintelligence Alliance website called the Tech Stack, and it has all this agent stuff, but it also has a Fetch layer-one network and Fetch Compute. Very briefly, what are those two things?
Ben: Yeah, yeah, so Fetch layer one is really, I mean, it’s a utilization of Cosmos. So Cosmos is a layer one chain, but it’s a meta chain that makes it easy to spin up your own blockchain. They can then easily interact with all the other Cosmos chains. So I mean, Fetch layer one is a customization of Cosmos. There’s some particularities about the consensus mechanism that they’ve done to optimize it for AI agents. And from an ASI view, not everything goes on Fetch layer one, right? There’s Fetch layer one, SingularityNET platform mainly uses Ethereum now. We’re in the midst of porting to Cardano and Hypercycle [inaudible 00:55:23] exist in a bunch of different networks. So ASI token is very cross-chain, right? Not just that the token itself has versions on multiple underlying blockchains, but that the software platforms using that token are hosted on a variety of different blockchains.
Now, Fetch Compute is a bunch of servers that Fetch is buying and putting online to host agents running on Fetch platform. And we have a similar initiative, Singularity Compute, where we’re buying a bunch of supercomputers and we want to use them to host models and agents and services running on SingularityNET platform. Now, this, in a way, is centralized, which might seem weird when you compare it to the decentralization aspiration of these networks, but we’re not requiring that all Singularity or Fetch agents have to run hosted on these server farms that we’re building. We just want to provide that as an option alongside the core option, which is anyone can put their own AI agent or service on their own computer and just have it join our network.
If you look at things like Hugging Face or Predibase or something, say, these centralized AI frameworks, which let people fine-tune their own deep neural models and then put them online and put them on the back end of their applications. I mean, Hugging Face lets you host your own fine-tuned Llama 3 model or something, and many developers like that, right? So we’d originally thought when we launched SingularityNET and Fetch that the main use case would be letting a developer put their own AI model on their own machine that they’re renting somewhere and just let that join our network.
What we found is that most AI developers don’t know fuck-all about cloud computing. Most AI developers are not actually competent to put their own AI on their own AWS instance or their own machine running in their office, and they would rather be able to spin up an AI agent and have us host it for them, paying something for that hosting, right? So basically, Fetch Compute and Singularity Compute are oriented to provide that hosting for those who want it. Now, anyone else could also make their own server farm to host SingularityNET or Fetch agents. They don’t need to ask our permission, right? It’s decentralized in that sense, but we’re aiming to kickstart a hosting ecosystem for SingularityNET and Fetch agents that way.
Jim: All right, Trent, let’s hear about Ocean.
Trent: Yeah, sure. Just maybe to do a quick three-sentence recap on agents. I mean, your listeners have probably heard a lot of talk about agents in the last year or two. It’s the new sub-hype-cycle of AI, right? But a lot of the agent stuff people talk about is basically an AI, Llama or something else, being run from a remote server, and then it’s just an API being served up, and that’s got the label “agent.” That’s not in line with the traditional idea of agents in the world of AI, where it’s much more like a feedback control system that has autonomy and also its own economic abilities and so on. But what Fetch and SingularityNET have done is actually preserve this long-term AI vision, where the agents truly do have autonomy. They can be self-running, they can be self-hosted, they can have their own wallets, all that, right?
The Fetch label is “autonomous economic agents,” for example, right? And that’s much more true to the real AI vision, and it can also lead to a lot of pretty cool unlocking of things. Imagine running an evolutionary algorithm of, say, a thousand individuals, where each individual is its own smart contract instance running on Ethereum or otherwise, and then if it runs out of gas, it dies. But if it actually manages to do well and make money, it can make babies and so on, or [inaudible 00:59:41] optimization, other emergent intelligences. So this is where we can go. And of course, OpenCog has its own path using agents in this fashion, right? Okay, so that’s agents. I just wanted to talk about that because decentralized agents are really special compared to the rest. They’re actually true to the vision of what AI agents are, rather than marketing bullshit, right?
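Trent’s thought experiment, agents that die when they run out of gas and reproduce when they earn a surplus, can be sketched off-chain as a toy simulation. Everything here is invented for illustration (the “skill” gene, the income model, the numbers); a real version would live in smart contract instances paying actual gas.

```python
import random

random.seed(0)  # make the toy run reproducible

def income(skill):
    """An agent's earnings per round: noisy, increasing in its 'skill' gene."""
    return skill + random.gauss(0.0, 0.5)

def evolve(pop_size=50, rounds=30, cost=1.0, spawn_price=5.0):
    # Each agent is [skill gene, balance].
    pop = [[random.uniform(0.0, 2.0), 5.0] for _ in range(pop_size)]
    for _ in range(rounds):
        survivors = []
        for agent in pop:
            agent[1] += income(agent[0]) - cost  # earn, then pay "gas"
            if agent[1] > 0.0:                   # out of funds: the agent dies
                survivors.append(agent)
        children = []
        for agent in survivors:
            if agent[1] > 2.0 * spawn_price:     # enough surplus: make a baby
                agent[1] -= spawn_price
                child_skill = max(0.0, agent[0] + random.gauss(0.0, 0.2))
                children.append([child_skill, spawn_price])
        pop = (survivors + children)[:pop_size]  # cap the population
    return pop

final = evolve()
mean_skill = sum(a[0] for a in final) / len(final) if final else 0.0
```

The selection pressure is purely economic: no fitness function is written down anywhere, yet low-earning strategies get weeded out, which is the “emergent intelligence” flavor Trent is pointing at.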
Okay. So for Ocean: by background, before I did Ocean, I was doing two things. My main thing was the startups. That was the bread and butter, basically helping companies like the NVIDIAs and TSMCs of the world develop the next generation of chips, focusing on process variation for their memory chips, their GPUs, their analog circuits, all this stuff, helping them to verify it and so on. I learned a lot about taking things to scale there, as well as a lot about the problem of data. Complementary to that, I had actually started out in the world of creative AI, for analog circuit design. This is when Jim and I got to know each other in the early 2000s.
With my friends, we’d hang out and talk about AI being creative, friends within the AI world at NASA and otherwise. But the mainstream people, just my friends who weren’t in that, they’re like, how can AI be creative? And so to me, it was a frustrating conversation, trying to teach people that AI can be creative. So I love that since things like DALL-E have come out, they can see that AI can truly be creative in new ways, and I don’t have to convince them.
But the reason I’m mentioning all of this is, I was coming from this background of AI at scale for chip design and creative AI, evolutionary algorithms, all of this, very unconventional thinking for AI at the time. And then I had spent a few years in the world of blockchain, NFTs on Bitcoin and all this, long before the NFT idea blew up in a big way, and I was missing AI. AI is an amazing field, especially old-school AI; I got deep into it in the nineties, and Ben even a bit earlier than me. It was really an amazing field where everyone is weird in the best sense of the word, right? So I was missing AI, but I still loved blockchain.
So I started asking, how can blockchain help AI? And this led to things like autonomous economic agents, self-driving cars that also own themselves, and swarms of them, and self-owning forests and all that. That’s the Nature 2.0 stuff. But then also AI DAOs, DAOs that just own themselves and collectively organize humans and so on. You can even view Bitcoin itself as its own DAO with a population of zero that’s coordinating a lot of humans, right? And that comes to token engineering, incentive design, all this, and that’s what led to Ocean. I started writing about this stuff, a few essays around this, and I asked, okay, can blockchain truly help AI somehow, right?
And at the time, one of the big challenges in AI was the problem of data, right? Ten years before, Google had published this paper, The Unreasonable Effectiveness of Data: you get 10x more data and you can reduce the error by 2x or 4x. You don’t need a PhD thesis, or 20 PhD theses, to do better. You just have more data, which is super embarrassing for an AI person, right? Because you want to be the cool kid who came up with this cool new algorithm, and suddenly it’s just more data and more compute, right? So then it’s, can you get your hands on more data? And we talked about that earlier. These LLMs of today, they’ve swallowed the web. There’s not as much to get from there, but of course there’s lots of other ways to get more data: by physical simulations, by video, by self-learning, and all this, right?
But that was definitely a huge problem when we started Ocean. So we said, okay, can Ocean help get data? And at the heart of it, we wanted to have a decentralized data exchange. Okay, how do you build that? Well, you need decentralized access control. Think of it like a market, like Amazon, where people can go in there and buy data and sell data, right? There are centralized versions of this now, things like Snowflake, and Amazon even has its own, but back then none of that existed either. And we really didn’t want the idea of a middleman controlling all of that for AI training data and so on. So that was the original vision for Ocean.
In order to build it out, we had to dive deeper and deeper into the tech, which included decentralized access control, and specifically token-gated access. So if I have 1.0 tokens for Trent’s DNA, then I can come along, go to the network and say, hey, I’ve got 1.0 tokens for this, please give me access to Trent’s DNA, and then I can access it, maybe download it. Although I don’t really want my own DNA out there in the wild being downloaded. So what if I can just run compute against Trent’s DNA, right? Or run compute against a million different humans’ DNA and learn against that? And that’s the concept of federated learning, where all the data stays at the edge. But in traditional federated learning, there’s a centralized middleman like Google that can kind of peek, right, unfortunately? What if you have no man in the middle that sees all that, right? So that’s an application. And this is what we were looking at with Ocean, basically unlocking all of the data in the world for AI researchers.
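The token-gated, compute-to-data idea Trent describes can be sketched as follows. This is a minimal toy under invented assumptions: the dataset, the balance table, and the 1.0-token threshold are all illustrative, and Ocean’s real protocol enforces the gate on-chain with datatokens rather than a Python dictionary.

```python
# Data stays "at the edge"; it is never shipped to the caller.
DATASETS = {"trents_dna": [0.1, 0.9, 0.4, 0.7]}

# Who holds access tokens for which dataset (a stand-in for on-chain balances).
BALANCES = {("alice", "trents_dna"): 1.0}

def run_compute(caller, dataset_id, job):
    """Run the caller's job next to the data; only the result leaves."""
    # Gate: the caller must hold at least 1.0 access tokens for this dataset.
    if BALANCES.get((caller, dataset_id), 0.0) < 1.0:
        raise PermissionError("no access token for " + dataset_id)
    data = DATASETS[dataset_id]
    return job(data)  # the raw data itself is never returned

# Alice holds a token, so she can learn an aggregate without downloading the DNA.
mean = run_compute("alice", "trents_dna", lambda d: sum(d) / len(d))
```

The federated-learning variant Trent mentions is this same pattern fanned out: the job (say, a gradient computation) runs at each of a million edges, and only model updates, not raw data, ever leave.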
A friend of mine couldn’t get his hands on more than 1,000 or 10,000 human samples for cancer research. And the thing is, the problem he was working on had 10,000 to 100,000 input variables. So he could barely do sparse linear models, let alone deep neural networks, right? Or Google themselves: they were trying to get healthcare data across a bunch of hospitals in London and the UK, and that got shut down over privacy concerns. But what if you could actually do that, wrangle all the data together and learn against it even where there are privacy concerns? That’s really what Ocean is about: getting all the data for AI training without the privacy problems, because you handle privacy with proper decentralized technology, et cetera. That’s what Ocean has been about. That’s what we’ve built up over the years. And we’ve got traction in a few places, working with the [inaudible 01:05:22] of the world on automotive, and with others on self-driving cars and whatnot too.
And then more generally, things like Gaia-X, the European-wide data sharing initiative among many enterprises and governments, and then also the consumer level and the crypto degen level and so on. And with Ocean Predictoor, this was really around the hypothesis that a lot of people aren’t willing to pay for data. You look at people hoovering up the web for data: they don’t pay for it. They spent some money, but they mostly just suck it up for free, right? So are there places where people will pay for data? And we realized, well, the most valuable data is predictions, right? Lots of people have been buying OpenAI API access, but that’s data in the form of predictions, right? You give it your sentence, your query, and it spits out an answer, but that’s data. And it’s the very last mile, the closest to actionable usage, whether for research or otherwise, right?
And so with this new hypothesis that the most valuable data is predictions, we said, okay, let’s do something there. And that’s where Predictoor came from: crowdsourced time series prediction. We have started off with predicting whether Bitcoin will go up or down five minutes from now, et cetera, and working backwards from there, but there are also use cases for weather prediction, energy prediction, and more, right?
So overall, to summarize, Ocean is at heart a decentralized access control protocol, think token-gated access to data, for AI especially, with use cases like federated learning for health data and automotive data. And then among the various zoom-in use cases, we really zoomed in on crowdsourced time series prediction, other things too, but that’s really what Ocean is about. So we have this core base layer, but then we’re zooming in a lot more on very, very specific applications and driving to volume on that. Predictoor itself, for example, typically hits about $100 million in volume every month these days. And we just released this thing called Ocean Nodes about six weeks ago, and it’s already up to, I think, 24,000 nodes being run. So yeah, we’re starting to hit scale in pretty cool ways.
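One way crowdsourced up/down prediction of the Predictoor sort can work is sketched below. The rules here are invented for illustration, not Ocean’s actual mechanism: each predictor stakes on a direction, the published feed is the stake-weighted vote, and after the outcome is known, losers’ stakes are paid pro rata to winners, so accurate predictors accumulate influence over time.

```python
def aggregate(predictions):
    """predictions: list of (direction, stake), direction +1 for up, -1 for down.
    Returns the stake-weighted consensus signal in [-1, 1]."""
    total = sum(stake for _, stake in predictions)
    return sum(d * stake for d, stake in predictions) / total

def settle(predictions, outcome):
    """Toy payout rule: losers' stakes are pooled and split pro rata among winners.
    Returns {predictor index: payout} for the winners."""
    losers_pool = sum(s for d, s in predictions if d != outcome)
    win_stake = sum(s for d, s in predictions if d == outcome)
    return {i: s + losers_pool * s / win_stake
            for i, (d, s) in enumerate(predictions) if d == outcome}

# Three predictors stake on BTC's direction five minutes out.
preds = [(+1, 10.0), (+1, 5.0), (-1, 3.0)]
signal = aggregate(preds)   # positive means the consensus is "up"
payouts = settle(preds, +1)  # suppose the price did go up
```

The staking step is what makes the feed worth paying for: unlike a free poll, predictors have skin in the game, so the aggregate signal prices in their confidence.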
Jim: Cool. That was going to be my next question. You answered it for Ocean, which is some measure of scale for these networks today. How many dollars worth of transactions go across them in a month or how much crypto is staked into them?
Ben: There’s a lot of crypto staked.
Jim: How much real work is going on in these networks?
Ben: So far for the whole crypto industry as a whole, there’s fairly little utilization of crypto networks outside of utilization for decentralized finance, which is usage to support people buying and selling cryptocurrencies, right?
Jim: Hell yeah.
Ben: Whether you want to count DeFi as real work or not is interesting.
Jim: No, no, no.
Ben: Do you count Wall Street and the stock exchanges as real work or high-frequency-
Jim: No. Absolutely not, pure fucking parasites.
Ben: Yeah.
Jim: Pure fucking parasites.
Ben: So yeah, I think if you set aside DeFi-related things, there’s not a tremendous amount of utilization of any crypto network out there, including Ethereum or Bitcoin, even the biggest ones there are, and that is the current nature of things. I mean, there are Web3 games; most of them aren’t very good. Some are okay, but none of them is a top-ranked game or something, right? We have some apps running on the SingularityNET platform doing sound processing and music processing and stuff. They do have hundreds of thousands of users, but they’re not top-ranked music apps at this point. And Trent could speak for Ocean, but it seems like Predictoor is also primarily a crypto-trading thing at this point, though not by architecture, right? You could use it for any kind of time series.
Trent: Yeah. So maybe I can talk to a few of these things, crypto more generally, and then also within the ASI Alliance and Ocean. So crypto more generally: I think it’s actually unfair to discount anything related to finance. It’s like saying, hey, we want to talk about building companies, but we’re not allowed to talk about raising money for companies, or people being able to invest in companies, or have bank accounts, or make payments, whatever, right? What are we going to do, barter for bread? So I actually think it’s ridiculous to discount that. I’m going to push back on you, Jim, right?
So for example, we watch our governments stand up in front of national media and gaslight us with things like price gouging, or here in Germany, they turned off the nuclear power plants, literally blew them up, and turned on the coal plants instead. This is utter ridiculousness, and what happens from that? Well, the dollar goes down because there’s so much money being printed and allocated badly. The Euro goes down for similar reasons. And so I want to be able to hedge against that with long-term stores of value. Gold is nice, but what if there was a gold that was relevant to the modern era? That’s exactly what Bitcoin is. Bitcoin has become digital gold. That’s the best use case, right?
And then what about payments, right? I spent a month and a half in Argentina a year and a half ago, and on the tip of everyone’s tongue was inflation. They had 100% inflation per year, right? I had to pay for my hotel bill with a backpack full of cash every week, and you can only pay one week in advance, once again, because of inflation. So actually stablecoins have become a top use case for Argentina and most of South America because of inflation and related reasons. This is really moving the needle for people.
Even now, general payments-wise, I’m actually using a card that accesses my crypto account and pays directly whenever I have to pay with a Visa card or a debit card; I go straight from crypto. So it’s an alternative financial system that has been built that is not beholden to the traditional powers that be who want to gaslight us and inflate away our savings, right? And also Wall Street, right?
Trent: Wall Street traditionally has been pretty unfair to anyone not part of it. And the whole idea of DeFi is to open access, open it up to anyone who wants to try and take advantage and have yield and all this, and I think that’s really healthy. So we shouldn’t discount that. And in fact, if you want to have a successful company, you go where the traction is, and then you build and you get strong. So I think it’s great that the initial use cases for crypto started as decentralized assets like Bitcoin, then stablecoins and loans, and then going beyond into trading as well. These were the starting points, but they became the foundation for everything else that’s followed. And now there are Web3 games that have millions of users. There’s a Twitter-clone-ish thing called Farcaster; I don’t know the number of users, but probably at least a hundred thousand these days, and it’s pretty good, right? There are some other ones too. Twitter itself had something called Bluesky, which they spun out. It’s okay-
Ben: But it’s mostly people talking about DeFi, I mean-
Trent: Yeah, yeah, but overall, we shouldn’t discount this. We were making fun of people 10 years ago who were discounting self-driving cars. You have to start somewhere, and it makes sense to start where there’s traction. This is why with Predictoor, yes, we did start in DeFi, because that’s where the traction is, but we are going for energy, and we are going for logistics and marketing and weather and all these things. We just have to be smart about it. Would it make more sense to go for weather directly and fall flat with the project? No, right? So it’s one step at a time.
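[The core mechanic Trent describes for Predictoor, many independent predictors staking on a feed and an aggregate signal emerging from their stakes, can be sketched in a few lines. This is a toy illustration only; the names and the winners-split-the-losers payout rule here are invented for the example, not Predictoor’s actual protocol:]

```python
# Toy sketch of crowdsourced, stake-weighted prediction aggregation.
# Illustrative only -- the payout rule is invented, not Predictoor's actual protocol.

def aggregate(predictions):
    """Combine binary up/down predictions, weighting each by stake."""
    up = sum(stake for _, stake, pred in predictions if pred)
    down = sum(stake for _, stake, pred in predictions if not pred)
    return up > down, up, down

def settle(predictions, outcome):
    """Winners split the losers' stake pro rata (a simplified payout rule)."""
    winners = [(who, stake) for who, stake, pred in predictions if pred == outcome]
    losers_pot = sum(stake for _, stake, pred in predictions if pred != outcome)
    winner_stake = sum(stake for _, stake in winners)
    return {who: stake + losers_pot * stake / winner_stake for who, stake in winners}

preds = [("alice", 100.0, True), ("bob", 50.0, True), ("carol", 120.0, False)]
direction, up, down = aggregate(preds)
print("aggregate says up?", direction)   # stake-weighted vote
print(settle(preds, outcome=True))       # carol's losing stake is split between alice and bob
```

[The same stake-weighted aggregation works for any binary time-series question, price up or down, rain or no rain, which is the sense in which the architecture isn’t limited to DeFi.]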
And I think sometimes the problem is these technologies are sort of like databases in many ways, where if you have a database startup, it’s going to take you 8 or 10 or 12 years to really win, because you’re building infrastructure at the lower level. And crypto had to build all this infrastructure at the lower level first. And then finally, once we have good enough infrastructure at the lower level, we can build these last-mile apps, and that’s finally happening. Things like that Visa card: it’s Gnosis Pay, right? And Gnosis has been around since 2017. They were only able to release Gnosis Pay about a year ago ’cause the infrastructure was finally there, right? It takes time to build this. So I don’t know.
Overall, obviously I’m very bullish on crypto, but I think crypto in general has got a bad rap because everything is so transparent. There are wallets into the [inaudible 01:14:07] everywhere. If you’re a reporter, you’re definitely going to report on the negatives of crypto instead of the negatives of TradFi, which are all hidden. When Deutsche Bank does a $5 billion money-laundering scheme, we only hear about one in five or one in ten of those; the rest get swept under the table. But with crypto, it’s all exposed, and it’s all super antifragile. We hear about all of it, and it survives no matter what. You just can’t get rid of it; it’s a cockroach. So that’s that.
And then specifically within the ASI Alliance, we’re getting there. With Ocean, we’ve been testing tons of hypotheses about where the traction is and so on, and finally we’re getting it, but we’ve been at it seven years as well, and only in the last year and a half have things really clicked traction-wise. SingularityNET, I mean, you guys are going for AGI. That’s awesome, but you don’t expect traction in AGI until you have it, and of course you’re doing some bread-and-butter stuff along the way, great. And Fetch also, they’re really going for it. So the good news is tokens allow for a treasury, for a longer-term runway, to really go for larger problems and really swing the bat. I think that’s great. We shouldn’t discount this. Instead, we should celebrate that we’re able to go for longer-term projects and have a way to finance them by piggybacking on a baseline of financial use cases. I’ll stop there.
Ben: You could view DeFi itself as a video game. I don’t think it’s any less valid than any other way of sitting in front of your computer on the internet and entertaining yourself together with other people. Jim, though, a point I’ve made a few times is that the ultimate plan with SingularityNET is to roll out AI which is just smarter and more useful than what Big Tech is rolling out. If you can roll out a ProtoAGI system that’s much cleverer than GPT-5 or 6, and you happen to roll it out on a decentralized backend, then people aren’t going to refuse to use it because it’s on a decentralized backend. They may not care whether it’s on a decentralized backend, just like they may not care whether the underlying code is open source or not. But if it’s doing something smarter and it happens to be running on a decentralized backend, then it will get a lot of users. The decentralized backend just has to not get in the way.
And then, if you can do that, the development of the follow-on ecosystem becomes radically different than what you would have if you’d rolled out the same thing in a centralized way, through a traditional venture-backed company. The thing is, right now using a decentralized infrastructure makes things less efficient and less usable. It also brings regulatory issues that matter if you’re a large enterprise and don’t necessarily matter if you’re a random end user. Now, all these things are getting better and better; crypto networks get faster and faster to use, and the usability increases. But these are still aspects in which crypto networks are worse than the alternative of just deploying stuff on AWS right now. So if you’re doing DeFi or Web3 gaming, then people are already bought into using crypto networks, and then it’s fine, it is usable. It does work, right?
But if you’re looking at just music or image processing or supply chain or something, and you’re just rolling out a product for users in that domain, there’s not, at the moment, a strong reason why those users would want to use a product back-ended on a decentralized network rather than a centralized network. But I think if we can roll out something that works better than anything Big Tech is doing, then like I said, what we need is for crypto networks to just not get in the way. And I think Fetch adds a lot there; they’ve been good at making user interfaces and getting usability to be okay on a decentralized backend. And we launched HyperCycle and NuNet, spinoffs of SingularityNET, trying to make the plumbing of blockchain more scalable, so that you can serve a whole bunch of end users and AI services on a decentralized backend and it won’t bog down when you get a bunch of users.
I don’t say that’s the only path. I think we can get usability and scalability to the point where the decentralized backend works about as well as centralized, and then you could have a whole flourishing ecosystem across different vertical markets. I’d just say, given my own fixation on building human-level AGI, my point of view is more like: let’s get the fucking AGI built, we’ll roll it out on decentralized networks, and then people will use it because it’s amazing. But then what do you have? Well, the plugin ecosystem is not like the OpenAI plugin ecosystem, right? The plugin ecosystem is: okay, put stuff on ASI, and you can put it wherever you want, and it will communicate by decentralized protocol with a whole network of different Hyperon instances running all over the place.
And the regulatory landscape also looks different because there’s not one company that is responsible for the AGI running on the decentralized network. It’s open source code running on a decentralized protocol, and there are going to be multiple instances that are hosted by multiple people in multiple countries. To me, the payoff of the decentralized infrastructure will come big time once we have like the first, let’s call it ProtoAGI running on the decentralized network, then the world will be like, “Oh, shit. This thing is the first ProtoAGI and it’s running on a blockchain network, huh? What do we do now?”
Jim: Yeah, interesting. Well, first I wanted to mention for Trent’s benefit that, of course, I was being facetious when I denounced all finance; obviously much of my business career was building technology for finance and I-
Ben: Well, that’s how you know what a fucking shit show of corruption-
Jim: It’s like nobody ever eats in a restaurant where they work, right? My friend Peter Wang looked this up for me yesterday: finance in the US is still capturing 30% of all corporate profits, down from 40% in 2007-
Ben: Wow.
Jim: … A goddamn scandal, why the guillotines aren’t out on Wall Street, I don’t know. So I don’t want more finance, put it that way. I want a lot less.
Trent: There is a guillotine for Wall Street, it’s called DeFi.
Jim: If it works. And I will also say, and this is the big bet you guys are making, I agree with you that at the end of the day, people aren’t going to care where the technology is hosted if it is truly better from a cost-benefit perspective. But I have been pitched God knows how many distributed web projects, distributed social media projects, distributed Twitter clones, and I go dig into them and go, none of this shit’s going to fucking work, dude, because of the slowness, the overhead, the difficulties of doing parallel searching. It’s just like, why would you do it? You’re just making life more difficult for yourself. So I think at the end of the day, the only way you win here is if there is some synergy that allows you to get to that cost-benefit winner before the big centralized players.
Let’s exit on your guys’ view on why you think it’s plausible, I’ll just leave it at plausible, that something could emerge from these decentralized networks that actually will produce the winning functionality, the winning cost-effective functionality.
Ben: I’d like to address that, but let me first just make the peripheral point that the complementary perspectives of Trent and myself on DeFi, AGI, and decentralized networks are part of the strength of the merged ASI Alliance, I think, because we have these different entities within the ASI Alliance and they’re taking different approaches toward AGI using the same token and the same network. We have a spinoff, SingularityDAO, which isn’t merged into the ASI Alliance right now, which is doing DeFi using SingularityNET. And Trent has been pushing on Predictoor, which in its own way could be seen as moving toward part of a non-human-like general intelligence. SingularityNET is doing OpenCog Hyperon. Fetch is doing their own more commercially oriented stuff. And it’s cool to me to have this sort of diversity of approach within the same tokenomic network. And there may be a lot of cross-pollination between these that we can’t foresee.
So if you start applying Predictoor to a bunch of other sorts of time series beyond just financial, and you then have a hyperon network that’s interacting with a whole bunch of people all over the globe carrying out a bunch of different sorts of interactions and judgments, it may be drawing on the Predictoor network as a sort of Oracle and it may be making itself a bunch of predictions on the Predictoor network. So it’s not hard to envision how these things can intersect with each other, but to quickly answer your question in a more foundational way, I think there is a route to AGI where you just have a bunch of different agents running on SNET, Fetch, Ocean, ASI, whatever. Each of them is doing their own little thing. They coordinate with each other by the decentralized protocols and then, the intelligence pops out in a way that none of the authors of these individual agents could have envisioned.
So I think we’re creating an environment in which that could happen, which is quite interesting, and it gets more and more plausible as LLMs and other tools make it easier and easier for people to write AI code, and as we make tools making it easier for people to deploy that AI code on our decentralized networks. But that’s not my personal number-one most plausible approach to get to AGI. My primary focus as an AI developer, rather than as CEO of the ASI Alliance, is on putting the OpenCog Hyperon system out there doing neuro-symbolic evolutionary AI and deploying it on decentralized networks. And then what I think you can get, which is very, very interesting, is, instead of a traditional plugin ecosystem where you have these individual plugins kind of accentuating and leeching off Hyperon, a sort of self-organizing decentralized cloud of plugins that can talk to each other and talk to the Hyperon network.
So you may have a decentralized network of agents which is sort of seeded by, and in some sense dominated by, one particular sort of AGI sub-network, which could be OpenCog Hyperon, and that’s what I’m looking at as the most plausible way to get to human-level AGI. But as I said, that’s with my AI developer hat on. With my CEO of ASI Alliance hat on, what we’re making is a decentralized network of decentralized networks. It’s a platform: anyone can upload anything, they can interact with each other, anything can happen, and we don’t know what will emerge from it.
Trent: Yeah, and maybe I’ll riff on that too. From the perspective of the ASI Alliance, you can view it as having at least three hypotheses toward hitting AGI. There’s the OpenCog approach, which is really true AGI, and that’s probably the front-runner of possibilities within the ASI Alliance. There’s basically taking decentralized crowdsourced prediction to scale, where incentives can take you a long way, and I’ll get into that in a bit; that’s basically scaling up Predictoor for large-scale weather prediction and other things too, time series in general and all that. And there’s the emergent intelligence from swarms of agents, just like ant colony optimization does intelligent stuff, et cetera, and that’s what Fetch is sort of pursuing, right? And there can be more too as more join. So overall, the ASI Alliance, that’s sort of the medium-ish, long-term-ish stuff. But then along the way, it’s bringing in pieces of the stack to help power that from the lower levels, as well as bread-and-butter applications along the way.
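[The ant-colony-optimization idea Trent mentions, intelligent behavior emerging from many simple agents leaving and following local signals, can be shown with a minimal sketch. Everything here, the two routes, the deposit rule, the evaporation rate, is invented for the illustration:]

```python
# Minimal ant-colony-style sketch: many simple agents, local pheromone
# updates, and the shorter route emerging from the colony as a whole.
import random

random.seed(0)
paths = {"short": 2.0, "long": 5.0}     # two routes, with their lengths
pheromone = {"short": 1.0, "long": 1.0}

for _ in range(200):                    # 200 ants, one after another
    total = sum(pheromone.values())
    # each ant picks a route with probability proportional to its pheromone
    choice = random.choices(list(paths), weights=[pheromone[p] / total for p in paths])[0]
    pheromone[choice] += 1.0 / paths[choice]  # shorter path -> stronger deposit
    for p in pheromone:                       # evaporation keeps things adaptive
        pheromone[p] *= 0.99

best = max(pheromone, key=pheromone.get)
print(best)
```

[No single ant knows which route is shorter; the ranking emerges from the feedback loop between deposits and choices, which is the sense in which a swarm of simple agents can do intelligent stuff.]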
That’s my businessman hat. I’m not a CEO, I’m just a guy. But from the perspective of AI researcher and myself, I used to spend a lot of time crafting objective functions and constraints and the design space around that and then coming up with algorithms that would suit that to search that space efficiently, whether it’s evolutionary algorithms or model building based ones or otherwise. And then, when I went into blockchain, I thought for a while that I was abandoning it. And then, when I started doing work on Ocean again, I started to design incentives. And part of the reason I thought a lot about incentives was the most powerful computer in the world by far is Bitcoin. And yet it was initially coded by one guy or one team and rolled out, shared in a forum and then people started downloading that software and running more and more Bitcoin nodes. And now, thanks to just the power of incentives, people have convinced themselves that Bitcoin is valuable and then the network is basically paying people to run it. So it’s convinced the world that it should be run as a network, et cetera.
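[The kind of search Trent describes from his research days, an objective function over a design space plus an algorithm that searches that space efficiently, can be illustrated with a toy (1+1) evolution strategy; the objective and all parameters are made up for this sketch:]

```python
# Toy (1+1) evolution strategy: mutate a candidate design, keep it only
# if the objective improves. A minimal stand-in for the evolutionary
# algorithms Trent describes; objective and parameters are illustrative.
import random

random.seed(1)

def objective(x):
    # toy objective: minimize squared distance to a target design point
    return (x - 3.0) ** 2

x = 0.0                                   # starting design
for _ in range(2000):
    child = x + random.gauss(0.0, 0.1)    # mutate the current design
    if objective(child) <= objective(x):  # keep improvements only
        x = child

print(round(x, 2))  # should land near the optimum at 3.0
```

[Real design tools add constraints, model building, and much larger spaces, but the loop, propose, evaluate against the objective, keep the better design, is the same.]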
And then there’s this realization: okay, that’s the power of incentives for Bitcoin; can we apply this idea of incentives and incentive design, mechanism design, token engineering to solve other problems? And you can, to things like decentralized learning and decentralized inference, and of course the stack below that too, for compute and storage and bandwidth and energy and all of these things. That’s why I’m very hopeful. We talk about USPs for blockchain, how it competes against the centralized world. That’s the superpower. Incentives are the superpower of decentralized technologies, and this is ultimately how decentralized technologies and the ASI Alliance could run circles around anything centralized. And that’s what I’m hopeful for. So it’s this decentralized network of networks, but also incentives as a superpower, which could really unleash a lot of amazing capabilities for humanity.
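[The Bitcoin dynamic Trent points to, a network that pays people to run it until participation reaches an economic equilibrium, can be captured in a toy model. All numbers are invented; the point is only that the reward rule alone determines how large the network grows:]

```python
# Toy sketch of incentive-driven bootstrapping: a fixed reward per round is
# split among participating nodes, and operators keep joining while the
# expected payout covers their cost. All numbers are invented.

def simulate(reward_per_round=100.0, cost_per_node=2.0, rounds=10):
    nodes = 1  # one founder runs the first node
    history = []
    for _ in range(rounds):
        # operators keep joining while a marginal node is still profitable
        while reward_per_round / (nodes + 1) > cost_per_node:
            nodes += 1
        history.append(nodes)
    return history

history = simulate()
print(history)
# equilibrium: nodes grow until reward per node roughly equals cost
```

[Nobody has to recruit the operators; the payout schedule does it, which is the mechanism-design point, incentives substituting for central coordination.]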
Ben: Yeah, the potential is there. And SingularityNET has launched, or is in the process of launching, a bunch of spinoff projects aimed at using incentives to get traction for decentralized projects in various verticals, like Rejuve, which gives you tokens for contributing data about your own body and health to a decentralized data commons, and Jam Galaxy, which gives musicians tokens for uploading their tracks to collectively train AI models on many people’s music, and so forth. So I think the potential is very much there to use tokenomic incentivization to overcome tragedies of the commons regarding data processing and participation in various niches.
I would say so far that potential is not yet grandly realized on a significant scale except in DeFi. And so then the question is what will happen first: will we get to AGI first and roll it out on decentralized networks, or will tokenomic incentivization drive the emergence of flourishing businesses in different vertical markets on a blockchain basis before we get to AGI? And they could both happen at the same time. There are different sorts of uncertainties in each of those, and the cool thing with the sort of heterogeneity of the ASI ecosystem is we have different groups of people pushing quite hard in both of these directions.
Jim: Yeah, it’s interesting. I take certainly Trent’s view about incentives very seriously. If you think about financial incentives, they’re quite analogous to attention ’cause they are the signal which causes things to have a focus upon them and like attention, they come and they go. We know that venture capital companies get funded and most of them don’t make it. The money, it just goes poof, the money evaporates. And the same could easily be, and will for sure be true, in this distributed world of incentivized attention. So there may be some possibilities there. But I will warn both of you guys that emergence is hard. I have been working on and off in evolutionary computation for, scary to think now, almost 30 years and truly understanding emergence, even in the simple toy block worlds of evolutionary computing, is really difficult.
Ben: Jim, one of the lessons of what we’ve seen with transformer neural nets is you can make something work without truly understanding it.
Jim: Yeah, that’s marriage, I’ve been married for 42 years, right? So there’s an example: you can make something work without understanding it. So anyway, I see where you’re coming from. I think there’s a plausible argument there, but I’m going to bring it down to a single phrase, and you guys can push back on this: can you get emergence to happen via attention through incentives?
Trent: Yeah. Incentives are all you need.
Jim: I’m going to say you need emergence too. “Incentives are all you need” might just get you a bunch of ants running around eating the sugar that somebody spilled on the ground, and no more than that. The ants may never build the starship and go to Mars.
Ben: Well, the ants have emergence too, Jim, so then we need to get further into open-ended intelligence, right? There’s certainly emergence in an ant colony and an ant colony has individuation and self-preservation, but it doesn’t have progressive self-transformation, which is the other part of Weaver’s notion of open-ended intelligence, right? You need individuation and self-transformation both as emergent properties and cooperating together, then you get to open-ended intelligence, which an ant colony has only a part of, it’s kind of a closed-ended intelligence. And open-ended intelligence in that sense is definitely what we’re after, and I think a decentralized network like ASI is architected the right way to support that. But it does come down to either using it as an infrastructure for an AI agent system that we’ve scripted and orchestrated that then has emergence or using incentives to pull humans and AIs together to make an emergent network on this infrastructure or both, but which combination of these things will emerge remains to be seen.
If we go back to the beginning of this conversation, where we were talking about how many different approaches to AGI are possible, some more human-like, some less, we could have many different approaches to AGI pop out even within the ASI network. We could have some that are based on an OpenCog Hyperon network and some that are based on groups of humans motivated by cryptoeconomic incentives to create agents that interact with each other. Both of these approaches could succeed in getting to human-level AGI in and around the same timeframe within the ASI ecosystem, even apart from all the other cool stuff happening in the rest of the internet.
Jim: Cool. I want to thank Ben Goertzel and Trent McConaghy for an extremely interesting, wide-ranging conversation about the Artificial Superintelligence Alliance. Thank you, gentlemen.
Trent: Thank you very much. It’s been fun.
Ben: Thanks a lot, Jim.