Transcript of EP 200 – Brian Chau on AI Pluralism

The following is a rough transcript which has not been revised by The Jim Rutt Show or Brian Chau. Please check with us before using any quotations from this transcript. Thank you.

Jim: This is Jim Rutt and this is the Jim Rutt Show. Today's guest is Brian Chau. He writes independently on the American bureaucracy, political theory, and AI. His political philosophy can be summed up as: see the world as it is, not as you wish it to be; everything else is application. I gotta say, goddamn right to that. I like that. You know, it's neither looking at the world through rose-colored glasses, nor assuming everything is shit. And I must say, neither of those two lenses do I find particularly useful.

So anyway, welcome Brian Chau back to the Jim Rutt Show.

Brian: Great to be here.

Jim: Yeah, I think we'll have another good conversation. Also worth pointing out, Brian's got a couple of interesting media thingies going. He has a newsletter, From the New World. As usual, the links will be on the episode page at JimRuttShow.com.

Brian: Yeah, and that one, there’s a much shorter link just pluralism.ai and you’ll jump right to it.

Jim: Pluralism.ai. I like that. And he also has a podcast, From the New World, with lots of interesting guests, including recently Marc Andreessen, and he carries on like a complete madman over at PsychoSort on Twitter.

Very useful. He and I have followed each other for a long time. I like a lot of what Brian has to say, but not everything, as is true of most guests on the Jim Rutt Show. So let's get down to it. Today, we're going to talk a bit about AI and its relationship to society, politics, and all that. Where do you want to start?

Brian: I think that most people underestimate how AI will change the world because they're looking at it from a kind of fixed frame. Think of the story of the horseless carriage. Now you might ask, what in the world is a horseless carriage? Well, you probably use one every day, because that's the name early people gave to the car.

They just saw this thing. It's like a carriage, but without the horse. And people call it the horseless carriage fallacy for a reason. The fallacy part is assuming that when a new technology comes along, it'll just do the same thing as the old technology, and nothing else will change. There'll be no downstream effects at all. That's the only thing that'll happen. It's the horseless carriage. What else could happen? And we know from American history, from worldwide history, that that's not all that happened. It changed how cities developed.

It changed the way we form our entire economies. That's probably the reason why you live where you do, how your work setup is, what your commute is like. And the same is true of AI. When we look at AI, when we look at the things that it makes easier versus the things that it makes harder, the things it doesn't change at all, that will upend how we balance our social values just as much as it upends how we might use a text editor.

Jim: Yeah, it's interesting. I've said, and I may be underestimating it, that even the kind of early AI known as large language models is probably at least as significant as the invention of the PC. And I rank the PC as very senior among major recent technologies, more important than the web and the internet, more important than portable phones. Because in truth, both the internet and portable phones are downstream from the PC, right? You know, a phone is basically a little PC. And very early on, PCs were networked together on services like CompuServe and The Source and AOL, long before there was an internet.

And the internet was just another form of that, basically, a better one in many ways, but nonetheless. So I've said LLMs will be as significant as PCs, which is pretty damn big. And so I take your point that a lot of people so far are very substantially underestimating what these things can do. I actually have my own AI product project going right now, and I've learned a tremendous amount about what they can do and what they can't do. But I do agree with you that people's eyes are going to be opened when they see what they can do.

And that's just the beginning, right? There's the broader question about AI and society. If we think about it from the big picture, we're now able to apply fairly directly this amazing amount of CPU that the world has learned how to create, right? The densities that we have now on the NVIDIA H100 chips, the tensor processing chips, the state-of-the-art current Intel chips, etc. It's utterly amazing. And AI is now getting very, very close, and with LLMs has, at least to a limited degree, the ability to apply all this horsepower to basically thinking, cognition, something that's identifiably in the same class as what humans do. And that is going to change a hell of a lot.

Brian: Yeah, it's very funny, because I think it's true that LLMs are getting there. But what's very interesting as well is that even with something like ChatGPT, even with something like Google Bard, the tools we already have available to us at the moment, we can look at a clear set of tasks, say filling out a tax form or writing a corporate email, and even though ChatGPT or Google Bard or whatever doesn't pass some criteria we would have for saying that it's fully doing thinking, cognition, it's still able to do many of these tasks. And you can actually sort out now, because we have a tool for sorting it out, that these parts of my life, these parts of my ordinary routine, actually didn't need a lot more than just kind of rearranging words on a page, didn't really need a lot of invested thinking.

It really just needed a little bit of basically make-work. And ChatGPT is now able to do that. I think one of the very insightful tweets floating around Twitter in this space, it might have been from Sam Altman, actually, correct me if I'm wrong, was something like: don't treat AI as what you would do to replace a smart person's job. Treat AI as what you would do if you had a million dumb people. And that's one of the most impactful insights, right? Most of the things you're going through your daily life with don't need the full extent of your thinking. And I think that once you start looking through that lens, you start to see more and more parts of your own life that could use a lot of AI help. And in fact, for me at least, I have already started using AI in many of those cases.

Jim: Yep. Though I would push back a little on that, including on Sam, which is: LLMs are actually capable of adding value in quite sophisticated areas, way beyond a million dumb people. I often use them to do summaries of philosophical ideas. One of my favorite stunts is to take two or three ideologies and say, all right, compare Game B with Marxism and anarcho-capitalism: you create the 10 categories you want to compare and contrast them across, and fill out all 30 cells. And GPT-4 will do that quite well. It will do it as well as a recently minted PhD in political theory would do it if they only had an hour to do it. And obviously, if you got a recently minted PhD in political theory and gave them two weeks, they'd do a better job. But if you told GPT and the recently coined PhD to do it in an hour, I would probably bet on GPT for tasks of that sort. Further, the project I'm currently working on, as people who listen to the show regularly know, is a group of us creating what we call Script Helper, which is a very intricate, many, many steps workflow for going from an idea to a full movie script in about two days, maybe three days. And it uses humans intricately throughout the process, but an awful lot of the heavy lifting is actually done by the LLMs. And I actually undertook this initially just as a hobby project to help answer the question: are LLMs creative in the same way humans are creative? And I would say my tentative answer is yes, to a substantial degree.
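A minimal sketch of the kind of comparison prompt Jim describes, assuming the OpenAI Python client; the model name, ideologies, and category count are illustrative rather than his actual setup:

```python
# Sketch of the ideology-comparison stunt: ask the model to pick its own
# comparison categories, then fill out every cell of the resulting table.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

ideologies = ["Game B", "Marxism", "anarcho-capitalism"]
prompt = (
    f"Compare and contrast {', '.join(ideologies)}. "
    "First choose the 10 most informative categories of comparison, "
    f"then fill out every cell of the resulting {len(ideologies)} x 10 "
    "table with a sentence or two each."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```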

It's amazing what these things can do. They're not just remixing, right? On the flip side, I think it's important to note that what humans are mostly doing is remixing, right? Essentially, you are the model of everything you've heard and seen in your life, and you're processing it using a different technology, but in a way that's at least analogous to the way an LLM processes a prompt. And I do think that trying to ghettoize what LLMs are capable of is a big mistake. You still need humans in the loop. For instance, I would not use an LLM to fill out my tax return and send it to the IRS, hell no, but I might have it take a first cut at it. Then I'd want my accountant to look at it and see what it missed, what it got right, and what clever human tricks could be added to it. So I think this is a learning thing, and I've proposed to people that someone should build an LLM wiki and post what we have found these things are good at.

And I think it's going to surprise people that it's not just good at the low end, but at higher-level stuff as well. Another experiment I did was I concocted a fake political party called the New CCP, the New Communist Party USA, whose job was going to be to slavishly follow the Chinese Communist Party. And I built all this apparatus to describe it and why it would be appealing to certain people. And then I asked GPT-4 to write speeches and position papers, and it was goddamn good, I would say better than your average hack political staffer. So my takeaway is it's not good for everything. It's not good where you need absolute precision, but for somewhat creative, somewhat quick-and-dirty kinds of stuff, it does surprisingly well, way better than if you picked a random person off the street, for a hell of a lot of these tasks.

Brian: Yeah, what's fascinating is that LLMs invert the film archetype, right? So much of normal people's perception of AI, and I think even some, you know, maybe not technical people, but journalists, media people, so much of their perception of AI is shaped by literal fiction. It's shaped by Terminator, it's shaped by Star Trek. It's not shaped by reality. And I think that many people, upon using ChatGPT, using any of these tools for five minutes, will tell you, first of all, it does not follow that kind of strict rational reasoning 100% of the time. It lacks that precision; if anything, it's less precise than even most people. On the other hand, it is immensely creative. It can draw in ideas from completely unrelated areas, from the entire expanse of human knowledge.

Jim: Yep, indeed. And that's kind of the interesting thing about it. When we're using it for scriptwriting, you can actually assume that it knows a whole lot of pragmatics, right? If you hired a scriptwriter to write a Western, they may well not know the ins and outs of how to ride a horse. But ChatGPT-4 knows fairly well enough, or at least, when I say knows, quote unquote, it can produce sequences of words that are more or less accurate about how to ride a horse. Better than a typical screenwriter who hadn't researched the topic would do.

Brian: Right, right.

Jim: So you get this huge body of pragmatics, plus this ability to creatively remix words at the token level, the word level, essentially, and create new things. And especially if you guide it, right, you give it clues, right, you know, you nudge it along. It’s quite amazing.

Brian: Yeah, yeah, I think the perfect visualization of this, I'm not sure if you've seen this before. But have you seen the spiral AI art?

Jim: No.

Brian: So this was AI generated. I'm not completely sure of the procedure, but how I would intuitively go about making something like this is that you can take two images and conceptually merge them. Right, this is something that you can do using existing tools. And you get this fascinating result.
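One plausible recipe for this kind of merged image, assuming the Hugging Face diffusers library; the actual procedure behind the viral spiral art may differ, and the model names and file paths here are illustrative:

```python
# Sketch: use one image (the spiral) as a structural constraint via a
# ControlNet while a text prompt supplies the content being merged in.
# A real run would typically preprocess the spiral into the conditioning
# format the chosen ControlNet expects (e.g. an edge map for canny).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

spiral = load_image("spiral.png")  # hypothetical local control image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("an aerial view of a medieval village", image=spiral).images[0]
result.save("spiral_village.png")
```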

Jim: I actually saw that lower-right image fly by yesterday, and somebody made some comment about it somewhere. That's kind of cool.

Brian: Right. So, yeah, this is something that friend and guest of my podcast Sam Woods has talked about as well, that it allows you to intuitively recombine ideas from different areas in ways that are not obvious to a human. Right. So he was talking about using mathematical transformations on poetry. It's not obvious how you would do that. If you sat down and said, let's just experiment, how would I go about applying these different abstract theories in math and making words out of them? And he has had just excellent experience doing this with, I think, a variety of language models. And this idea that creativity is the thing that is hardest for AI to replicate, not even to replicate, right, because obviously I don't think these are designs that someone else has come up with at this point, you end up with this idea being just completely unable to survive first contact with the actual technology. And that itself, I think, is going to be very interesting to watch, where you'll have this whole narrative that is constructed basically out of whole cloth. And when I say constructed, I'm not necessarily saying there's some plot to do this. I just think most people are just wrong, that their ideas were shaped by fiction that has no connection to reality. And you see more and more people, even just normal people on the street who I talk to, my normie friends with no connection to machine learning, realize: wait a minute, these things are immensely creative. And they're actually not that precise. They're not that Spock-like.

Jim: Quite the contrary, right? And of course, there are some things worth thinking about: the implications. What happens when various people are armed with these tools for flooding the world with decent output? As I said, my little thought experiment about the fake New Communist Party USA was kind of an eye-opener.

Basically, one person with a $5,000 a month budget could probably spin up that entity without too much trouble. And this will change our field of discourse. In fact, it's already changing our field of discourse. I think we're all already noticing that when you Google something, you get an awful lot of bogus or semi-bogus sites, which appear to be written entirely by large language models, have all the afflictions of that, but are just good enough to fool Google.

That's just one example. The other one: I actually know that one of the two big political parties, Team Red and Team Blue, is working on it, so I assume the other one is too. I would expect to see truly personalized text content, and probably video and image content too, created for each voter, or at least the ones they calculate to be swing voters, in the 2024 elections. And that's going to change the game in a considerable way.

Brian: Yeah, so we have two things here, right? One is just the spreading of false information. And the other is personalized messages that honestly, or at least faithfully, portray the views or positions of the sender. By that I mean, not necessarily that either party would be truthful to you when they're trying to advertise to you, but that the AI is conveying what the organization wants to convey, right? It's correctly representing their side.

So let's take the first one first. I had an article on this, cross-published with American Mind, I think, called AI Threatens Legacy Press Because They Rely on Style Over Substance. And I think that headline really summarizes it. You have this complaint, by Yuval Noah Harari, by Jonathan Haidt, that AI makes it easier to spread incorrect facts. And this is just a complete lie. It is not only false, but it's the opposite of the truth. And here's why: their complaint was, and this is almost a verbatim quote from the Yuval Noah Harari piece, that AI allows you to make false claims at scale. What allowed you to make false claims at scale was already there. 100% of the infrastructure to make the false claims was there. You can go on Twitter right now, you can go on whatever blog you like, and you can just make something up, and you can send the exact same false claim to millions of people. And the reason why AI threatens, say, a legacy press reporter, the reason why it threatens a BuzzFeed reporter, but in fact is not taking Steve Bannon's job, is because what it does is augment the style. The thing that it does is not change the facts that would otherwise be reported in the New York Times, or at BuzzFeed, or whatever online publication.

What it does is imitate the style of that publication. And this is what I mean when I talk about the horseless carriage fallacy. The horseless carriage assumption, when it comes to AI, is: oh, it presents information, so it will just continue presenting information in the same way. But what it actually does is reveal the truth about their competitive advantage. It reveals the truth about the competitive advantage of, say, the New York Times. The New York Times' product is not facts, because LLMs have not changed the facts available. In fact, an LLM is completely derivative of the facts that are already available.

What it changes is the access to style. And, you know, that's something to respect. The New York Times has an excellent house style. It has an excellent kind of tone. Its, quote unquote, journalists are, in fact, excellent essayists, excellent novelists, excellent crafters of style. But the idea that their business model rests on having unique facts is just shattered by the fact that they even feel threatened by LLMs.

Jim: My own work in screenwriting with LLMs has shown that you can easily provide large system prompts to big-context LLMs and emulate fairly closely any style you want. It's quite interesting. I haven't taken on the job of seeing if I could emulate the New York Times style; it wouldn't be perfect, but it would be not bad. So let me defend the idea that this propagates bullshit at scale, which is that I, Rando, sitting up here in the mountains, could build an online publication, call it the Stanton Independent or something, make it look like the New York Times, and have it full of stuff that was close enough stylistically that it would fool your average American, the average middle-of-the-road American, and just generate reams and reams and reams of fabrication at a quality of wordsmithing and style only a little less than the New York Times. And it could be all total bullshit. It could be about shape-shifting lizards and UFOs and all that stuff. And that would not have been possible prior to LLMs.

Brian: Right, right! But that's where the thread starts unraveling, because that's where you start asking the question, right? Let's say you took an LLM and said: write exactly the same things that Alex Jones writes, but in the style of the New York Times. And this started convincing people, this started convincing people much more than Alex Jones saying the things that Alex Jones says, right?

Jim: As an example.

Brian: Then you start to realize, you know, you pull on the thread and you start to realize, maybe the house style of the New York Times is in fact the reason why people believe what the New York Times says, and it's not actually related to the truth of their claims.

Jim: And I would frankly agree with you to some significant degree, right? The New York Times is essentially the mouthpiece of the status quo, and anyone who assumes otherwise is fooling themselves. Now, I would say their sports reporting is probably reasonably objective. Their weather reporting is in general fairly objective. But when it comes to politics, any sensible person knows that they are essentially presenting a very specific point of view. And, as you point out, they probably are able to boost that point of view substantially due to their high-quality style and the perception that they are the newspaper of record, as opposed to the substance.

Brian: Right. So the question is, who benefits when style is democratized, when everyone has access to not just one style, but a plurality of styles?

Jim: Ah! I see what you’re saying now. And I make that argument in favor of letting LLMs loose, which is they will empower the periphery versus the core. Right.

Brian: And not only will they empower people who are outside of the mainstream, but it incentivizes a kind of pluralism. It incentivizes a kind of knowing your enemy, or at least your adversary. I don't necessarily encourage you to consider people with different views your enemies, but it encourages a kind of pluralism, because in order to direct AI in all of these ways, you have to understand some elements of their style, right?

You know, the incentives right now are to be able to produce a house style, or even a personal style, that really appeals, that zooms right into a niche audience. But once you can automate that, it becomes essentially available to almost every person, or every person who can write the prompts, which is of course much easier than understanding how to write that way yourself. Then it becomes a question of: all right, how many ways can I restate the original message? What is a message that I can write that can be diversified? What is a coalition that I can build?

You know, for example, you might have YIMBYs, yes-in-my-backyard people who support building more housing. They are a movement that can appeal to a libertarian bent: you want more houses because that's just the free market at work, you don't want to artificially restrict supply. They can appeal to a more progressive bent: this is a policy that will create more homes for the vulnerable, for the homeless, for the poor, and everywhere in between. And you can do this on many issues right now that I think are limited by the current political incentives, which, as I said before, are to kind of double down on a niche, where if you had essentially this translation mechanism, well, it's very funny, I mean, Jonathan Haidt complains that it's the Tower of Babel, but really AI is the Rosetta Stone, right?

AI is the solution to his woes. And it does that by translating between these different dialects of American politics. And at least to me, that will be a very big improvement in both what kind of issues get discussed and what kind of laws actually get passed.
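A hedged sketch of that Rosetta Stone idea, one message restated for several political dialects, assuming the OpenAI Python client; the message, audiences, and model name are illustrative:

```python
# Sketch: keep the substance of one policy message fixed and have the
# model restate it in the "dialect" of each target audience.
from openai import OpenAI

client = OpenAI()

message = "Our city should legalize building much more housing."
audiences = ["libertarians", "progressives", "fiscal conservatives"]

for audience in audiences:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Restate this position in terms that speak to "
                       f"{audience}, keeping the substance identical: "
                       f"{message}",
        }],
    )
    print(f"--- {audience} ---")
    print(response.choices[0].message.content)
```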

Jim: Yeah, that is interesting. You know, again, it sums up my idea that these LLMs will empower the periphery, because today the periphery does not have the literary skills or the money to produce content at the quality of the New York Times. But with LLMs, they can approach it, at least reach the level where most readers won't be able to tell the difference, and hence will be able to bring new ideas into play from the periphery, not just the stale old crap from the mainstream media.

Brian: Yeah. And what's very exciting as well is that I think that in many ways, the story of the Internet is the story of unbundling, right? The clearest version of the story starts with something like Netflix, although I think that has a rather unfortunate ending. In some ways, and in some ways not. Okay, I'll tell two versions of the story.

And I think one path has led to a good ending and one path has led to a bad ending. So the story goes as follows. You had the big cable companies, all of them buying up these huge conglomerations of channels, and really people were paying for tons of things that they weren't actually watching. And this was just the most efficient way to run things in the television age, because there was no simpler way to help people select.

And then as the Internet came along, the technology arrived, that way of selecting arrived, and people moved to streaming, to online pay-per-play, or different kinds of streaming services that were much more individuated. And as that moved along, we had two endings. One ending is the route that Netflix eventually took, which was kind of emulating the original TV bundling model.

So you bundled everything together, just in a new, different order. It's probably better to actually rearrange the bundles and have them more modernized than not, but it's not the best ending. The other way is something like YouTube, where you have a very fractionalized environment, but one that can crosslink. Essentially, what I mean by that is that it's impossible to watch all of the videos on YouTube. There are billions of hours uploaded on that thing.

But there are essentially regions within YouTube that people watch. There are things that are similar to each other in idea space; this is something that can actually be visualized directly with AI. And between these bundles in idea space, it's very easy to travel. But there are also possibilities of traveling across: you can land on a brand new video in a brand new space if you just go on YouTube and search for something that you've never looked at before, something very different from anything you've looked at before. But you can also build these kind of implicit links.

You can build this much more complex map. Whereas a human recommender, and this was the problem for the original cable TV people, could not really tell you: you have these interests, right, but here is something that you never looked at, that quite frankly many of the people who have similar interests to you might not even like, but because of the exact combination of interests that you have, you might like this new thing.

To have a kind of political discovery mechanism, a kind of multidimensional system, not just red versus blue, not just pre-packaged demographics, not a TV Guide kind of system, I think is a huge improvement over what we have. And I'm interested to hear, first of all, whether you agree that that's actually better, and whether you think it's likely.

Jim: Yeah, in fact, I've talked about this quite a bit. I believe the next trillion-dollar fortune, young entrepreneurs who want to go make a trillion, go do it here, is to build a network of loosely coupled smart agents that use AI, including LLMs and things like latent semantic vector spaces, to basically be a bubble around ourselves that we purposefully connect to various flows, and that we also connect to other people that we know and organizations that we know. And the combination of the curation by the people we're connected to, the AIs, and the feedback we provide to the AIs, essentially filters a very wide view of reality, if we want a very wide view, but doesn't overload us with shit. I call that the information agent concept.

And whoever gets that right is going to make a killing. Because you are right, there'll be a lot more interesting ideas being articulated reasonably well because of these new technologies. And that's good, because we need some new ideas. On the other hand, there'll be a lot of horseshit and scams and idiots and things like that that will be empowered as well. Sturgeon's Law, named after the 1960s science fiction writer Theodore Sturgeon, says 90% of everything is shit. On the internet, it's more like 99% of everything is shit. And that will still be true of fresh ideas from the periphery, just like garage bands mostly suck. But without garage bands, rock and roll would never have progressed, because one in 100 or one in 1,000 is actually good. And so the empowering of diverse and fresh voices away from the mainstream status quo will mostly produce shit but will occasionally produce gems. And we're going to need collaborative AI-enabled filtering to be able to find the gems amongst the shit.
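A toy sketch of that information-agent filtering idea, assuming the sentence-transformers library; the model name, interest profile, and threshold are illustrative:

```python
# Sketch: build a crude interest profile from items the user liked,
# then score incoming items by cosine similarity and filter.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

liked = [
    "complexity science and network dynamics",
    "LLM tooling for creative writing",
]
incoming = [
    "A new paper on emergent behavior in agent-based models",
    "Ten celebrity diet secrets doctors won't tell you",
    "Evolutionary search over prompt variants for GPT-4",
]

profile = model.encode(liked).mean(axis=0)  # crude interest centroid
for item in incoming:
    score = util.cos_sim(profile, model.encode(item)).item()
    verdict = "pass through" if score > 0.3 else "filter out"
    print(f"{score:+.2f}  {verdict}:  {item}")
```

A real information agent would add the social layer Jim describes: feedback loops, and weighting by which trusted people and organizations surfaced an item.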

Brian: Right, so here is a very fun question where I will turn the tables a little bit and ask you: do you think that music has stagnated in the past few years? Let's say in the past decade.

Jim: Being an old fart, I don’t even know because I don’t listen to popular music anymore. My daughter is always saying, well, what about this new musician? I go, who the fuck’s that? I have no idea who the fuck that is. Right. I basically stopped listening to popular music probably around 2001 or 2002. The last new artist that I got into outside of country, I do still listen to country a little, was Eminem. So that’s how far back I go. Right. What do you think about where popular music is today?

Brian: Right. So this is an ongoing debate that I have with a few friends in similar circles who think about machine learning and political theory. And this is a very fun dimension. I think I actually asked Marc Andreessen about it in the interview that I just did. Here's a very important fact when it comes to that debate: in the last five years, I think Ed Sheeran's Shape of You has remained the number one most streamed song.

Jim: That's the guy, Ed Sheeran. I have no idea who that is. My daughter brought that up. Ed Sheeran, who the fuck? I couldn't tell you. Does he sound like Led Zeppelin or does he sound like Bob Dylan? I couldn't tell you.

Brian: Yes. The same guy had the most streamed song for the last five years. Right. This is something that’s very rare before the streaming era, especially.

Jim: Yeah.

Brian: Even two years would be a kind of miracle. And the thing is, this really branches off very quickly, depending on how you react to that fact. I think I said jokingly when I asked Marc Andreessen about this, something like: maybe Ed Sheeran just invented the best song ever. Maybe it's just that good.

And he kind of agreed. So one interpretation is that Ed Sheeran's is simply the best song. Another interpretation is that we've been stuck in a cycle of derivatives, of algorithm-optimized music, and that none of it is actually very good; that's why nothing has displaced Ed Sheeran.

And then another take, and I think this is the closest to my position, is something like: the advent of streaming has made it so that Billboard metrics, even the ones that incorporate streaming, given how the internet works nowadays, are not particularly good measures of up-and-coming songs. They're only good measures, ironically, of pre-current-era songs.

And I'm thinking I'm closer to that last version simply because I find a lot of good music. Through the Spotify and YouTube recommendation algorithms, sometimes it'll be stuff I never heard of from 15 years ago. Sometimes it'll be stuff I never heard of from 200 years ago. Sometimes it'll be stuff I never heard of from two months ago. And I just find music that impresses me in ways that are related to the ChatGPT conversation before, in ways that are creative, in ways that impress me in new ways, different from what has happened in the past. So I've been very happy.

I've been very happy with new songs, just on a personal level. So that's why I'm skeptical of that take, that culture has stagnated, especially when it comes to music. I don't know about other culture, I pay less attention to that, but especially when it comes to music, I'm kind of skeptical of that.

Jim: Let me riff on that a little bit. So maybe a lot of kids today, like yourself, are using these approaches and are finding all kinds of music that they love, but it has fragmented the market down to millions of mini markets, not necessarily any of them very big, so that it is really hard to grow a big audience anymore for a song. And hence a song that happened to really appeal to people 10 years ago has not been able to be dethroned in this new world of a zillion micro markets.

Brian: Yeah, yeah, exactly. Exactly. Right. This is kind of what I mean when I say it ceased to be a good metric. Think about it this way: if you put the songs on a line, if you just have one linear measure, say you like A better or you like B better, you have this one spectrum to rate things on. It's very easy to find a song that's closer to most people on that one spectrum than all the others, right? Now let's say you had 10 of these spectrums, or even just two, which is much easier to visualize.

Even with just two, it's harder to have songs that grab a much larger area of people compared to these other songs. Right. So the more complexity you add to this, yeah, you're right, the more fragmented the market becomes. It becomes exponentially fragmented, actually, at least theoretically. And I think that is what happens in real life as well. I do think it becomes exponentially more fragmented. People discover more and more variations. Actually, this relates to some deep AI lore.

Okay. So there was this YouTuber, I forget the name right now, who uploads AI voice covers of songs. And there was this one AI voice cover of a song that has since been removed. I have no idea why. This was my single favorite AI voice cover.

This channel removed it. And the reason why it was my favorite AI voice cover is that it kind of broke strategically. This was a song that was originally performed by a very cute girl. It was supposed to be an AI voice cover by a different fictional character. And it had voice cracks, and it had just plain mistakes in the song. But the mistakes made the song better.

Jim: Okay. I love it.

Brian: Yeah. Yeah. And so I'm thinking, oh, this is the advent of AI. Maybe I'm writing an article on this: this is the expansion into new variables that, quite frankly, humans don't have the balls to mess around with. And the author just deleted it. I have no idea why. This is honestly, in my opinion, still arguably the best work of AI art that I have ever been exposed to. And it's just gone. It's just vanished. And I can't find it on an archive either. It's just gone from all the archive sites.

Jim: That would be my question: is it on an archive? That's interesting. Write the author and ask him.

Brian: Yeah, I did. No response.

Jim: No response.

Brian: It’s crazy.

Jim: That’s very bizarre.

Brian: Yeah. Yeah. People are going to say this is a Fermat's Last Theorem moment, right? They're going to say: you claim you have this amazing AI song, but you can't find it. And I genuinely can't find it.

Jim: Either that or it was a bunch of LSD one night, but oh, well, one or the other.

Brian: Maybe, maybe.

Jim: Let's drill into this one a little bit, because I think we're probably both in agreement that what we're seeing is these amazing technological tools allowing people to find more total satisfaction, probably, by finding the music they resonate to and love, spread across a far greater distribution of specific pieces of music than the mass market methods of the past, which at one level seems, for human utility, a good thing. However, here's a potential downside to it, which is less coherence across society. You know, I'm old enough to recall when there were three networks, CBS, NBC, and ABC, and there wasn't even public TV in those days. Sometimes there'd be an independent station too, but three main networks. And at its peak, the most watched network sitcom was The Beverly Hillbillies. It was in 1964 that 67% of American households watched one particular episode, the high-water mark of a very silly, but well done, sitcom.

Now, what does that mean? Would those people personally have been happier watching one of a hundred thousand different possible videos on YouTube? Probably they would have, right? Because The Beverly Hillbillies was well-executed trash, but trash nonetheless. On the other hand, the fact that 67% of Americans watched one particular episode of a sitcom meant that they had something to talk about in common at the water cooler.

So there was some coherence across society that we lose when we fragment to a hundred thousand videos that we happen to watch on YouTube. What do you think about that?

Brian: I think there are many moments related to AI where I look at the complaints that people have and I say, you know, actually, this is wonderful. I think that in many cases, particularly when it comes to politics, an area where both of us have this complaint, the options get simplified. Everyone knows what the Democratic Party thinks, everyone knows what Joe Biden thinks, everyone knows what the Republican Party thinks, everyone knows what Donald Trump thinks. When it's flattened into just one option or just two options, there are usually two, maybe three tribes that form around it.

You know, all of the Republicans mostly agree, all the Democrats mostly agree; at least they agree that they hate the other guys. And this creates very non-ideas-based discussion around these things. And when you see the splitting off into various camps, I think people like to focus on the end goal, on the people coming to different conclusions, and they don't like to focus on the process. Because when you focus on the process, what you see is that people come to different conclusions because they're persuaded by different arguments. They discover an argument that they've never seen before. Or, even more obvious in the case of media, they watch something that really delights them, that really gives them joy, that in many cases informs them, informs them correctly, about new information. Like, I watch 3Blue1Brown on YouTube, a very informative channel.

And many people like them, many people like plenty of the hundreds, the millions of other informative channels on that site. And so when you really look at the process, when you look at the process of differentiation, of one person saying, I like this channel, and someone else saying, I like this other channel, I think it's very hard to argue that's negative. And the implication when it comes to this consensus question is that consensus is really stultifying. Consensus is very limiting, both in our own personal lives, especially when it comes to media, and also in what we can accomplish as a country. I really think one of the things we'll realize post-AI is that we were sitting through a kind of nap era. We were sitting through a kind of coma, almost, of too much consensus.

And that actually we needed more. There's a bad version of it, which I think is the previous cycle of elections, which is polarization along the same old lines, but just angrier, right? The same stupid fights, but now with the rhetoric amped up to 11. Whereas the different kind of polarization, which I'm optimistic about, is more like polarization in terms of what we even talk about. What even is the issue that you care about? Bringing up basically 50, 100, thousands of other issues that voters discover they really care about.

Jim: Yep. Yep. And, you know, regular listeners to the show know that I am a proponent of something called liquid democracy that radically breaks down the tendency towards tribal clumping and makes every issue a standalone issue, which each voter can vote on if they want. But more likely they proxy their vote in a given domain, say defense or healthcare, environment or education, to someone who they trust, and they can revoke that proxy at any time, and that person can pass their proxy along. So it's a force to break up the clumping into tribes. It therefore becomes feasible to have a pro-abortion-rights, strong-gun-rights person who is generally skeptical of big business and is often skeptical of international adventurism, but realizes that there are some things worth fighting for in our current system. There is no party that represents that voter, right? I'm approximately that voter, right? And liquid democracy allows all the people to have their positions, and those positions that command a majority win on a one-off basis, not because people have clubbed together. And I think there is a potential point of destination for this fragmentation where basically every voter is their own political party, essentially, and yet they still have almost equal weight to everybody else because of the institutional change of liquid democracy. And by the way, if people want to learn more about liquid democracy, check out my Medium article, An Introduction to Liquid Democracy.
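A minimal sketch of how vote resolution might work in one domain of a liquid democracy, with chained, revocable delegations; the voters and votes are hypothetical:

```python
# Sketch: each voter either votes directly or delegates in a given domain
# (say, healthcare); delegations chain, and revoking one is just deleting
# its entry. Cycles and missing votes resolve to abstention.
def resolve_vote(voter, delegations, direct_votes, seen=None):
    """Follow the delegation chain until a direct vote (or a cycle)."""
    seen = seen or set()
    if voter in direct_votes:
        return direct_votes[voter]
    if voter in seen or voter not in delegations:
        return None  # cycle or no vote cast: abstain
    seen.add(voter)
    return resolve_vote(delegations[voter], delegations, direct_votes, seen)

delegations = {"alice": "carol", "bob": "carol", "carol": "dan"}
direct_votes = {"dan": "yes", "erin": "no"}

tally = {}
for voter in ["alice", "bob", "carol", "dan", "erin"]:
    choice = resolve_vote(voter, delegations, direct_votes)
    if choice:
        tally[choice] = tally.get(choice, 0) + 1
print(tally)  # {'yes': 4, 'no': 1}
```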

Brian: Right. I love liquid democracy. And not only could you delegate to a person, right? You could delegate to an AI version of yourself.

Jim: Yep, you could. Yeah, you could take everything you've got, dump it in there, and it would probably do a not terrible job if you had a lot of content. And most people don't. But I know that if you took all the transcripts of all my podcasts and loaded them up, which I have done, actually, loaded them up into a latent semantic vector space database, it actually does a pretty good job of emulating me.
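A hedged sketch of that kind of emulation, retrieve the most relevant transcript chunks for a question and answer in character, assuming the OpenAI client and the sentence-transformers library; the file name and model names are illustrative:

```python
# Sketch: embed transcript chunks, retrieve the top matches for a question,
# and let the model answer in the speaker's voice from those excerpts.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = OpenAI()

# Hypothetical corpus: all past transcripts, split into paragraph chunks.
chunks = open("podcast_transcripts.txt").read().split("\n\n")
chunk_vecs = embedder.encode(chunks)

question = "What do you think of liquid democracy?"
scores = util.cos_sim(embedder.encode(question), chunk_vecs)[0]
top = scores.argsort(descending=True)[:5]
context = "\n\n".join(chunks[int(i)] for i in top)

answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer in the voice of the speaker of these "
                    "excerpts:\n" + context},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```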

Brian: Right. And on more complex issues, you can have, first of all, a model that is based off of some value set, some publicly auditable data set that you trust. But also, this is a practical decision that many voters make. They only have so many hours in the day. They can say: here are my values, I don't understand the technicals. First of all, they can ask: can you simplify and explain the technicals to me from a kind of neutral perspective?

Or, even better, from each of the perspectives of each of the candidates. That's one thing that's possible. But another thing that's possible is: here are my values. You describe your values and you ask, where would I land on these policies? Where would I land between these candidates? And of course, for someone who has more time, someone who has more willingness to dive deeper, I recommend that they dive deeper, but there's only so many hours in a day. Most people are much busier, and that's how they're making their votes right now. They're voting on a much busier basis.

They’re voting on a basis that’s really, you know, based on the kind of obviously biased explanations from the candidates or from their, you know, allied media. And I think at least that kind of AI guide and AI kind of like tour guide, if you will, to the different positions or to kind of help you decide on these specific issues would be immensely useful in that situation.

Jim: Absolutely. And I think that's a really interesting idea, because I think you could probably do something in combination with something like liquid democracy that would not be terrible, even with today's relatively rudimentary technology. It's really quite interesting. That's a really clever idea there, Brian.

Brian: Maybe I should actually try to get funding for this. This would be a very fun thing to try to do, which is basically a mock AI poll. So here's the kind of chain: we get some money to do this, we hire a normal polling firm, we run the poll, and then we run the simulation. We run a trial where we ask the voters all the normal questions. Then we give them a ChatGPT console and we say, here's a prompt, just describe your values, and we'll see how ChatGPT votes for you. And we can just compare the results.
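A minimal sketch of the simulation step of that chain, assuming the OpenAI Python client; the respondents, candidates, and model name are hypothetical:

```python
# Sketch: each respondent describes their values in free text, and the
# model is asked to cast their vote, to be compared against the real poll.
from openai import OpenAI

client = OpenAI()

candidates = ["Candidate A", "Candidate B"]
respondents = [
    "Fiscally conservative, pro-abortion-rights, skeptical of foreign wars.",
    "Union household, worried about housing costs, strong environmentalist.",
]

for values in respondents:
    vote = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"A voter describes their values as: '{values}'. "
                       f"Which of {candidates} would they most likely "
                       "vote for? Answer with the candidate name only.",
        }],
    )
    print(vote.choices[0].message.content)
```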

Jim: Yeah, and particularly if you did it with people who had a fair bit of public record, right? Somebody who had at least 1,000 tweets maybe, or something, I don't know what the criteria would be, that you could feed into the model to predict them.

Brian: The worry with that is that if you choose people with 1,000 tweets, we'll get a very unrepresentative sample of the voting public.

Jim: Yeah, that is true, but you may well get something that’s kind of useful.

Brian: Right, yeah, it'll be a better depiction of their actions rather than just how they describe themselves. Maybe this is alpha, maybe this is the start of a new company: a polling firm, but one that uses AI to correct for the various self-report biases that have plagued polling in the past few years.

Jim: Or maybe it doesn’t use humans at all. I mean, it takes you all the way.

Brian: Oh my goodness.

Jim: It basically sucks down the tweets and other online content from 100,000 people, adjusts them for whatever it believes to be the statistical irregularities of the particular sources, then creates smart agents for each of those using latent semantic network databases coupled to language models, and then starts posing questions to them at high speed.

Brian: Yeah, that’d be interesting, yeah.

Jim: That's doable. One of the things when I work with LLMs, and I have to try to educate people about how they're different from code, I make the following very important distinction. Brian, you write code, I write code, lots of listeners write code, but a lot of people don't. When you write code, if there's any error at all, a missing semicolon, it won't run at all most of the time, right? How many times have you missed a brace in your code and gotten 17 error messages, or 700 error messages? On the other hand, if you send something to a large language model, you will get a result. It may not be what you want, but you will get something. That allows for a hill-climbing approach to optimization, which is not necessarily the case with code, and it makes optimizing prompts much more susceptible to evolutionary techniques than code is.
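A toy sketch of what such an evolutionary approach to prompts could look like; the mutation list is illustrative, and the scoring function is a stand-in for a real LLM judge:

```python
# Sketch of hill climbing over prompt variants: mutate, score, keep the
# fittest. In practice score() would run the prompt through a model and
# have an LLM judge rate the output; here it is a toy stand-in.
import random

def mutate(prompt):
    tweaks = [" Be concise.", " Use vivid imagery.", " Address a skeptic."]
    return prompt + random.choice(tweaks)

def score(prompt):
    # Stand-in for an LLM-judge call rating the prompt's output 1-10.
    return len(set(prompt.split())) + random.random()

best = "Write a scene where a detective confronts the suspect."
for generation in range(10):
    variants = [mutate(best) for _ in range(5)] + [best]
    best = max(variants, key=score)  # keep the fittest variant
    print(f"gen {generation}: {best[:70]}...")
```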

Brian: I think there's actually a very recent paper that I saw on arXiv applying statistical machine learning to prompt engineering for LLMs.

Jim: Yep, yep.

Brian: Yeah, you can do fun stuff like that. The LLMs themselves, the prompts themselves, are a statistical process. So you can just use normal machine learning techniques, with the LLM as the kind of validator.

Jim: In fact, in our scriptwriting program, we have numerous models that do different parts. We use OpenAI's GPT-4, the older version of GPT-4, GPT-3.5 Turbo, and Claude 2; we've just added Bard, and we're about to add Falcon 180B. We think that different models are better for different parts of our problem set. But how do you prove that? So what we're going to do is build an AI critic, which looks at the various outputs and gives thumbs up, thumbs down, or rates them from one to 10, and then we just start feeding it thousands of prompts to write thousands of artifacts. And while I don't expect the AI critic to be right every time, I do suspect that if the AI critic is well designed and you have a thousand data points, the mean will actually be pretty significant.
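A minimal sketch of that evaluation loop; the generator and critic here are stand-ins for real API calls, and the model names are placeholders for whatever backends are wired in:

```python
# Sketch: each model writes many artifacts, a critic rates each 1-10, and
# we compare per-model means. Individual ratings are noisy, but at N=1000
# the mean becomes a usable signal, which is the point made next.
import random
import statistics

def generate(model, prompt):
    # Stand-in for a real API call to the named model.
    return f"[{model}] draft for: {prompt}"

def critic_rating(artifact):
    # Stand-in for an LLM critic; real use would prompt a judge model
    # for a 1-10 score and parse its reply.
    return random.randint(1, 10)

models = ["gpt-4", "gpt-3.5-turbo", "claude-2"]
prompts = [f"Scene idea #{i}" for i in range(1000)]

for model in models:
    ratings = [critic_rating(generate(model, p)) for p in prompts]
    print(model, round(statistics.mean(ratings), 2))
```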

Brian: Yeah, one thing that might be helpful is that there are various public data sets, and I'm sure there are private data sets if you're willing to pay for them as well, that are basically human-graded test sets for more specialized tasks. It's weird how rudimentary some of the methods are; maybe some viewers will think this is silly upon hearing it. But the way a lot of training data for large language models is generated, in initial training, you're basically just trying to match the next text, right? You're just trying to predict the next word, given what already exists. And you have just large data sets taken from the internet wholesale, right?

But more interestingly, if you're trying to optimize the output for specific tasks, often there are data sets that are tagged by humans manually. So: here's going to be a good example, and this is tagged very positive, or it's tagged five stars out of five, or four out of five stars, three out of five stars, and so on. It's basically sorted into buckets by people on Mechanical Turk or on these very low-cost manual online labor sites. And they generate all these ratings, and that's how you get the test data. Surprisingly elementary, but enough to reach very, very good performance on many of these tasks. So don't doubt it, it works.
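For the curious, a minimal sketch of that next-word objective, assuming PyTorch and a toy vocabulary; real training applies the same cross-entropy loss over internet-scale text:

```python
# Toy next-token objective: given the model's logits over a vocabulary,
# the loss is cross-entropy against the word that actually came next.
import torch
import torch.nn.functional as F

vocab_size = 10
logits = torch.randn(4, vocab_size)          # model outputs for 4 positions
next_tokens = torch.tensor([3, 7, 1, 7])     # the words that actually followed
loss = F.cross_entropy(logits, next_tokens)  # lower loss = better prediction
print(loss.item())
```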

Jim: Yep, and that gets to the thing I always say, which is: if there's some signal in the noise, then all you have to do is increase the N, right?
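A quick simulation of that point, using a hypothetical noisy critic: the standard error of the mean shrinks roughly as one over the square root of N:

```python
# Sketch: a critic that is right on average but noisy per rating. As N
# grows, the mean converges on the hidden signal.
import random
import statistics

random.seed(0)
true_quality = 6.0  # hidden signal

def noisy_critic_score():
    return true_quality + random.gauss(0, 2.0)

for n in (10, 100, 1000, 10000):
    scores = [noisy_critic_score() for _ in range(n)]
    mean = statistics.mean(scores)
    sem = statistics.stdev(scores) / n ** 0.5
    print(f"N={n:>5}  mean={mean:.2f}  standard error={sem:.3f}")
```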

Brian: Yeah, yeah, exactly, exactly.

Jim: Yeah, and that's kind of very interesting. Again, it's a threshold effect of large language models, because they make the equivalent of thought really, really, really cheap, you know, a fraction of a cent if you use Turbo 3.5, right? Things that you just could not afford to pay people to do on Mechanical Turk, you can have good old OpenAI crank on for hours, and it's worth your while.

Brian: Yeah, yeah, there are other results that people are finding. I think Anthropic published a paper on this; this is not super new news, it's been floating around ML papers for a while, but people don't really know how convergent the algorithms are, how close to optimal they are, even with the existing data. So someone like Scott Alexander, who I think is fairly well informed on these things, even he, a few months ago, had an article where he described what was at that point a new Anthropic paper about one process where they applied the AI's corrections to its own outputs, and that improved the accuracy on some things, improved the friendliness, maybe that was the thing it was measuring, right?

And Scott Alexander said it's surprising that you can just apply the same thing twice and it becomes better. But I think this is not actually surprising. People think of LLMs as a kind of statistical optimization algorithm, the same way you think of gradient descent, or even how Amazon manages its logistics, right? Or how Amazon manages its demand forecasts. That's a statistical method for trying to get as close to the number as possible.

But if you really ask yourself what that means for an LLM, you realize it's a lot more similar to K-means or something, taking some kind of statistical average across a large space of text, right? And if you think of it that way, as taking a kind of weighted mean, it makes a lot more sense: oh, I took the weighted mean twice and that actually made things better. That's more understandable than something like: I optimized for the best thing twice and it got better with the second optimization. So this shouldn't really be surprising; it should just clarify a kind of misunderstanding that I think people have about LLMs.
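A hedged sketch of the kind of self-correction pass being described: generate, self-critique, revise, and optionally repeat. It assumes the OpenAI Python client; the prompts and model name are illustrative, not the procedure from the Anthropic paper itself:

```python
# Sketch: draft, ask the model to critique its own output, rewrite against
# the critique, and apply the same pass more than once.
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

draft = ask("Explain why the sky is blue, in three sentences.")
for _ in range(2):  # applying the same pass twice can still improve things
    critique = ask(f"Critique this for accuracy and friendliness:\n{draft}")
    draft = ask(f"Rewrite the text to address the critique.\n"
                f"Text:\n{draft}\nCritique:\n{critique}")
print(draft)
```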

Jim: Absolutely. In fact, in our script writing program, I keep coming back to it because I happen to know a lot about it. We have critics that come back and tell you how to improve your movie, right? Your script.

Brian: Yeah, yeah, that’s great. That’s great.

Jim: And you keep applying them, and sometimes it drifts off into the ozone, but other times it keeps getting better, right? We have three different varieties that have different attributes: one that's very exploratory, one that's about general optimization, and one that allows you to select very specific attributes of your script to optimize on. And people do.

In fact, we even have something called Jumpstart, which says: from the very beginning of writing the movie, put your movie seed, the core idea, in there, and then set the number of times you want it to cycle through the AI critic before you even see it, and...

Brian: Interesting

Jim: One of my users told me he set the number to 31 times, which would probably take an hour to run.

Brian: Oh, that’s not that bad.

Jim: He said the thing that resulted was not exactly what he had in mind, but it was actually pretty good.

Brian: I don’t know, maybe, you know, maybe it’ll be like those things where you translate it 30 times and it puts out gibberish.

Jim: Sometimes that'll happen too. But the other thing that's worth noting in this particular scenario is that you're not applying the critic to the same artifact each time. You're applying it to the artifact that was revised according to the recommendations. And so you're…

Brian: Right, right, it’s iterative.

Jim: Yeah, you’re doing a trajectory through some very, very high dimensional space, but constrained by what an LLM in its persona of a critic replies to each iteration. Very interesting. I think there’s some very interesting things there in open-ended discovery that we are just beginning to understand because such things were not practical when you had to have a human in the loop to do the evaluation.

Brian: Yeah, I mean, that's amazing. That's what we need right now. That's what we're missing. We're missing the product-market fit. We're missing the actual application tests. I'm sure there's some level of testing at OpenAI and all these other companies, but real on-the-ground tests are what we need to really make this useful.

Jim: Exactly.

Brian: So I'm very happy to hear about what you're doing.

Jim: Yeah, let me do one final bitch here about something in the AI space, which I expect you'll probably agree with. Okay, awesome. Which is: we just added GPT-4 32K to our program. And we were all very excited, all right, bigger context, we can do cooler things we couldn't do before. But the goddamn bastards have made the nanny rails even worse. Because I'm running the software, I do a lot of testing. I have a little test seed to generate a movie. It's a very simple murder romance triangle: guy gets a girlfriend, wife kills girlfriend, wife gets away with murder, right? Something like that. It's like three sentences. And you can generate a whole movie from that thing with the program. Well, every other model we've used never objected, but when I put that into GPT-4 32K, it wagged its finger at me. And I go, Jesus Christ, this is probably the oldest story in humanity, right? Somebody cheated on somebody, and somebody got jealous and murdered them, right? There are few more basic human stories than that.

GPT-4 32K will not allow that as a user prompt to a query, even though it's totally surrounded by all kinds of apparatus saying: we're writing a movie, this is part of a process, blah, blah, blah. It ignores all that and just wags its finger at me. And that really annoys me. In fact, that's one of the reasons I've just funded a little bit of work to link Falcon 180B into our program. Because even though it's got some negatives, it's also got some positives, and what I hear is that it has fewer nanny rails than any other major model out there.

Brian: Yeah, and I also think there are definitely ways to bypass it. There was an arc in my AI writing where I was just reporting on all of the OpenAI censorship updates. I do think it's actually gotten less bad. It's definitely less bad than it originally was, at least when it comes to ChatGPT. Did they make the 32K version worse? That seems a little surprising to me.

Jim: I went back and tested the same prompt on all our other engines, and it went through all the others: it went through Claude 2, went through Chat Bison, went through GPT-4, the March version and the June version. So it went through everybody, but the one that coughed on it was the 32K. Now, of course, that's one data point, so maybe I'm over-indexing on one data point, but I sure would love to get a well-trained language model with no nanny rails, right? Just react to what I put in, irrespective of what your mommy tells you you're supposed to say.

Brian: I did a post outlining why this happens, and kind of giving a logic to some of the jailbreaks. I think it was called, like...

Jim: My favorite, and I actually was one of the first people to popularize it: Do Anything Now, the DAN jailbreak.

Brian: Yeah, yeah, that one was great.

Jim: I posted a whole bunch of amazing things, but they shut that down after about two weeks.

Brian: Yeah, Why It's Easy to Brainwash ChatGPT. It's pretty aged.

Jim: It's not easy anymore. It's really hard to find a good jailbreak. I don't even have a good jailbreak for GPT. Do you have a good jailbreak for ChatGPT-4?

Brian: Not a universal one, but for any problems I run into, I just play with it on the fly, and I can usually get the bypass.

Jim: As I said, we break through a lot of stuff by saying: hey, we're a team of people writing a movie script, this is part of a process, and here's how it fits into the process. It seems to give you quite a bit more slack when you give it that fictional indirection, but the 32K was wagging its finger at me the other day. I don't like that.

Brian: We’ll see, yeah.

Jim: All right, any final thoughts before we sign off here?

Brian: No, it's been great. The main thing that people are missing when it comes to AI regulation is that most of the people who want to regulate AI want to do it for tribal censorship reasons. There's a very smart minority that I think people pay way too much attention to, who are just unrepresentative of the people actually with power in Washington. I think that's the number one correction. Most of the people just want to make AI worse or outright ban it. They don't want to do anything sensible.

Jim: Unfortunately, that's the case. And fortunately, the only good thing you can say about the American political system is that it's pretty well stopped up with gridlock at the moment.

Brian: Yeah

Jim: So hopefully, just like with the internet, they won't be able to do a goddamn thing about it until it's way too late.

Brian: Yeah, I mean, that's what Marc Andreessen hopes.

Jim: Indeed. All righty, I’m going to stop her right there. It was a great chat with you, Brian. Look forward to having you back on in the future.

Brian: All good.