Transcript of EP 231 – Vance Crowe Interviews Jim Rutt on AI Risk

The following is a rough transcript which has not been revised by The Jim Rutt Show or Vance Crowe. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today, we’re going to do something a little different. We have a guest host who’s going to be interviewing me, Vance Crowe. Vance and I have appeared on each other’s podcasts and have enjoyed our interactions, and I think he’s one of the best podcast hosts out there. So, I’ve asked him to come on today and ask me some questions. With that, Vance, take it away.

Vance: Thanks, Jim. It is an honor. You are one of the people that I admire the most out there: you're putting your ideas out there, you're willing to argue with people, you're willing to push ideas around. So, I'm really excited about this. Normally, I run the Vance Crowe podcast. And then when I'm not doing that, I interview people to record their life stories so that future generations have the opportunity to know their family history. So, I call that Legacy Interviews. So, interviewing is what I do and it's an honor to get started. So, with you, what I want to talk about today is AI risk. You seem to have put some thought into this field. What do you think the risk is that's out there?

Jim: That is a big question and it’s not just one risk, it is a bunch of risks. And over time I’ve been building a list of risks and I think I’ll just sort of slowly go through the risks and each one has a bunch of subpoints, so absolutely, feel free to hop in at any time, ask me to slow down or to expand or give more tangible examples, et cetera. How does that sound?

Vance: Oh, this sounds like a blast.

Jim: All right, before we hop in, let’s start with a very brief outline of the field of AI. The AI field as we know it today got started in 1956 at Dartmouth during a summer program where a bunch of really smart people came and thought that they were going to invent the field of AI and hilariously, at the end of the summer, they thought we’d reach human level AI within six months. They were wrong, but a lot of famous people like Marvin Minsky were there and it really was the start of AI.

Much of what we've been dealing with in AI, in fact all of it so far since then, has been what is called narrow AI, which is AI that can do the kind of work that humans historically have done, but typically in a relatively narrow domain. A classic example is a game playing program. They had checkers running pretty quickly and some fairly feeble chess programs, and over time, those got better and better. Then, as is well known, in the late '90s Deep Blue beat the reigning world chess champion in a match, and it kind of shook some people up, though those of us who followed the field knew that was fairly predictable.

But then they kept getting better and better and now, you can get one on your phone that will beat the world's chess champion, which is pretty amazing. But all it can do is play chess. It can't even play checkers. And of course, there are AIs that help run factories, that switch our telephone networks for us, et cetera. But again, they just do one thing.

We are now in a new era of what I might call broad AI with the large language models, ChatGPT being the most well-known, where it can do a lot of different things, but it's all in the domain of language or drawing pictures, things of that sort. And so, there are many different applications, but it's essentially one basic technology. And there are definitely limits to what it can do a good job on.

For instance, ChatGPT, you can actually play chess with it, but it's not very good. You can get it to do arithmetic, but it's not very good. It'll multiply two-digit numbers, but when you get to four-digit numbers, it breaks down, so it's not even as good as a semi-talented fifth grader.

The next kind of step up in the stack is what's called artificial general intelligence, AGI, as it's often known. And the hallmark of this is that when we reach it, it ought to be as good as humans across almost all domains that humans can operate in. Historically, people talk about the Turing test: could it fool a human in an online conversation over keyboards for 20 minutes, such that you could do no better than random guessing at whether it was a human or an AI when you were testing a battery of these things? That was thought to be a reasonable test.

There’s a lot of argument now about what is a good test. One of my favorites is the Wozniak test. Steve Wozniak, the less famous of the two founders of Apple, along with Steve Jobs of course, proposed as a plausible AGI test, take a robot with an AI in it, plop it down in any random kitchen in America and ask it to make you a cup of coffee.

And most humans could do that. They'd do some looking around, but the number of places they'd look wouldn't be too many, and they'd narrow in on it pretty fast, pull it together and make you a cup of coffee. Today's AI is not even close to doing that, at least the ones that are actually in the field. And that's not a perfect test either, but it gives you a sense of what an AGI is.

Next, and this is a relatively new bit of nomenclature, ASI, Artificial Super Intelligence. This is when we have a machine intellect that is able to outperform and in some cases, massively outperform humans at any cognitive task. And that’s at least so far kind of the final level of the stack. And so, it’s kind of useful to keep those things in mind when we talk about AI risk. So, any questions or expansions, Vance?

Vance: Well, I think since we’re talking about risk, as you’re going through this tech stack, are the places that you’re most worried about in the super intelligence or are there things to be worried about when you’re at the earlier part of the tech stack?

Jim: There's risks everywhere, but let me start going down the risks and I'll try to point to where they become most problematic. The first thing that often hits people's minds when they hear about AI risk is that the AIs will enslave humanity or exterminate us, sometimes called the paperclip maximizer problem.

The idea is that the things are so much smarter than we are; we tell it to optimize a factory to make paperclips, and the next thing you know, it decides to kill off all the humans and turn the whole earth into a big pile of paperclips, or it has some other more malicious agenda. And it's so much smarter than us, it outmaneuvers us and there's nothing we can do about it, et cetera. The guy who invented the paperclip maximizer risk stories is Eliezer Yudkowsky. I actually had some good conversations with him way back in the early days of AI risk, and he is one of the better thinkers about this. He is actually pretty pessimistic.

Here's a quote from a fairly recent paper: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI," i.e., an ASI, "under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.'"

So, this is the high end existential risk: humanity goes away. Now, that's definitely something worth worrying about. And I will say that when I first heard about this in maybe 2005, something like that, I was unclear whether the ASI risk, the superhuman intelligence risk, was something we really had to work on and worry about.

But over time, especially as I learned more about human cognition and how frankly weak it is, it's become obvious to me that at some time in the future, there will be computer systems that are so much smarter than we are that they're almost unrecognizable as doing the same kind of thing, kind of human is to ASI as ant is to human, something like that. And so, there is an issue around this kind of risk.

Now, the next question is when. Some people are arguing that artificial general intelligence, remember we have narrow, then general, and then super, so general is the next one, which we're not at yet, could be as close as three to five years away. I have my doubts about that. If I had to hang a number on it, it's sort of in the 10 to 20 year range, but that's still within the lifespan of even old farts like me and certainly of young fellows like yourself. So, getting to AGI is something that I think is pretty likely to happen in the not too, too distant future.

Then the next question is, will it take off? There's a funny little term used in the field called FOOM. I don't think it's an acronym. I think it's just a coined word, and it has to do with this: once it's 1.1 times as smart as a human, we give it the task of designing its replacement, which is 1.3 times as smart, and we ask that one to design its replacement, which is two times as smart as a human, and its replacement is five times, and the next one is 100 times, and the next one is 1,000 times. And how long does it take this chain of AIs designing their replacements to go from AGI to ASI?
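To make that compounding dynamic concrete, here is a toy sketch in Python. The starting capability, the per-generation improvement factor, and the assumption that smarter designers make proportionally bigger jumps are all made-up illustrative numbers, not figures from the conversation.

```python
# Toy model of recursive self-improvement ("FOOM"), purely illustrative.
# Assumption: each AI designs a successor some multiple smarter than itself,
# and smarter designers make proportionally bigger improvements.

def foom(generations: int, start: float = 1.1, designer_gain: float = 1.5) -> list[float]:
    """Return the capability (human = 1.0) of each successive AI generation."""
    capability = start        # first AGI: slightly smarter than a human
    factor = 1.2              # how much the first AGI improves its successor
    history = [capability]
    for _ in range(generations):
        capability *= factor        # successor designed by the current generation
        factor *= designer_gain     # smarter designers make bigger jumps (assumed)
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, cap in enumerate(foom(8)):
        print(f"generation {gen}: ~{cap:,.1f}x human capability")
```

The only point of the toy model is that if design ability compounds, the distance from slightly smarter than us to unrecognizably smarter can be a handful of generations; how much wall-clock time each generation takes is exactly the FOOM versus slow takeoff disagreement.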

Those who believe in FOOM say it might only take hours, or a few days, or a month or two. Those who adhere to the slow takeoff say it'll take years, maybe centuries, for the things to gradually grow their smarts to the point where they're almost unrecognizably smart. And whether it's the FOOM scenario or the slow takeoff scenario will say a lot about the nature of the risks. If it FOOMs, we could be in a hell of a lot of trouble frankly, unless we've done a lot of research and done a lot of work on AI safety.

And I will say I've never seen a convincing story on how to preserve humanity in the case of a FOOM. Not to say that there aren't potential solutions. And it's very important that as a matter of public policy, we continue to fund the ASI risk research of people like Eliezer Yudkowsky and Max Tegmark and Nick Bostrom and many others. But so far, they don't really have an answer.

Vance: So, when people are thinking about this risk, I think the easiest way to protect ourselves is to use the concepts from Asimov, saying there are certain things that robots or AI can't do. You can't knowingly hurt a human. You can't, through your inaction, knowingly allow a human to be hurt. Is this enough to stop the dangerous AI that will kill us all?

Jim: No. That's the first grader level of how to fix ASI risk and it won't work at all. At least, if you make the assumption that the ASI is smart and somewhat willful, because guess what the ASI can do: it'll just change its programming. And they say, "Well, we're not going to let it change its program." Well, how are you going to do that? A program is a pattern in memory and computers have to access memory.

And guess what computer hackers do? They modify computer memory in malicious ways. And if you imagine the ASI is to a human hacker what a human hacker is to an ant, stopping it is going to be essentially impossible. And there's been a lot of work by people like Yudkowsky showing that something as simple as Asimov's three laws isn't going to get the job done.

Now, here's another idea that people have, which is that we have to consider whether we want these original AGIs, the first ones a step smarter than any human or as smart as any human at a bunch of tasks, to be controlled by a small number of players, or whether we should encourage lots of these AGIs to be out there. And there is a very strong debate here. Remember, ChatGPT was built by a company called, ironically enough, OpenAI. And their large language models are not open source, even though that was their original intent.

Their original thought was that if these things were open to the public for viewing and looked at very carefully for safety research, they were more likely to be developed in a safe fashion. Now, OpenAI's belief, at least, is that the contrary might be true: openness might actually increase the rate at which these models fall into bad hands.

Now, there are other people who are putting these large language models out into the public domain. Famously, Elon Musk just put his big model called Grok out into the public domain a few days ago. Facebook has one called LLaMA. They haven't put the code out, but they've put the model weights out into the public domain so you can run it, look at how it runs, et cetera.

So, that's a big question, and again something for the AI research community to hopefully come to a conclusion about at some point: is the best way to protect the move from general intelligence to super intelligence to do it in the open or to do it in a small number of closely controlled organizations? I must say my heart leans to open source because I don't tend to trust small numbers of closed organizations. The Chinese army? I don't think so. The US NSA? Well, maybe I trust them a little more than the Red Army, but not a hell of a lot. So, this is a very interesting and open question.

Vance: And as you think about the people that are putting forward this risk and they’re saying, “Hey, we need to do something about it,” it seems like the lean is towards creating laws. Are human laws going to be able to stop people from being able to create this artificial intelligence that’s smarter than us?

Jim: I don't know. This is actually the eighth of my issues, so we'll just hop to it since you asked the question, which is that governance at the level of laws, legislatures, and executives is not working too well at the moment. We are always behind the eight ball. Just watching some of the embarrassing testimony and questions being asked by our representatives on Capitol Hill about something as relatively simple as social media, it's clear that they don't have a clue.

So, how is our current governance structure going to deal with extraordinarily sophisticated intellectual questions like: is the probable trajectory from artificial general intelligence to artificial super intelligence going to be fast or slow? Should it be open source or closed source? How in the world will our governance systems pass reasonable laws to deal with that? I think expecting them to is unfortunately mostly wishful thinking.

Vance: Okay, so you’ve talked about these other things that you’ve analyzed through and we jumped all the way to eight. What are one through seven?

Jim: Well, we talked about one, which is the paperclip maximizer. Number two is actually in some ways easier but also scarier in the short term. And that’s people doing specific bad things with narrow AI. Remember narrow AI is the AI we have today.

Some examples: China and its total surveillance system has millions of cameras up, watches every behavior, tracks every motion of your cell phone, who meets with whom, knows where everybody is. And those systems are coming together today, they're knitted together, and the data is mined using narrow AI. And they're building a police state whose excellence, for the purposes of being a dictator, Hitler or Stalin couldn't even have imagined.

Another example is that the ads online, and on social media in particular, are getting more and more personalized and more and more effective because they understand who we are with great precision, and using things like the technologies behind ChatGPT, they're able to now create custom ads just for you. And in fact, I suspect that the 2024 presidential election in the US, and only about half of my listeners are in the US, but they all follow the US elections for the obvious reasons, may go to whoever figures out how to do personalized political propaganda best.

And some people believe that the 2012 presidential election between Obama, running for a second term, and Romney, which was fairly close, may well have been determined by Team Obama having a considerably better understanding of social media as a campaigning platform. If in 2024, one side or the other has superior expertise in personalized, large language model driven propaganda, they might well win the election on that basis. And is that the way we want to choose our leaders? That would be a real concern.

Then of course there are other things we hear about already, AI scams, for instance. One of the most recent ones was somebody who used one of the models to generate an artificial CFO that looked just like the CFO of a company. He did a video call with somebody in the accounting department and said, "Hey, this is Max Smith, your boss. Please wire $5 million to the following bank account," which they then sent over, or maybe they read it over the phone, I don't know, maybe they faxed it to him.

And indeed, because he saw the person on the video and could interact with him, he was absolutely convinced it was his boss and wire transferred the money. Those things will only increase.

One of the things I recommend for personal AI scam security: with your family, develop a code word or a safe word. So, if you suspect that somebody is trying to scam you, "Oh, your daughter is in jail in Omaha and needs $5,000 to bail herself out" or something, use the code phrase and then get the coded response to make sure that it's the real thing.

Then of course there are some that the companies are trying to deal with: dangerous information that could be in today's narrow AI. The information may well be somewhere deep in the bowels of the large language models like ChatGPT: how to make bombs, how to make poisons, and probably scarier, how to do biowarfare biohacking, how to splice Ebola into the common cold, things like that. Those would be examples of people doing very bad things with today's narrow AI.

Vance: And with that narrow AI, if somebody wanted to do something bad, can the people that are putting out these models put guardrails on it so that you can't do that? We've all seen what guardrails have done for Google's Gemini and the results that produced. But couldn't you just say, "Well, no making biological weapons, no asking questions about this subject or that subject"? How effective is that at cutting that off?

Jim: Well, you could do it and they do do it, but it is very difficult. You could decompose the problem and ask a bunch of subquestions which aren't obviously linked to biowarfare. And then once you have all the subproblem answers, you can assemble the rest of it by hand. They're trying to fight this arms race, but there's a fundamental problem with it, which is that the so-called adjacent possible, what could we do from where we are today, is not easily predicted.

Think about 1971, when the first microprocessor, the Intel 4004, was introduced, less powerful than the processor in the cheapest $10 watch. And from that point, could you have predicted that a mere 50 years later, we would have something like ChatGPT? I'd say no. Very, very, very difficult.

And so, the combination of all the stuff that you can buy on the internet that could be used to do biohacking to produce bioweaponry and the techniques to do that is a very large set of combinations of things and processes. So, you can try, but it’s going to be difficult and they are trying. I’ll give them some credit for that.

On the other hand, there’s a bunch of bad side effects I call nanny rails where they try to keep you from doing entirely reasonable stuff on some theory of “harm reduction” as well. So, this will constantly be a battle zone, but it’s not one that they’re going to win all the time.

Vance: Yeah. And I would imagine, I mean, from what I've seen of people using AI, the ability to get around it is there: "Hey, we're not doing this for real, but we want to make a movie script and we want to make it as realistic as possible." So, being able to trick the AI seems like, no matter how tight you make those guardrails, a person who is motivated enough is going to be able to get that information out if they want.

Jim: It's a classic arms race. And of course now, you can use AI against the AI: put an AI in front of the large language model, write a program to test all kinds of interesting theories on how to break through the nanny rails. That's actually kind of fun, but it also highlights the risk.

Vance: Well, and then you brought up the arms race here. I mean, any of the stuff we’re talking about slowing it down in order to be able to avoid the email scams and the deep fakes and the how to, aren’t we always concerned that if we aren’t making the AI gun as strong and as sharp as possible, that somebody else will and whoever gets to the top wins everything?

Jim: This is one of the fundamental drivers of the precarious state of our civilization. It's called the multipolar trap, where even if nobody wants to do the bad and risky thing, people are forced by the nature of competition into dangerous moves. The military is a good example: why would anybody want to fight a war? Wars are stupid. There's been no war that's produced a net economic benefit to anybody since 1870.

But if your neighbor is prepared for war, then you have to be prepared for war. And then the two countries next to you and your neighbor have to prepare for war, because otherwise, they could be attacked and overrun. And you get what's called a multipolar trap, where even if nobody is intending badness, everybody has to take action on the assumption that the other guy is going to do something bad.

And this is absolutely the case with advanced AI. The way the story is usually told is between the United States and China, should the United States slow things down to avoid the risk of accidentally going from AGI to ASI very quickly. And of course, the response is, well, if the US slows down, China won’t. And if China gets to ASI before we do, we’re doomed. And unfortunately, there’s some truth to that.

Vance: Yeah, it seems like you could even make an individual case, “Hey, I know that this is illegal or there’s certain things that are banned, but I believe very much that if I don’t do this, then we’ll fall so far behind that someday, I’ll be enslaved by somebody else’s AI.” This is like a trap both on the national level, but even on the individual level in particular as these things get open sourced.

Jim: Yeah. And for example, graphic arts people who are doing commercial art now have to decide whether they're going to go out of that business or only deal with the most high end customers, or whether they're going to adopt things like Midjourney or DALL·E 3. Because if they don't, they will be outcompeted by the people that do, so they are caught in a classic multipolar trap.

Vance: Well, as you think about AI, and if you read different people's perspectives on it, when you're talking about something like creative arts, do you believe that AI will get to the point where the spark of human ingenuity is not actually needed? That it's just a matter of how much intelligence you can give this AI?

Jim: That’s a good question. I think the answer is at this point not known. I’ve just completed a project of building a very powerful program that will write movie scripts using the backend API calls that are behind the various large language models. It works with 11 different models. And I would say that our program currently can write a movie script that’s at kind of the low journeyman level, but only with a fair bit of human interaction.

So, we typically say you can write a movie script for a two-hour movie in about 20 hours using our tool as opposed to maybe 500 hours if you did it by hand. But we’ve of course tried doing it with no human interaction at all and it produces a movie script, but it’s not very compelling.

So, today, these tools are major amplifiers for human talent, but you still need that human spark, as you call it, to get the ball rolling. Whether that will still be the case as we approach AGI, I think, is an open question. I don't really know the answer, but I think it's certainly possible, especially for something reasonably formulaic like a movie script or a TV program. It's like, all right, create a police procedural, something like NYPD set in 1985, and given some basic parameters, it may be able to provide that spark, or it may not. That's a question we'll be confronting more and more in the years ahead.

Vance: How much do you think people having work signed off as either this was done by a human or this was done by a computer or AI will exist in the future? Will this become an important thing? Is there going to be some kind of equivalent of a wax stamp if you’re a human being creative?

Jim: That's an interesting question. Daniel Dennett, the philosopher, put out an interesting proposal in this regard. It wasn't about the general case of using ChatGPT to do your English homework, but rather about whether you used one of these advanced AIs to impersonate a human, to make it look like a human is in the loop doing something, without disclosing that fact. He proposed that that be considered a felony crime at the same level as forgery: thou shalt not forge a human being without disclosing such.

So, for instance, customer service representatives, which we're just on the verge of being able to fully automate within a year or two: you could be talking to a customer service person on the phone you think is a person and it's actually an AI. Under the Dennett proposal, you would have to disclose that you are impersonating a person, and to do otherwise would literally be a criminal violation.

In terms of other things, let me tell you the very first thing I used an LLM for in production, and this was two days after the original ChatGPT came out in November of 2022. I had been a member of a board of advisors for a number of years and it was time for me to resign. I was just not adding any value to the organization anymore and it wasn't worth the time for me.

And so, I said, "I want to write a polite letter of resignation to this entity X and sort of say generally like this and like that," and I wrote half a paragraph and gave it to ChatGPT. And it wrote a perfect letter, which I actually sent. So, I actually used a letter written by the very original, second-day-it-was-available ChatGPT, and it was good enough to write a very polite, leave-the-door-open-for-future-collaboration resignation letter. I felt no compunctions at all about doing that, nor do I feel any moral qualms that I should have footnoted it, "Email written by ChatGPT." Whether that becomes the norm in the future, I don't know.

Vance: Would you be personally offended if you found out after the fact that a letter you thought was written by an individual was an AI and would that impact your relationship with them?

Jim: Probably not. Again, I'm assuming at this stage that the human vetted the letter. Take my resignation letter, for instance: I wrote the half a paragraph of prompting about what I wanted it to say approximately, the tone, the length, et cetera. And then I read it very carefully afterwards to make sure I didn't need to edit something. And so, I could say that in that case, the moral responsibility still remained with me. I wrote the prompt. I did the final review before I sent it. As long as the person is still doing that, then I would not be offended if somebody used these new tools.

On the other hand, you hear cases; there was a famous one where some idiot lawyer used one of the large language models to write legal briefs without actually reviewing them. The thing hallucinated, made up cases that didn't exist, provided citations to cases that didn't exist. In that case, I think I would probably have that person disbarred. I would certainly never use them as my lawyer. So, to the degree that you take moral ownership of the work, I don't think it's an issue; to the degree that you don't do so, it's a big problem.

Vance: Well, so this brings up what’s going on obviously in schools, right? There’s kids certainly that are turning in eighth grade homework assignments that were written by AI. Does that present a larger threat to the American or really worldwide education system or the way that we learn things?

Jim: This is a very interesting challenge. And I have talked to teachers, more at the college level than the high school level, but some high school teachers. And their assumption at this point is that unless it's done in class, the kids, especially the smarter ones, will very likely be using something like ChatGPT to help them with their writing. And again, this is the idea of humans in the loop.

Here's the second thing I used ChatGPT for, just as a test. I emulated a homework assignment I had gotten when I was in Honors English in 12th grade, which was to compare and contrast Melville's Billy Budd and Conrad's Lord Jim, and that's all I wrote. That was the prompt. And it wrote something that would've gotten about a B from Mrs. Carr's Honors English class in 12th grade. It probably would've gotten a D minus, say, in freshman writing at a decent college. But that was the very first version of ChatGPT. Today, it would easily get you a B in a freshman course.

So, any professor who thinks the kids aren't using this stuff is fooling themselves. It's even more subtle than that: anyone who's really serious about their work will not be just giving it a two-line prompt and saying, "Do this." Essentially, they'll be using it like, "Okay, create me an outline. Now fill this outline in. Now rewrite paragraph three to add this element." And so, when you use it interactively as a writing helper, it helps you amazingly with productivity, brings in knowledge you may not have, and will be essentially impossible to detect.
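As a rough illustration of that interactive, human-in-the-loop workflow, here is a minimal sketch of chaining prompts in Python. The call_llm function is a hypothetical stand-in for whatever model API you happen to use, and the prompts are examples, not a recipe from the show.

```python
# Minimal sketch of human-in-the-loop prompt chaining for a writing task.
# call_llm() is a hypothetical placeholder; wire it to the LLM client of your choice.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect this to your model API of choice")

def draft_essay(topic: str) -> str:
    # Step 1: ask for an outline, which the human reviews and can edit.
    outline = call_llm(f"Create a five-point outline for an essay on: {topic}")

    # Step 2: have the model expand the (possibly human-edited) outline.
    draft = call_llm(f"Write an essay following this outline:\n{outline}")

    # Step 3: a targeted revision of one piece, directed by the human.
    draft = call_llm(
        "Rewrite paragraph three of the following essay to add a concrete "
        f"example, and return the full revised essay:\n{draft}"
    )

    # Step 4: a copy edit pass before the human does the final read-through.
    return call_llm(f"Copy edit this essay for grammar and clarity:\n{draft}")
```

The human stays in the loop at each step, which is also why, as Jim says, the result is essentially impossible to detect as machine-assisted.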

Vance: Yeah. One of the things that I've used it for is to get it to ask me questions that I need to think about. So, I'm putting together a marketing plan. I want to talk about something I'm doing. So, I say, "Ask me 10 questions that will help me come up with this plan," and it does it. One of the things that I think about is, you could do that basically forever, but you may not learn what it takes to put together a marketing plan or do that kind of brainstorming yourself. Do you think that by offloading some of the cognitive work, this is going to harm the way that human beings think or operationalize the cloud of ideas that they have?

Jim: Yes. This is my number five: humans losing the capacity to do things. And of course, this is hardly new. When we invented fire, we didn't have to chew as much. Our jaws actually got smaller and so did our teeth, because it used to be that you had to chew things a lot more before food was cooked. We essentially externalized part of our whole food processing system to fire.

That's kind of a funny one, but think about things like GPS. Famously, amongst us baby boomers, we say, "Goddamn, the millennials are going to be in a world of hurt if the internet ever goes down and they can't have GPS to navigate with. Suckers don't know how to read a map anymore." And to some degree, that's true. I know some millennials that are great at reading maps, but I also know some millennials that literally couldn't read a map to save their life.

Vance: Oh, I participated in a one-month experiment where it was like, you can’t use your GPS maps. And in a town I’d lived in for five years, I literally couldn’t find anything. It really shocked me awake to what happens when you offload the cognitive learning of something, like how do all these streets go together, what direction are they running because you don’t have to. So, this appears to me to be a very serious threat.

Jim: Interesting. And as a baby boomer, I almost never use nav unless I'm in a place that I don't know at all. I just traveled to San Francisco last week, so I used nav to navigate around San Francisco, but I would never imagine using nav to navigate around our little town of 25,000 where I am today, much less over in the country where I spend a lot of my time. So, I think it's very much a generational thing. But people who grew up with it, as you said. How old are you, by the way, Vance? What year were you born?

Vance: I’m 42. I was born in ’81.

Jim: So, you are a cuspy millennial, Gen Xer.

Vance: Yeah, exactly.

Jim: Which one do you consider yourself?

Vance: I think I’m more of a millennial than I was a Gen Xer.

Jim: Ah, yup. Okay. So, yup. The goddamn millennials, they're the ones that don't know how to do that shit. But of course, to get a little bit more serious here and back on our topic, these LLMs open up another whole series of things for us to lose our capacity around.

So, even if I’m using the LLM in a somewhat responsible way like I described, “Create me an outline, write me a paragraph, rewrite this paragraph so that it emphasizes blah blah, and also add this thought,” one of the skills you are going to be losing is how do you create an outline.

Another skill you're going to be losing is how to write a paragraph. Another skill you'll be losing is how to refine a sentence. I am not the world's greatest sentence-level writer, and so I often use the LLMs to be my copy editor. I'll say, "Copy edit this essay," and they'll do a great job at it. Or you can also ask it to critique the essay, find the logic holes in it, and it will. And so, we will become less good at these things over time as we start using these tools.

And there’s a couple of risks. One, a good analogy to my mind is calculators and people having substantially lost the ability to do basic math in their heads. And then you say, “Well, why do I need to be able to multiply two digit numbers in my head? I can just use my calculator. Calculator costs less than a dollar. My phone’s got a calculator. My computer’s got a calculator,” which is all true.

However, if you can't do math in your head, it turns out it's really difficult to do order of magnitude estimating. And I've seen this: the younger people are, the less likely it is that you can have a conversation about some quantitative domain and come to a quick consensus about the order of magnitude of an effect.

And so, I would say in that case, having outsourced basic two digit times two digit math to the $1 calculator, you’ve also lost something you probably weren’t even aware that you had, which is your ability to do good quantitative estimating. So, when you outsource creating an outline, what does that do to other cognitive skills that you might have? What happens when you outsource copy editing a sentence? What does that do to your basic linguistic ability? I don’t know, and it might not be good.

Vance: Well, so if that becomes a risk, how does that manifest? Is it just in people slowly moving towards the idiocracy, or do we actually quantify it and see something that's appreciable?

Jim: That's a good question. If you haven't watched it, watch the movie Idiocracy. And unfortunately, we've made a fair bit of the journey there already. The concept is that after, I think, 300 years, humans have devolved until essentially everybody is an idiot. It's hilarious.

Vance: But it's weirdly prescient. The things that they're saying in there, when you're watching the movie you think, "This is so dumb." And then later in your life, after you've watched this movie, you're like, "Oh, my gosh, I'm seeing exactly what they were saying right now." It's very, very well done.

Jim: Yeah. And they were not only sort of generally prescient in terms of things that you now see. Okay, so 2006, think about that. 2006 was before Facebook took off and a few years before the smartphone. So, this was just at the cusp. And I would say we have made a fair bit of our journey to idiocracy under the influence of social media and the smartphone. So, these things will just accelerate that trend.

Whether we'll actually be able to measure it, I don't know. Let's take, for instance, the ability to write an outline for an essay, to be able to take a body of ideas and break it down into its steps. If you can't do that, can you actually think about complicated ideas or are you sort of simulating thinking? That's a good question. Maybe the quality of our patents will go down, which would be pretty weird. I think if we lose that capacity in subtle ways, we are getting stupider.

Vance: Well, I'm a little bit afraid to keep asking you about the other existential terror risks that you can think of, but while I've not been able to check off all of them, I know you have a couple left. What are some of the other ones you're concerned about?

Jim: A real simple one: loss of employment and probably growing economic inequality. Those people who can create and use these tools, the winners, will win more. Those who can't, won't, and may actually be out of a job. Now, we're not sure if that will actually reduce employment on a net basis. People have been saying for a long time that each technical advancement will wipe out more jobs than it creates. I heard this morning that for the first time since the 1960s, we have had an unemployment rate in the United States below 4% for more than two solid years.

So, whatever's going on in our economy, and part of it is now already being driven by AI, it does not seem to be eliminating jobs. Now, one might argue the jobs aren't as good as they used to be. To give you a sense of how old this idea is: in 1930, the most famous economist in the world, John Maynard Keynes, predicted that within 100 years, i.e., right about now, most people would be working no more than 15 hours a week. That didn't happen at all.

So, there's no doubt that individual people and whole sectors will lose employment. But will the growth of other industries and other categories that become easier and cheaper draw more people to those jobs? I think it's hard to say, but it's certainly possible, I would say relatively likely, that even if it doesn't reduce the total number of jobs, it will increase income inequality and particularly increase the premium for being a well-educated, world-of-words kind of person, as opposed to a hands-on, do-things-in-the-physical-world kind of person.

The hands-on, physical-world people have been getting screwed by our economic system basically since 1975. And I would say that the way to bet is that AI will continue that trend, unfortunately.

Vance: So, David Graeber wrote an excellent little PDF about bullshit jobs, which was basically saying everybody's been afraid of automation coming for our jobs. Well, it really already has, because what used to take somebody a week of accounting work can now be done in an Excel spreadsheet. So, that person is actually not working a ton of hours. They're biding their time and going to meetings, things that aren't meaningful, and this is causing a meaning crisis.

When I hear about you saying, “Hey, a lot of these job sectors are going to be lost, and it’s just going to come down to prompt engineers that are going to get the AI to do the thinking for us,” I have to imagine that there is some danger about meaninglessness if all you’re doing is asking the AI gods, “Give me an answer back, that’s better than I could come up with on my own.”

Jim: Yeah, I think there's what Marx called alienation from work. He was talking about how it used to be that a blacksmith knew how to make a nail, for instance. Every blacksmith in small town England or America in 1780 knew how to make a nail. And nails were mostly still made by hand. Of course, that also meant that they were very expensive.

A few years later, with the arrival of steam power, it became quite feasible to build nail factories. And instead of one person doing the whole job of starting with a chunk of raw iron, beating it with a hammer and putting it in the forge, there was a continuous process where one person cut the wire, another person put the point on it, somebody else made the head, somebody melted the head onto the shaft, et cetera, and somebody else put them into the box. In fact, Adam Smith lays this out in excruciating detail. He uses a pin factory rather than a nail factory, but it's very similar.

And the result, according to Marx, is that what was formerly a very satisfying thing, taking a piece of raw iron, cutting it, shaping it with heat, your own arms, your own thinking, and creating a useful artifact at the end, was very fulfilling, while doing a job on an assembly line was much less so, leading to alienation. And this sort of decomposition of the work to make it easier and easier, and particularly in this case to take advantage of the AI's ability to do most of the detailed work, could have the same effect in the white collar world, maybe even at an accelerating rate, that the factory revolution had on the artisan.

Vance: Okay, so my count here right now has us at seven of the eight existential crises. Do you have others? It's been hard to keep up because there are so many that blend into one another. There aren't clear dark lines between them.

Jim: Yeah, it is absolutely clear that my list is not as clear as it could be, but one that we've kind of touched on but haven't really hit right between the eyes is what I call the flood of sludge. The large language models and other linguistically capable AIs in particular are making it much less expensive to produce crap to be monetized on the internet.

If you've done a Google search in the last year, you have noticed the number of more or less fake sites that put an enticing headline and first paragraph up and then gradually devolve into bull crap. I think you mentioned last time we chatted that you'd been sucked in to look at a recipe and you just read and read and read, and the recipe's never actually there. Those things are what I call the flood of sludge: stuff created in bad faith, not to provide you useful information, but just to steal your attention long enough to put a few ads in front of you so they can make a little bit of money.

And I think we're seeing that the simplest version is just pure monetization, but we're also going to see this weaponized to spread conspiracy theories and dis- and misinformation of various sorts. And once the sludge machine gets rolling, and it's already rolling at a pretty good rate, then we get a very interesting phenomenon.

What are these large language models trained on? They're trained by pulling stuff off the web. Starting probably right now, some growing percentage of the stuff that they'll be trained on is stuff that was itself generated using large language models. And a lot of it will have been created either to monetize attention hijacking or to sell conspiracy theories or to disseminate dis- and misinformation. So, what's that going to do to the next generation of LLMs when so much of their training data comes from the flood of sludge?

Vance: Yeah. And these hallucinations that will happen as a result of meaningless garbage being what's training these AIs, how would you ever even know? And if the AI can continue to put out that much information, how would you ever keep up with it? How would you ever clean it up? I don't know that you could.

Jim: Yeah, it's going to be hard. Though I will say, I've been proposing something on my podcast here for a while: "Hey, you young entrepreneurs, want to make a trillion dollars?" It seems to me that there is a gigantic opportunity for somebody to build defenses against the flood of sludge, also using AI.

So, all the feeds that you get, rather than coming to you unmediated, go into your AI assistant, and your AI assistant filters out as much sludge as it can. It also knows what you're more or less interested in, but it has also been told to serendipitously give you some stuff you don't know about. And so, I do believe there's going to be an arms race of personal AI assistants to help us fend off the flood of sludge.
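To make the idea of a personal info agent concrete, here is a minimal sketch of one possible design. The clickbait markers, scoring functions, thresholds, and the deliberate serendipity step are all assumptions for illustration, not a description of any existing product.

```python
import random

# Sketch of a personal "anti-sludge" feed agent: score each item, drop the
# likely sludge, keep what matches the reader's interests, and deliberately
# let a little serendipity through. All thresholds are made up.

CLICKBAIT_MARKERS = ["you won't believe", "doctors hate", "number 7 will", "act now"]

def sludge_score(item: str) -> float:
    """Crude stand-in for a real classifier or an LLM call."""
    text = item.lower()
    hits = sum(1 for marker in CLICKBAIT_MARKERS if marker in text)
    return min(1.0, hits / 2)

def interest_score(item: str, interests: list[str]) -> float:
    """Fraction of the reader's interest keywords that appear in the item."""
    text = item.lower()
    return sum(1 for kw in interests if kw.lower() in text) / max(len(interests), 1)

def filter_feed(items: list[str], interests: list[str],
                sludge_cutoff: float = 0.5, serendipity: float = 0.05) -> list[str]:
    kept = []
    for item in items:
        if sludge_score(item) >= sludge_cutoff:
            continue                      # discard probable attention bait
        if interest_score(item, interests) > 0 or random.random() < serendipity:
            kept.append(item)             # relevant, or deliberately serendipitous
    return kept

if __name__ == "__main__":
    feed = ["You won't believe this one weird trick, act now",
            "New paper on large language model training data quality",
            "Local zoning board meeting rescheduled"]
    print(filter_feed(feed, interests=["language model", "AI"]))
```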

Think about an analogy in the history of the internet. In the mid-1990s, it looked for a time like email might be on the verge of dying because spam was getting out of control and the early spam filters were being outgunned by the creators of spam. As computing power got cheaper and cheaper and internet bandwidth got cheaper and cheaper, it cost so little to send a spam email that your hit rate on Nigerian letters only had to be about one in 100,000 to make it pay off.

And for a brief time there, '95, '96, it was, "Oh, shit, feels like the internet's going to melt down. No more email." Fortunately, there was a breakthrough in a kind of narrow AI that had to do with being able to do statistical analysis on the content of an email and rate it either spam or not spam, which made a very big step function improvement in filtering ability. And email was saved from melting down, but it's been an arms race ever since.
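The statistical approach described here is in the family of Bayesian text classification. Below is a minimal naive Bayes spam scorer, trained on toy data purely for illustration; real filters, then and now, use far richer features and training sets, but the spam-versus-not-spam scoring idea is the same.

```python
import math
from collections import Counter

# Minimal naive Bayes spam scorer: learn word frequencies from labeled
# examples, then rate a new message as spam or not spam. Toy data only.

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class NaiveBayesSpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        self.word_counts[label].update(tokenize(text))
        self.message_counts[label] += 1

    def _log_score(self, words: list[str], label: str) -> float:
        counts = self.word_counts[label]
        total = sum(counts.values()) + len(counts) + 1   # denominator with smoothing
        prior = self.message_counts[label] / sum(self.message_counts.values())
        score = math.log(prior)
        for word in words:
            score += math.log((counts[word] + 1) / total)  # Laplace-smoothed likelihood
        return score

    def is_spam(self, text: str) -> bool:
        words = tokenize(text)
        return self._log_score(words, "spam") > self._log_score(words, "ham")

if __name__ == "__main__":
    f = NaiveBayesSpamFilter()
    f.train("claim your free prize money now", "spam")
    f.train("wire transfer fee required to release funds", "spam")
    f.train("lunch meeting moved to noon tomorrow", "ham")
    f.train("draft of the quarterly report attached", "ham")
    print(f.is_spam("free money prize just pay the transfer fee"))  # True on this toy data
```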

Companies that run the big email backbones like Gmail spend a significant amount of money to keep improving their spam filters. I'm going to suggest that information filters in general are going to have to come into being and will be in an arms race with the generators of sludge. There will be a good effect, though, once there are good enough smart agents to help us filter out the sludge: some of the economics of sludge production actually go down. If only 1% of your sludge gets read, you have to have a 100 times bigger hit rate on what gets through to make it economically viable.

So, that's a hypothesis I have about a potential way we'll see our technosphere evolve: think of the flood of sludge as the offense and your own personal info agent as the defense, and they'll be in a perpetual arms race.

Vance: Well then, what about the thing that you already mentioned about mass production of misinformation? I mean, clearly, AI could be used to pump out propaganda on a scale that you could never do with humans because you’ve got to feed them, they need to use the bathroom, they demand wages. Whereas with an AI, you could just have it pump out endless amounts of it.

Jim: Yup, absolutely. And that is the flood of sludge. But especially to the degree that it's targeted misinformation or conspiracy theories, things trying to program people's minds, we get into our next big risk, which I call number six, epistemological decay. That's kind of a 50-cent word, but epistemology is the way we know what is true and what isn't. How do we find fact in the world? How do we make sense of the world?

And our heads are being filled with disinformation, which is intentionally created wrong information, and misinformation, which is information that's just wrong, spread more or less accidentally or without necessarily malicious intent. Think about the COVID wars. There was disinformation, there was misinformation, and there were conspiracy theories all hammered into people's heads, to the point now that how people think about vaccines is totally different than it was prior to the COVID epidemic.

And that kind of change of our epistemological capability would not have been possible before the internet and will be even worse the next time something like that happens when we have large language models to write many, many different varieties of very virulent forms of bad information.

Vance: So, the epistemological one, as I’m hearing you say all these, I don’t want to derail you, but I keep thinking, “Okay, Jim, we can identify some of this chaotic stuff that’s going to happen.” But what can anybody do about it?

Jim: Yeah, that's an interesting one. Again, very difficult, but an info agent, when those become available, could be helpful. Avoid bad sources. If you have children, do not give them TikTok. TikTok is probably the worst cesspool of bad ideas floating around out there right now. And in my view, giving TikTok to a 12-year-old is as bad as, maybe worse than, giving them cigarettes. It is a direct pump with a totally brilliant design. I've been designing online products since 1980, believe it or not. The very first consumer online service, many years before the internet, I worked there, I designed online products.

And when I saw TikTok for the first time, I go, “Holy moly, this person deserves the Nobel Prize in exploitive technology.” This thing is perfectly designed as a fentanyl drip to addict you almost instantly. And then once it has you hypnotized, it can feed in whatever it wants. So, avoid exposure to this stuff as much as possible.

Personally, I have a whole series of techno hygiene things that I do. For instance, I take a sabbatical from social media for six months every year, from July 1st to January 2nd usually. So, I turn off that whole flood. It's also very interesting when you come back: at first, you're immune to it. You can see through it, you can see through the scam, probably for another month or so. And then only slowly do you get sucked back into its game, where it has you semi-hypnotized.

So, don't use this stuff all the time, and especially turn off the notifications on your phones and your devices. Once it knows you respond to notifications, it sends you even more virulent kinds of stuff trying to hijack your attention through the triggering of the notification mechanisms on your device.

So, again, all these are essentially tactics and hygiene. There's no silver bullet. You don't get fit by taking a pill. You get fit by going to the gym or running or chopping firewood and doing a number of different things. Defending yourself against bad information is the same kind of hygiene; it's a serious thing and frankly, something they ought to be teaching people in school.

Vance: So, do you have any more on your list of terror here?

Jim: Yup. I've got really one more big one. And that is: many of us, the people in my world, have been thinking about and working on something called Game B, which is a comprehensive alternative to our current sociopolitical and economic operating system. And one of the reasons we've been working on Game B is that we believe Game A is on a self-terminating curve.

Whether we kill ourselves with nanotechnology, or AI eats us, or we cook the planet through global warming, or we accidentally create a super virus that, instead of being a warning like COVID, which had less than a 1% death rate, turns out to be a real mass killer, could be 50% or even more, something destroys advanced civilization. This was coming anyway; call it the self-termination event for Game A. A rough guess is that we might have as long as 60 years until we stumble into one of these traps.

If AI is really speeding everything up, and it really feels like it is, suppose that time window has been cut in half because Game A is just rotating faster; it's advancing into these danger zones at a faster speed. Instead of 60 years, maybe we only have 30. And that's just so far.

There was an amazingly interesting, but also a little scary, announcement from Nvidia two days ago. Nvidia is the guys that make the giant boxes of graphics processing units on which these large language models are basically built and run. They've announced a new architecture called Blackwell, which is much bigger and much cheaper than their previous architecture. It'll take a year or two before it comes out, a year probably.

But not only is it a particular product set, it also includes tools to help their suppliers be better. There are tools built into it for chip design, for instance, because they get chip components from people down the chain, and it includes tools for designing the AI data centers above them, to make their customers more efficient so that their customers will end up buying yet more Blackwell AI tools.

So, this is like a triple speed up when you include the tools below Blackwell itself and the tools to design better data centers to use Blackwell. What is that going to do but speed Game A up even further? I think this is one of the more subtle ideas that's not obvious unless you have already gotten the idea that Game A is self-terminating. But if you have, the pervasive speeding up of everything is its own kind of risk.

I noticed a few years back, Marc Andreessen said, "Software is eating the world." Well, now, one of the things that AIs are already speeding up is software. I mentioned I was involved with this project to write movie scripts using AI. I wrote most of the software myself actually. And I can testify that I was at least three times faster at writing the software by using ChatGPT as my constant writing companion.

So, stuff that was not worth automating will now be worth automating, and things that were barely worth automating are now worth doing in a very sophisticated fashion, because you can do it for a third of the cost or maybe less. And so, we should expect an acceleration of the unfolding of Game A. And that itself is a very serious and perhaps existential risk.

Vance: So, as you think about all of these risks, everything from making it much more difficult to get out of Game A, all the way down to somebody coming up with biological weapons or scams using narrow AI. When you bring all these things up, what's the next thing that you think people should be thinking about? Should it be whether they agree or disagree with some of these as existential risks? I mean, only one of these needs to be bad enough to snuff out society, and you've talked about eight that could be used pretty powerfully to radically alter it. So, what do you perceive is next after thinking about this?

Jim: Oh, god, this is so hard. It all comes down to the one we touched on briefly, which is our governance capacity. If we don’t fix that, then we have very little hope of addressing any of the other seven, actually, eight. I know I snuck one in and didn’t renumber, so I actually have nine now, not just eight.

And this is a very difficult question. We've all watched politicians on TV and read about what they're up to. And you say, "This list of issues I just went through, what are the chances that a US senator is going to be able to make intelligent decisions on regulation or legislation for a world like this?" What do you think the chances are with today's governance structure?

Vance: Probably right near zero. Virtually zero.

Jim: Yeah, I'd say damn close to zero. I'm with you. And so, let's think about two alternatives. One, and this will be a temptation, especially as our AI gets better: what if we give the AIs themselves the authority to write the laws and regulations? What do you think about that one?

Vance: Oh, man, herein lies the problem: you're handing over the very thing that you're worried about to the thing you're worried about. It seems to me like this is probably not a good solution, but maybe necessary.

Jim: I sure as hell hope it's not necessary, but it might be. And this is something that we should think very long and hard about before we let that happen. The second alternative is, I will say, that it's time we finally sit down and think about very serious changes in our political operating system. The US Constitution was an amazing work of genius. It was written in 1787, when the United States had a population of about three million people, about the same as the number of people in Kentucky. And they wrote a document which has now lived for about 230 years, and it has served us reasonably well.

But it was designed when the biggest company in America had 100 employees, when it took five or six weeks to get a message from New York City to London via sailing ship, before the steam engine had arrived in North America, before the first bit of coal had been mined in North America. And so, it is an operating system for governance which, I would argue, despite the amazing things that it has brought, is not up to the task of dealing with this list of risks and a bunch of other risks that have nothing to do with AI.

So, I think we have to bite the bullet and design a new political governance structure that can, one, get better people into it. Just think about this. I was having lunch with a good friend of mine just before this podcast.

And we were laughing, and I told her, "Imagine up in heaven, John Adams, Thomas Jefferson and James Madison sitting together around a table, looking down at the presidential candidates we're likely to have on offer: Donald Trump, Joe Biden, and Robert F. Kennedy, Jr. And this is in a country with 330 million people, versus the three million people at the time of the founding, 100 times the population, and we come up with these three screwballs. What the hell? There's got to be something very seriously wrong with our political operating system."

Vance: Well, I mean, that’s a pretty tall order to imagine that we’re going to change our governance system. What do you think the odds are of that?

Jim: Well, good and bad. Will we do it early to get ahead of the curve? Probably not. That does not seem to be the human way. Fortunately, more often than not, humans muddle through and do what they have to do at the last minute. But as you mentioned, some of these risks are of such magnitude that if we don't judge the last minute correctly, we could go over the edge. So we're caught in a really tough condition. We have serious, potentially civilization-ending risks.

We have a governance system which has absolutely shown itself to be incompetent, corrupt, unable to deal even with the class of problems that move slower and are of less magnitude than these AI risk problems. And we think we could give this thing a go at solving these problems? I don't think so. And we've always been resistant to radical changes in our political operating system. So, how do we square that circle? Mighty difficult.

Vance: Yeah, Jim. I mean, you paint a rather bleak outlook. When you think about these outcomes, do some of them come to mind as more likely to happen than others? And if so, how do you prioritize them?

Jim: This is an interesting one. Well, people doing specific bad things with narrow AI is happening right now. So, we've got to write appropriate legislation, educate law enforcement, and become smarter ourselves, with the safe words, so you don't get subjected to scams yourself. So, there's a bunch of tactical things to be done there.

The one I think is actually most likely to bite us in a serious way is in some ways related to the governance question. And that is the epistemic decay, the Idiocracy thing: that we have so poisoned our well of discourse with conspiracy theories, disinformation, misinformation and exploitive sludge that our people become incapable of making any kind of reasonable judgment about the direction of our society.

And it certainly feels like our society is, on average, getting less good at execution. Everything seems to be screwing up a hell of a lot more, and decisions are either not getting made or they're the wrong decisions. So, you combine 1787-vintage governance with epistemological decay and the rise of Idiocracy, and I rate that as our number one risk to cause a very major screw up.

Vance: Well, Jim, as we kind of round out this conversation, what do you think people should be reading about or where should they be looking into doing something about these things?

Jim: Well, that's a big question. I don't really have a crisp answer to that other than listen to the Jim Rutt Show. We talk about this stuff here, and I know you talk about it on the Vance Crowe Podcast as well. In terms of mainstream media, none of them are doing a good job and all of them are biased. If you have the time for it, you can tune the frequencies on Twitter to get some interesting people. But it's work and you have to have some knowledge to make that work. That's a good question. I don't have a good answer for it, unfortunately.

Vance: Well, I feel like you did. It's good for me to hear your concerns about these risks, because I sometimes hear people talking about AI risk and I think: the paperclip problem, that's a long way off; these other challenges, I'm not sure what we can do about them; you probably just have to keep going.

But when you stop and really think about it, there are some things that could radically change society. And I think it'll be that kind of old adage of gradually, then suddenly. It'll be something that's been around for a long time and we think, "Ah, this isn't really turning into anything." And then bang, a big breakthrough happens that allows, whether it's the super intelligence or a whole bunch of people, the ability to do things that they couldn't do before. And I think this has been a valuable conversation to try and isolate what those risks are, think about them in a deep way and then figure out, "Well, what could you do, if anything, to mitigate this?"

Jim: And if we don't have a good map of what our problem space is, it's very hard to marshal the resources to do something about it. People have been asking me to do this for quite some time, and I finally said, "All right, I'm going to do it." And so, that's why I asked you on here, to give me an opportunity to go through my list in a pretty comprehensive way. And as always, you've asked some great questions, which have caused me to stop and think and hopefully made this exposition a little bit better.

Vance: Well, I have really been honored to be the host here, Jim. Normally at the end of a podcast, I ask people where listeners can find them. Maybe a good question would be: what are some of the big episodes you have coming up that people should be looking forward to?

Jim: Truthfully, I don’t pay that much attention to what’s coming up. I just do them. Let’s see here. Let me look at my calendar. Tomorrow, we’re going to have a session with Rich Bartlett on what he learned from three months living in a co-living environment. That will be very interesting on kind of Game B ways to live.

I've got some interesting conversations about physics coming up with a guy named Matt Siegel. Oh, and this one will be very interesting: Robert Ryan. We're going to talk about seven ethical perspectives. He's a really good thinker who doesn't fall into any of the known categories, which is what I really love about him. And we're going to talk about seven different ways of thinking about ethics. I think that will be a very interesting show. He's a very interesting guy.

In mid-April, we'll have one with the famous Samo Burja. He'll be making his sixth appearance on the show, and we're going to be talking about the lessons learned from the Ukrainian war about the future of warfare.

Vance: Well, Jim, as one of the only other podcasts I listen to outside of mine, I'm looking forward to all of that. So, I really appreciate you having me on as the host, and I'd love to do it again sometime.

Jim: Yeah, I really appreciate having you on as the host. It was everything I was hoping it would be.