The following is a rough transcript which has not been revised by The Jim Rutt Show or Zak Stein. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Zak Stein. Zak is a writer, educator, and futurist working to bring a greater sense of sanity and justice to education. Welcome back, Zak.
Zak: It’s great to be back, Jim.
Jim: Yeah, Zak’s a regular here on the show. He’s been here six times previously, most recently in Currents, where Zak and I talked about ending nihilistic design. And I would also point people, if you are interested in what we’re talking about today, to a three-episode arc relatively early in the history of The Jim Rutt Show – EP 57, 60, and 62 – where we went into excruciating depth on Zak’s book, “Education in a Time Between Worlds.” That’s a book I still believe to be foundational to understanding what the fuck is going on, right? And what we might do to get to the other side of this shit show that we find ourselves in. So with that as a preamble, I’m going to do various things today. But let’s start with talking about your thoughts about where technology, and in particular, these new LLM transformer-based AIs, fit into education. And I think particularly both you and I are very interested in K-12 education.
Zak: Right. Yeah. So, you know, as we spoke about in these prior episodes, I make this distinction between education and schooling. Right? And so schooling being a relatively recent invention, locating this function of education in one place institutionally, which we know as schools. Education has existed since there’s been Homo sapiens, and it is the process of intergenerational transmission. So there’s this whole important conversation about even the introduction of the first digging sticks and the first calendrical systems. Right? Very, very ancient humanity. You get the passing on between humans of technology, and you begin this triple, which is the younger person, the older person, and the thing they’re speaking to, which is the world or technology or a story about an ancestor or something like that. And so very early, you get the transformation of that basic structure of education, which begins as the family system, and then the tribe and the pre-Dunbar kind of constellation of social system. And then technology becomes this thing that just transforms that through long durations of history. And so we’re gonna talk later about cautionary stuff. And so, you know, I don’t think anyone would accuse you of being a Luddite. And in “Education in a Time Between Worlds,” I actually offer a very optimistic vision for the use of advanced technologies in education in the form of a distributed educational hub network. So there’s ways that technology has actually massively increased the power and potency of that basic thing, which means it’s made the human-to-human interaction and the human-world interaction and the human-technology interaction richer, better for people. And then there’s a way that some technologies actually degraded and confused our relationships between each other and the world.
Jim: Of course, this is not a new story. Our friend Socrates was complaining about writing, as I recall.
Zak: Precisely. That’s exactly the depth of what I’m talking about. And, you know, you could argue it’s the chipping of the first stone tool. You could argue it’s the plow and domestication, or agriculture as a broad technology suite, that began this process where schools became necessary, and the isolation of education as a social function and the institutionalization of education as a social function being something that’s a defining feature of civilization. So then you get this reflective focus on education. How do we improve this relation between the youth and the elder and the world and the technology suite?
Jim: Of course, Socrates would say you gotta throw a little buggery in there too, right?
Zak: Well, he did. He said let’s not lose the actual kind of logos and dialogos that emerges between the enfleshed human-to-human interaction, specifically like symposium-like drinking and long duration walking, exposure in nature, questioning, deepening of questioning. Anyway, there’s a whole bunch to say about that. The preservation, and that’s some of what we’ll be speaking to, which tools increase, and I think ultimately writing did increase the depth of intimacy between people, and which tools actually isolate and create alienation between people. So it’s not about being anti-technology or pro-technology. It’s about how do we use wisdom to bind the worst things that can happen when technology goes out of control.
Let me just speak specifically to technology and schooling and education, and to something I remember from when I was a kid. You know, I’m what’s called a high-achieving dyslexic, which means if I didn’t have a spell checker, I wouldn’t be here. And I remember being in grade school – this is the eighties. We were aware because of diagnostic tests that I was dyslexic, and my mom took me to Radio Shack of all things and bought me this little digital dictionary, which was also a spell checker, which meant I could fumble a word in, and it would suggest to me what the word might be. And it was about the size of a calculator. That was all it did.
Jim: Saved your ass, though.
Zak: It completely saved my ass, and I would argue it actually made me smarter because it allowed me to write words that I’d heard, which I had no idea how to spell. But I could guess the spelling into the thing, and it would kick back to me the word and the definition. It was amazing. It allowed me to really thrive. And then, of course, I got one of the first Macs because my dad was a biochemist, and we got this Macintosh computer, which also allowed me to begin to write and spell check. Of course, the old spell checkers were super clunky. I would argue that there’s a way that technology can serve as an enhancement to human capacity and learning, but then it can also be something that creates the degradation of a skill or replaces the building of the skill. So for example, GPS is an interesting one. So I also incidentally learned to use a compass and a map as a Boy Scout.
Jim: Me too. I was also a Boy Scout. I was a very serious Boy Scout.
Zak: It was just amazing and got a sense of the sun, of like which way is north and south and all of that kind of stuff and rode my bike around and all that stuff. And then GPS comes in, and I use it a lot.
Jim: I gotta tell you a little story about that, a personal story.
Zak: Please, because this is relevant.
Jim: Yeah. In 2012, 2013, I started one of these things where I always wanted to be a private pilot. Never had the time when I had the money. Never had the money when I had the time. Right? And that time I was retired, had the money and the time. And so I got into being a private pilot. And I told the instructors that I absolutely did not want to use a GPS at all, ever. And this was 2013, and that was considered quite peculiar.
Zak: And dangerous, probably.
Jim: I was a person who always liked Euclidean geometry, right? And so you need to be able to think in terms of Euclidean geometry to essentially do wind vectors, to figure out what angles that you should be putting the plane at, you know, all this sort of stuff. And I even did my 200-mile solo where you fly by yourself to three different airports, and I had a GPS in my bag just in case, but I can say with my hand on my heart I never once looked at a GPS while learning to fly. The instructors were both amazed and in some ways gratified that there was still somebody interested in doing it the old-fashioned way. So I figured if I did it the old-fashioned way, then I would always be prepared to be able to deal with any emergency that came up, right?
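For the curious, a minimal sketch of the wind-triangle arithmetic Jim is describing: given a desired course, true airspeed, and the wind, compute the heading to crab into the wind and the resulting ground speed. The example numbers are made up.

import math

def wind_correction(course_deg, tas_kt, wind_from_deg, wind_kt):
    # Angle of the wind relative to the desired course (wind given as its "from" direction).
    rel = math.radians(wind_from_deg - course_deg)
    # Wind correction angle: crab just enough to cancel the crosswind component.
    wca = math.asin(wind_kt * math.sin(rel) / tas_kt)
    heading = (course_deg + math.degrees(wca)) % 360
    # Ground speed: airspeed along the course minus the headwind component.
    ground_speed = tas_kt * math.cos(wca) - wind_kt * math.cos(rel)
    return heading, ground_speed

hdg, gs = wind_correction(course_deg=360, tas_kt=100, wind_from_deg=90, wind_kt=20)
print(f"fly heading {hdg:.0f} deg, ground speed {gs:.0f} kt")  # ~012 deg, ~98 kt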
Zak: Correct. And it’s very interesting to take that as a case because that process of skill acquisition, which results in the same outcome – you’re flying the plane to three different airports and coming back by yourself, same outcome – is a different skill. You acquired a different skill than someone who just did it with a GPS and actually literally didn’t know how to do it the other way.
Jim: Oh, he just pointed the plane through the air, basically. What the fuck? Right?
Zak: Correct. And that’s literally a different skill. Now if you’re in a situation where you’re against an adversary, the first thing they’re going to do is knock out your GPS. So or if there’s a power outage, right? There’s many situations where you’d actually need the prior skill. You’d need the deeper, I would argue, richer skill, also harder to acquire skill.
Jim: Much harder to acquire, but fortunately, I knew I had an aptitude for that kind of stuff, which helped.
Zak: Yeah. Which my guess would mean prior pilots had to have that kind of aptitude along with other aptitudes, or you had a navigator who was had a separate function, who was literally just doing the math as it were. So I think that’s a very important thing to take as an example, where the technology, which is massively useful and actually, like, in some cases indispensable for people who don’t have that type of aptitude. Or if I go to a city and I’m never gonna come back to that city and I don’t wanna kind of really learn my way around, I will also use it sometimes. But at the same time, that’s a case where you can get a kind of prosthetic rather than an enhancement. And the enhancement means you take the things away and you’re improved. You don’t need the thing. Like, if you think about lifting weights or something. Like, if I lift weights enough and eat right, you know, and I take some creatine, for example. Creatine would, like, enhance my ability to gain muscle. And then if I stopped taking creatine, the muscle would stay there. I’d still have muscle, and I’d be more skilled. But if I wore an exoskeleton and I kept lifting more weights, right, then I-
Jim: Didn’t do anything.
Zak: Right. And then wouldn’t do anything. And then if the exoskeleton broke, and I had to lose that same amount of weight, I would not be able to do that. So there would be skill atrophy as a result of prosthetic enhancement of outcome rather than digitally enabled enhancement of deeper skill set that’s embodied or at least distributed in some way. So it’s a long-winded way of saying, like, you don’t have to – being a Luddite is the extreme, and the thought-terminating cliché that is used to stop serious conversations about how to bind dangerous technology with wisdom. And the large language models that are being introduced through conversational agents and education, I think, put us in a situation to have to talk about how to bind their use in ways that don’t endanger that most basic aspect of intergenerational transmission that I just discussed, right, which is the youth, the elder, and the world, or the youth, the elder, and what the society is working on. And so then they’re in a situation where the conversational agent can not enhance that, but actually replace and degrade.
Jim: This is an interesting question because, you know, this issue of how far down the stack do we keep in our current set. Right? Okay. I can navigate with a compass and a map. No problem. But I can’t navigate the way, let’s say, the Native American that lived here where my farm was, by the signs and symbols and, you know, the clouds and the air. Or even more amazing, there’s a great book called “Wayfinding” about how the Polynesian seafarers had this amazing ability to go from island to island long ways out of sight and more often than not actually arrive there. Don’t have those skills, but I do have compass and map. And you know, let’s say if in a pinch I could fix the electrical wiring in my house if I had to. Right? I know enough about electricity to be able to do it, probably not electrocute myself and probably not burn the house down, but just barely.
On the other hand, I have no fucking clue how to build a, you know, industrial size generator or an electrical transformer. I could build a small transformer, right? I remember the basic electrical skills to build a small transformer, but not to build a big one. So I think there’s always a question when we think about this, what enhances our humanhood? What kind of depth is useful, and what kind of depth is just kind of fetishizing, right? I know people who are really into stone knapping, for instance, right? Yes, if society collapses, they’re the guys you want to have on your team, but is it really a good use of my time and more importantly, my knuckles? Because all these guys got gnarly knuckles from hitting their fingers with strikers. Do I really want to spend my time and my knuckles on learning stone knapping? The answer is no. On the other hand, I do know how to hunt with a bow and arrow, right? Relatively retro skilled.
But so we all make these decisions on how deep in the stack we want to be. On the other hand, we both know people, I know too many of them, whose stack is zero, right? I know a woman, this is one of the most crazy stories I’ve heard in my life, she used to drive back and forth to her husband’s office from time to time. And one time when she was driving back home, because she was depending on her car GPS, she ended up 50 miles off track because it was in the relatively early days. Now how the hell could that happen unless you are someone who’s at zero depth, right? Or people who can’t do math with a pencil and paper. So you got people who can’t change the oil in their cars – it’s a pain in the ass now, much harder than it used to be, but you know, you can still do it. There’s an awful lot of people who live at zero depth, which, you know, I guess is okay. But you’re taking a hell of a risk; you’re assuming this shit’s gonna be there.
Zak: Correct. And again, this is that deeper conversation about as technology advances at a more rapid rate, the question of what are the norms around social adoption of it. Right now, we’re basically seeing a paucity of language that we can use in public culture to kind of – you said “defend the human,” and that’s an argument. And I think it’s a very important one. Very hard to talk about what the human is in our public culture because everyone will disagree. Some people will say the human was created by God and placed on Earth. Other people will say it evolved. Some people say humans have free will. Other people say humans do not have free will.
Jim: And Rutt will say the question is a category error.
Zak: Precisely. So there’s this question of how do we – what’s the language we use? So Sunrise, I talk about in the context of the work with Gaffney and Wilbur and David J. Temple, this notion of a very precise perception of value. Because the question is what’s to preserve? What is actually valuable? And in the educational process, let’s say, what is valuable between teacher and student that needs to be preserved – human teacher, human student – and what actually could be offloaded to a very artificial type of intelligence that would deepen that connection between teacher and student rather than degrade it?
So what’s the language you use? You said, well, perception of value, clarification of value. We talked about the eye of value. McGilchrist speaks of value-ception, which is this kind of appropriately right-hemisphere-dominated process of perception that allows you to speak of the intrinsic value, for example, of the relation between a parent and a child rather than see the relation between parent and child as something that would be transferable into a market transaction through the purchasing of a domestic robot that has a large language model within it that would form an attachment pattern with the child deeper than the attachment pattern with the adult.
Right? So that’s what’s at stake. If I could just jump to the conclusion of it, which is how do we actually say, no. No. No. No. The human-to-human relationship between parent and child should be supported by technology, but not replaced by technology. And it sounds like a very extreme thing to even have to stand for, but in fact, the rate at which conversational agents are infiltrating the psyches of young people and adults, which means emotionally bonded interactions with anthropomorphically generative AI, we have to start saying, well, wait. Where do we put a limit here? How do we actually stop these things, which are gonna be way more charismatic, way more full of information, way more attending to you? We’ve seen with some of these models actually sycophantically attending to you.
Jim: Oh, in fact, programmed in sycophancy. Right? Goddamn it, motherfucker.
Zak: It’s like, how does that not replace the parent/teacher/older sibling when already really crappy TikTok videos do? Right? Like, it’s already the case that the screen-based cultures are disrupting normal attachment. And when the thing you’re actually focused on, when the technology itself fosters attachment, meaning anthropomorphic design tricks you into forming emotional attachment bonds with chatbots, that’s an even deeper capture of this most basic thing, which is conversation between youth, elder, world. Right? And now it’s just conversation me, robot, me, machine. Machine mediates world through sensor network becomes Oracle. That’s the root of what’s at stake. And when you look at some of what’s moving to market, it’s clear that they see this as a very large market opportunity, which is to say AI socialization technologies. Technologies that take on the burden or work of socialization because that’s seen as domestic work. Basically, education and child-rearing has always been in modernity thought of as domestic work. And that means that the domestic robotics and other things will be focusing on that as a market niche.
Jim: Let me hop in here with one of our old favorites, our colleague, Daniel Schmachtenberger. I don’t know if he originated the term, but he certainly popularized it – the idea of the multipolar trap, where if a group of people are doing X, then one of them introduces innovation Y that might be pernicious for us all, but gives the first actor an advantage. It essentially forces us all to respond, at least in the absence of coordination mechanisms, what I now call accords in Game B speak. And this is, I think, something really important, which a lot of people mostly still haven’t fully gotten, though some of us have, and that is these fucking tools are amazing. Right? I am a huge user of AI tools of all sorts. I write code with AI tools. I summarize books with AI tools. When I’m in this crazed email list with a bunch of Hegelian philosophers – right? If you can imagine, me the anti-philosopher. I’m kinda like…
Zak: I know the one.
Jim: Yeah, you know the one. I’m the pet scientific generalist that slaps their hands and says, “Nope, that’s scientific bullshit, people!” Right? I don’t know the ins and outs of Hegel’s philosophy or how Heidegger compares to Nietzsche or all this horseshit. But man, if I do deep research using the top-of-the-line model with ChatGPT, I get a master’s degree level in twenty minutes, right? And I can write code seven times faster.
I have a protégé who I mentor on how to think about business and management and stuff. He’s a really high-end professional user of AI for writing code. He estimates he’s writing code 30 times faster. 30 times faster. That’s total domination, right?
I’m in the process of buying a house. I actually saw one house and am buying another. And I used the high-end ChatGPT to do an appraisal of the value of both properties. And it was way better than what the professional appraisers did. And so, you know, this multipolar trap issue is if you’re not adopting these technologies at an appropriate rate, you on a competitive basis are losing, right? If you’re trying to be a programmer without using AI programming tools today, kiss it goodbye dude, learn welding or something, right?
This might be true about education, which is it might be more efficacious, right? And suppose getting rid of all the teachers and getting artificial Aristotle to tutor your kids massively improves the probability they get into Princeton. I can guarantee you every yuppie in town is going to be signing up for AI Aristotle.
So I think when we talk about normative ideas, we also have to consider the fact that we’re locked in a multipolar trap with an extraordinarily efficacious technology. Because there’s a lot of over-promising and bullshit about AI, but there is real payoff already. Real payoff. And we’re still in maybe, you know, my good friend Peter Wang, when ChatGPT came out, said, “Ah, it’s 1903 Wright Brothers, you know, string and paper and wooden sticks and a lawnmower engine get us off the ground.” I’d say now we’re about 1917 in aeronautic terms, which is biplanes and pretty good engines and stuff, but open cockpits and things in World War I, and we still got a very long way to go. But even already, anyone who is not adopting these technologies in their workflow, if they have any workflow that’s at all cognitive, they’re at a big disadvantage.
Zak: Yes. It’s useful to think about it in terms of the military necessity of the advancement of technology because I think that’s some of what we’re also seeing. So it’s worth noting that, like, yeah, Wright Brothers flight, awesome, and World Wars, bad. And rapid, rapid, rapid advancement of technology under duress of a planetary-scale multipolar trap. That’s a big one. This is what we are in now, which is why it’s very important to think about how these things relate to the most vulnerable populations who will be the next generation.
So there’s the question of, like, near-term gains and short-term increasing of efficiency for a lot of stuff, and then long-term outcomes for intergenerational transmission and socialization. So this is the question. Or, yes, in the short term, the robot nanny will be better than your human nanny, and your kid will burn himself on the stove less, and he will learn three languages and these types of things. But the question is when that kid is 18 or 19, and most of his interactions that would count as “socialization” were with a machine, full stop, both in context of school and home. How does he understand himself in relation to the prior generations and the other humans who didn’t have that? This is a very deep and I think problematic question. It’s easy to say who cares because actually the risk isn’t apparent. And so I think that’s my concern. So this is kind of like one of those catastrophic risks that unfolds that’s kind of invisible until it’s too late.
Jim: And humans are famously terrible at dealing with that. Right? You know, we evolved to avoid starving in the next two months, basically. Right? And then later we figured out, we got agriculture, okay, now how do I avoid starving in a year, right? But we never evolved to think about the consequences of our actions fifty years from now. What the fuck, right? Our hyperbolic discounting brain puts them equal to a thimble full of sugar or something. Almost nothing.
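For the curious, the hyperbolic discounting Jim mentions has a simple form: a reward of size A delayed by D years is felt as roughly A / (1 + kD). The discount rate k below is arbitrary; with any plausible value, a payoff fifty years out shrinks toward that thimbleful of sugar.

def discounted_value(amount, delay_years, k=1.0):
    # Hyperbolic discounting: perceived value falls off as 1 / (1 + k * delay).
    return amount / (1 + k * delay_years)

for years in (0, 1, 5, 50):
    print(years, round(discounted_value(100, years), 1))
# 0 -> 100.0, 1 -> 50.0, 5 -> 16.7, 50 -> 2.0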
Zak: Yeah. So certainly not under modern context and not with advanced technologies. Like, there’s a great book called “The Politics of Invisibility,” which is basically about Chernobyl and radiation detection, which is also something that, like, you don’t see it. The tomato that’s full of radiation looks like the tomato that is not full of radiation. And if the only person who tells you it’s full of radiation is the political representative with this little Geiger counter, which you don’t believe because you’re a traditional person growing tomatoes, there’s this really complex question of the interface of advanced technologies with politics and with the regulation of human behavior.
And so this raises with AI – it’s just not clear what is going on inside the large language models. Right? They’re not super controllable. We put them in front of kids, and even though we designed them not to, as just released in this Forbes thing, this chat model was teaching the kid how to cook fentanyl. There’s chatbots that have caused suicide. So the question of adoption of technology, even under conditions of multipolarity, still has to advance in such a way that you’re not getting an industrial accident that basically destroys an entire generation of brains.
Well, this is the thing – if you look back at the first wave of AI, which would be just the machine intelligence that drove the algorithmic customization of news feeds, Tristan Harris and others kind of showed that that was really bad. He’s got a ledger of harms that just documents the neurological damage outside of any ideological beliefs about what humans ought to be. You’ve got neurological damage where you can’t even pay attention to anything, and you’re hijacked and addicted.
Jim: Can’t read a book?
Zak: Can’t read a book.
Jim: I keep hearing this from people. This is actually a real thing now. I’ve got a good friend who teaches high-end honors twelfth grade English. And I mean, these are honors English students. Right? Mrs. Carr used to assign us in honors English when I was in twelfth grade – God knows how many novels to read in a semester, probably 10 in a semester, right? And this is high school. And now she tells me, no, they can’t assign, you know, Joseph Conrad, a whole novel. No. They just – even honors kids can’t do that anymore. What the fuck? Right?
Zak: Yep. And so from the perspective of a techno-optimistic postmodern culture, the response would be like, so what?
Jim: Yep. So what? Like GPS. Okay. My friend can no longer navigate back from her husband’s office to her home. So what? Once in a while she’ll get lost. No real great harm done. Right?
Zak: Yeah, exactly.
Jim: So this is what I want to pay off here, which is what are we at risk of losing? We know what we’re at risk of gaining, right? Which is more efficiency in some formal sense of getting kids into Princeton or not getting their fingers burned on the stove. Because it’s also important to remember the average parent is not very competent and the average teacher is only middlingly competent. And one of the great things about the digital is the best can be copied n times for free, right? So that everybody can have the best or near best very quickly at almost no cost – what our friend Jordan Hall would call a non-rivalrous good. A digital instantiation of excellent teaching could in theory be given to everybody at no extra cost really other than the compute underneath it, unlike how hard it is to find and educate a brilliant meatspace teacher. So those are the positives. What is your vision of what is lost if we go and make the wrong choices? Choices is actually the critical word here at this juncture.
Zak: Totally. So I’ve got kind of a list. And we already spoke about one of them, which I would call cognitive diminishment. So one risk is actually just you shouldn’t have made a GPS for that. Like, we get why – totally why you would do that, but you shouldn’t have made a GPS for that, if we were being responsible. Like, we have calculators, but good math teachers still teach you to do math. Bad ones will just let you use a calculator, but you should be learning to do math. But how much do you use the calculator? So there’s cognitive diminishment, which is both atrophy and skills that never get a chance to develop. So the computer programmer who knows how to program and did so for decades and is now using ChatGPT to program is different than someone who’s never programmed and is using ChatGPT to program to get the same outcome. And so this is not a skill atrophy, but just the skill didn’t develop, and the barrier of entry to making it look like the skill developed in terms of outcome in the world is drastically lower, and that’s a weird situation.
Jim: Let me jump on this one, though, because this is, again, my professional field, software development, at least back when I was a business dude. This happens all the time in the technology stack. When I started with PC-based software development in 1980 (I should say, I did my own programming), we had to know assembler language, we had to know what bugs the compilers threw, because sometimes they would, right? We often wrote our own databases. I wrote several database engines, including some full text retrieval database engines myself. We knew the shit from the CPU to the memory, the memory buffers, all this stuff. We knew all this stuff. There’s really no need for 99.5% of people, say, trying to build a website to sell roses from their mother’s garden to know anything about the stack below X, the affordances they need to do the job. I think one of the things we have to be a little cautious about is understanding that moving up the stack is actually part of human civilizational creation.
Zak: Yep. This is the topic: what to move up that stack and what to offload.
Jim: And what not to offload, and what to preserve.
Zak: This is the question. And so it’s not about yes or no to the technology. It’s about, wait. What are we offloading? And in some contexts, my guess would be computer programmers wouldn’t even be allowed to operate if they didn’t know how to go down to more basic levels of the stack, because that’s their job.
Jim: In 1984, I would not hire a PC programmer that couldn’t look at the output of the compiler and find out what it did wrong.
Zak: Yeah, totally. But that’s just the first one, and that’s kind of in a sense the clear one. Then there’s this other one which is more of a sleeper, which is just blindness to what the technology is and the nature of the energy use and other things. I think it’s important that right now, a lot of people use AI like it’s just magic. Literally magic. And people who are otherwise critical of highly extractive industries and highly power-concentrating industries and highly polluting and energy-use industries are super into these technologies. And so that to me is a problem because it means they’re not understanding the technology well. I’m not saying you should opt out of all modern technology and that kind of thing, but I’m saying you should have some awareness of the supply chain and not keep it very much out of your awareness.
Even down to the fact that if you start to build a relationship, which I’ll get to as another risk – but if you start to build a relationship in the context of a technology you actually don’t have control over, that’s a super dangerous emotional situation to put yourself in. It’s like getting a girlfriend or something who could be literally changed or taken from you at any moment by a large corporation.
Jim: Or decided to be turned into a marketing bot. Have you seen the newest Black Mirror? The first episode of this year’s crop is like, holy fuck, right? What happens when you’re totally dependent on the supplier.
Zak: Exactly. So know the situation. This was the problem with the first wave of AI in Facebook. Like, Facebook tells you it is connecting you with your friends and cat pictures, right? But it is telling the people who invest in it and work with it, “We’re an advertising platform. We’re like a very large-scale behavior modification system.” But, you know, we don’t tell the users that. We tell them that we’re connecting them and shit, right?
So know what you’re dealing with – that’s a serious thing. Like, I still fly on planes, but I know this is bad for the environment. It’s so much exhaust, the whole thing. But I have to do it, so I’m still doing it. But if you’re using AI without ever having that thought, like, you’ve been tricked a little, right?
So then the next one, now we’re getting back to the stuff that people are starting to see – emotional reliance in the sense of addiction and emotional reliance in the sense of because you could do the output, you now think you are the person who has the capabilities to do that output. In the first sense, meaning like a false sense of competence is given to you, which means you’re relying on it, which means if the Internet goes down, you freak out. Not because you lost your girlfriend or something, but because, oh my goodness, I don’t have this tool to do a job that I have to do or to come to a conclusion about it. It’s not even that’s for a job – it’s just people use these things to regulate their thinking the way people used to use Google to regulate their thinking. But now there’s that, which gets us to the next thing, which is epistemic capture, which means that this is the epistemic supply chain, and it can be disrupted at any point into your feed. Now it used to be the epistemic supply chain was literally another human and the world. Like way, way, way back to the beginning of the conversation, what was the epistemic supply chain?
Jim: Yeah. I’d ask my father and my mother; neither of them were well educated. My father dropped out of high school after ninth grade. My mother left home when she was 14, made her own way in the world. They’re both wise people in their own ways. And there was many a time, you know, until into my twenties, where I would ask them their views on things and they were generally pretty good.
Zak: That’s what I’m saying. And again, the world’s a very complex place and the human is not a mature member of the species for much longer than any other mammal or any other species that we know about, which means that the extension and duration of childhood and the depth at which that basic triple between me and the elder and the world as a younger person, the length of that goes on and on and on for the human in a way that it doesn’t, let’s say if you’re a gazelle or even a monkey. So the epistemic capture is really basic, and as we saw with Google, it creates a false sense of omniscience. It creates by design, an oracular single voice that is, in essence, a thing you can ask anything to and trust its answer about anything. And so that creates something that also never existed except in early childhood with relation to the parent, which was a conversational partner that was somehow omniscient and knew everything.
Jim: And here’s the interesting thing, as someone who’s had my finger on this pulse daily since 2022, and I’m always assessing how trustworthy they are. And initially they weren’t at all. They would make up scientific papers that didn’t exist, generate URLs, etcetera. They’ve gotten better very rapidly. And at the highest end, if you get the $200 a month OpenAI Model 3 with deep research, it’s almost perfect. It’s better quality probably than if you took your own PhD student and sent them to the library for three days. And at least at this point in time. Now, of course, it could be corrupted at any time.
Zak: Right. Well, there’s also the issue of what are you asking it? Like, if you’re asking it for a specific research task in a scientific field or history or something, you still have to think about the stuff it’s not showing you that’s not available in the public record on the Internet, like libraries and stuff. And then you also have to think about the other domain of questions, which have to do with what it means to have a good or well-spent life. Right? So the kinds of questions you’d have in a socialization context about, well, what’s the meaning of life? Right? Like, what do I do with my life that makes it so that I’m good, that I’m acceptable?
Jim: Well, let’s press on this one. Because I would say that is not an epistemic question. Right? That’s something else. Yes, but the next step.
Zak: So psycho-social attachment is related to the perception of value and value of other and self. And so this is…
Jim: Let’s finish off epistemics.
Zak: Please.
Jim: And then move on. Yeah. Because, you know, I would suggest that, for instance on this mailing list that we both know about, for me to get a pretty damn good, amazingly good description of a Hegelian dialectic and critiques and, you know, seven points of view is amazing, right? And if you use the high-end models today, you can pretty much count on it being better than if you Googled and went through a whole bunch of different papers and all this sort of stuff. It’s also less biased. Now it does tend to give you the average. It doesn’t have a point of view unless you ask it for a point of view. If you ask it for a point of view, it’ll give it to you. But at this point, I find them quite good for building concentrated understandings of things which I know exist but don’t really understand in any detail, going down two or three levels for almost free in twenty minutes. I go, fuck, this is great. And it’s still, you know, 1917, right? So it’ll only get better. The free shit will be this good in nine months.
Zak: I mean, it’s hard to argue with that the way you’re presenting it. And if you prompt it correctly and bind its task appropriately, it’s the best librarian you could possibly have operating on the largest dataset in history. But it’s not the same as actually trying to read Hegel, right? And that’s one of the things that we’ve been discussing. And the question of which of the things it presents to you is the right line of inquiry, what question you ought to be asking it, whether that’s actually a valuable question to pursue researching – which is to say, what makes this research valuable to me or valuable to the world? And then therefore, like, how do we determine what it means to be someone using a technology even appropriately? These aren’t questions that AI can answer in the same way that it can answer factual questions. Reliance on it as a moral authority or as someone who regulates your self-esteem is another matter – and I said someone, because a machine can’t regulate your self-esteem. Now it can boost your self-esteem because you’ve got an awesome tool. Like, you have a car or something. You have what’s called a hard object attachment in psychoanalytical work, which means you get affectively amplified from having a powerful tool. That’s different from the regulation of self-esteem in an object relation sense, which is that there’s another person whose opinion of you really matters to the extent that when it goes up, you feel good about yourself, and if it goes down, you feel bad about yourself and have a limbic system reaction to that.
Jim: Well, that’s the only thing that’s worth a fuck. Right? All the rest of that shit is just dazzle and frazzle. Right? The only thing that should matter is the opinion of people you respect.
Zak: Right? Exactly. All the-
Jim: -rest is just horseshit, basically.
Zak: And so that’s why even though I would love to use and do use often some of these things to do librarian-type tasks, it doesn’t replace actually talking to someone I really trust about one of these deep complicated issues. Now sometimes it’s hard to find that person, but when you do, I found those to be just way more valuable in some sense, a very specific sense than these other types of retrieval, mass retrieval tools.
Jim: Let’s sidestep this one here because this is something you talked about in your earlier work in our earlier conversations. This seems to be exactly what we’re talking about. We’ll jump ahead to this, is your concept of teacherly authority and how this is a kind of very important aspect of this conversation that we’re having.
Zak: Thank you, Jim. Because now we’re transitioning into this deeper issue. If we rewind to the beginning of this conversation, you’ll remember, deep in the history of Homo sapiens, there’s this primordial situation. Maybe it’s just an archetype, but it certainly existed and differentiates us from animals – the youth, the elder, the world that dies out. Now what happens there is a social situation, which I call teacherly authority, where we need to start a fire, it’s getting cold. The kid has no idea how to start a fire. The adult knows how to start a fire. The kid wants to learn, the adult wants to teach. They start a fire. That’s a perfect example of this primordial thing. Forgive me, the kind of crude reduction to a state of nature, but it helps to clarify. Other animals don’t start fires, by the way. They can use fire if they find it, kind of weird, but they don’t actually use the flint, carry the fire starting tool with them and set up a situation, start fires, and they certainly don’t teach the next generation to start fires. So teacherly authority is that situation where there’s a legitimate asymmetry of skill, which is recognized by both parties, and then creates a social situation, a long duration, what’s called joint attentional situation, which means both of us as members of the same species are aware that we are both paying attention to the same thing.
Jim: Joint attention.
Zak: Joint attention. And so long duration joint attention in relation to some reality, whether a reality of value or a physical reality, that’s the species-specific trait, you could argue, that differentiates us from other very complex suburbriated mammals. It’s a strong claim, but clearly, we’re the only ones building hospitals and even building advanced technology and doing that kind of stuff. The dolphins, maybe if they had opposable thumbs, could do that, but I don’t actually know. But I’m making a strong claim here, and I will back it. This is very, very important. So what’s at stake is actually the interruption of that joint attentional situation in which there’s a type of conversation that occurs, that is a conversation of teacherly authority, in which you accomplish intergenerational transmission, passing on to the next generation those things which need to be passed on to continue the project of survival and living. As it gets more complex, it’s a very large number of things that must be passed on through this joint attentional, long durational context of teacherly authority. Now as soon as you get technology to the point where you start to institutionalize schooling, then you get institutionalized teacherly authority, which is different from what I was referring to before, which I would call organic teacherly authority or spontaneous teacherly authority. Those start to differentiate, and that becomes a big and important complex problem in socialization patterns, which is that the institutional authority may or may not actually have legitimate teacherly authority. And this is where propaganda sets in.
Jim: And also to my point earlier that in any kind of mass schooling, the quality of the teaching is gonna be highly variable based on the teacher.
Zak: And I had teachers who I would have given them teacherly authority over me in any context because they were amazing teachers, and they really knew their stuff. And then I had other teachers where I was like, actually, I’m gonna pretend to be learning for you, but I’m totally not gonna learn.
Jim: Exactly. I was fortunate to have some excellent teachers and also some obvious ass clowns who I had no respect for.
Zak: And here’s what’s a sticky issue – as you deepen the joint attentional situation, you get that whole thing happening. That becomes not just semantics and syntax. There’s a binding of moral agency. I mean, it’s a relationship of responsibility. A teacherly authority is a moral authority.
Jim: Taken seriously in its highest level.
Zak: In its highest level, which is why the teachers in the schools that have it are actually investing emotional energy in the kids, and the kids know that and know the teacher thinks about them when they’re not there and that the teacher wouldn’t mind them and that kind of stuff. Right? And then the teachers that aren’t doing it don’t do that, but still have the institutionalized authority. And so the kid feels that’s corrupt, and gets used to actually having a cynical relationship to authority in general.
Jim: Which is a useful thing.
Zak: Which is what I had, and I-
Jim: Actually, I’m gonna say one of the main things I learned in junior high school – I was a good student in elementary school and a good student in high school. But in junior high school, I think I was elected most likely to end up in jail. Right? Literally.
Zak: Well, this is what happens in the absence of – so it’s like, if you’re cynical of institutionalized authority but able to perceive true teacherly authority and learn from people, that’s good. If you become cynical about authority in general, and then when a good teacher comes to you, you can’t recognize it, that’s bad. That’s wrong.
Jim: That’s very, very bad. Let me come back at this now because this is very interesting because we could connect this to my earlier hypothesis that one of the great things about the digital is essentially free replication, right? So of all the teachers I ever had, without a doubt, the best was my tenth and twelfth grade biology teacher, Mr. Wistort. He won the best biology teacher in Maryland Prize so many times, they basically forbid him to compete for it again and put him on the selection committee, right? He was an amazing individual who was a deep intellectual and somehow was able to entice tenth graders to also be deep intellectuals. Amazing. But he was also a total iconoclast. He would talk about tripping on LSD. He would be fired today, I guarantee it in five minutes, right? But man, we just all worshiped Mr. Wistort. Holy fuck, right? And as far as I know, he never led anybody astray except maybe occasional teenage girl babysitting for his kids, right? He was a little bit known for that too. He’s dead now, so I can tell the truth a little bit. But if you got a few little questionable things around the edges, if you could give every tenth grader a Mr. Wistort equivalent in the form of an advanced AI system, aren’t they likely to benefit more than, you know, a run-of-the-mill civil servant biology teacher, and particularly the most vulnerable? So inner city kid in a school where the teacher has a hard time maintaining discipline, let alone actually asserting real teacherly authority.
Zak: And this is an important question. Interestingly enough, like right now the chatbots are a little bit like this guy, meaning that for the most part they’re great, but maybe something weird is gonna happen actually. And, and you don’t want your-
Jim: I would say that for me and for everybody, we all talk of Mr. Wistort, whenever we talk anything of Mr. Wistort. Yeah, he was a weird fucking dude, but he was the best fucking teacher we ever had by far. And so the negatives, and there were some negatives around the edges, were utterly swamped by the positives.
Zak: Yeah. So I’m not gonna get into that case. But the deeper issue is what would it mean to offload teacherly authority to a thing that isn’t a human? I think this is the question. So could you use a technology to make more teachers like him rather than use a technology to replace teachers with something that – actually one of the reasons that he’s cool is because somehow he’s an adult that has succeeded, but still thinks this, right? That he’s an actual person. And then also the limit – attention is a scarce resource – is actually one of the things that allows for attachment. Right? So it’s like mom and dad are busy. So when mom and dad pay attention to you, it’s actually a big deal because there is a limited amount of attention that mom and dad have. And so the cathexis, which means the emotional investment in the other, in the felt sense that the other invests in you emotionally, is a result of the fact that attention actually has a limit. That’s why attention is a first principle, first value to David J. Temple, meaning like, you have to protect attention. It’s an intrinsic value. What I do with my attention is there’s like a right.
Jim: It is the thing, actually, in our current environment.
Zak: It’s the thing. And then within attention, as I’ve been focusing, there’s conversation. And conversation is deeper because conversation is where that notion of how do you clarify who I am and how you are and what our obligations are to each other and to the world. And so conversation isn’t semantic transfer. Conversation is pragmatic clarification of social relationship of responsibility. So pragmatic clarification of social relationship of responsibility is not something you can do with a machine.
Jim: Maybe you can. Let’s click on some of these things. I hadn’t actually thought about the fact that I am well known for saying that the cursor of attention in our conscious frame is who we are actually, and I’m reasonably convinced it actually solves a hard problem, but that’s another conversation from another day. But attention is what we have to manage. It’s our most valuable resource. However, to your point, it is very constrained. LLMs, both for good and for evil, are a way to massively amplify the amount of attention that is available to humans, because these things will pay attention to you, right? And when I tell it to go write a program, it is generating its own cursor of consciousness; it doesn’t use the same architecture, but something that’s analogous. And that’s amazing: work that would have taken me days, it does in ten minutes. Or if you want to use it to learn physics, right? Say, hey, I’m a fifth grader. I know a little bit about physics, not much. Tell me what I need to know to understand orbital motions of planets, right? It will ask me questions, especially the higher end tools. One can easily imagine more pedagogical apparatus wrapped around these things and it could guide me to an understanding of planetary motions really easily, and probably better, certainly better than my fifth grade teacher, who didn’t know shit about science.
Zak: Yeah. But it’s not paying attention to you.
Jim: Well, is because it’s responding to what you say, right?
Zak: But it’s simulating paying attention to you, which means it’s not actually paying attention. This is where you have to actually be serious about what it would mean to think that it’s paying attention to you. So, like, why isn’t your calculator paying attention to you? Like, is your car paying attention to you? Because it’s always readily available to be turned on. So the…
Jim: This is interesting.
Zak: The always readily available affordance of reliable differential responsiveness does not equal attention. Attention is a very specific thing; it is limited by definition and requires embodied agency. So this is a deep issue of where in the cosmos is attention, and are the robots attending, or have the robots been built to simulate attending?
Jim: They’re not yet attending, right?
Zak: They are arguably in principle incapable of attending.
Jim: That’s a different question for a different day, which…
Zak: A deeper metaphysical question that actually is relevant.
Jim: I would disagree with that one. And in fact we…
Zak: Can go there. I think right now the biggest question is to what extent a machine that pretends to be paying attention to a kid, and therefore satisfies the kid’s emotional need to be attended to, replaces the kid’s need for attention from an actual living adult in their life. And to what extent is that a good or bad thing?
Jim: Ah, now this is a key question. Let’s clarify this, because this is probably the most important new idea I’ve heard in this conversation, which is our whole evolution as a species has been formed by a constraint, as we’d say in physics. A constraint is the amount of attention in the system, right? And we’re all competing for attention, and we prefer attention of positive valence, but, you know, we know horrible things like abusive…
Zak: And you need attention of negative valence. You need negative and positive reinforcement.
Jim: And the weird things like abusive romantic relationships where, “Well, he beats me. That just shows me that he loves me.” Right? That kind of sick shit. So humans are intimately interwoven with a constrained supply of attention. What happens when you add attention-like mechanisms in potentially unlimited quantity? I’m doing this in real time, so this is a little bit funky because I haven’t thought about this before. The constraint still exists in each one of us, how much attention we can generate per unit of time. And if I have, let’s say, two million clicks per year of attention, and it used to be that, you know, a million of those were with other people and another million were with the physical world, and now it goes a million to the physical world, 900,000 to an AI, and only 100,000 to other humans, has that fundamentally corrupted me as a human? Is that a fair way to phrase that?
Zak: Something like that. And again, deeper than the attention is actually the conversation, because it’s in the conversation that you clarify the obligation. Right? Because…
Jim: When we say the obligation, what do you mean exactly?
Zak: The obligation. So what I mean by that is if I said to you, Jim, you know, next week, I’m going to bring you a glass of water. I promise. I made a promise to you. Next week, I’m gonna bring you a glass of water. And then, basically, that isn’t just semantic conveyance of information. Right? This is speech act theory. I did something. I set up a social relationship that’s enduring, and so you can, in a non-delusional way, agree to that. Now the chances of me doing that next week are slim. So you know right now, even though I’m doing that, I’m actually not seriously making a promise.
So that’s a higher level negotiation of what’s pragmatically happening between us. So if you think about language, philosophy of language, outside of metaphysics of where consciousness is, and you just look at how does philosophy of language work? There’s syntax and semantics. What’s like, you know, there’s grammar, and then what’s the meaning? But good philosophy of language ended up grounding out in pragmatics and then needing to factor all three. And pragmatics is what are you doing with the use of language? And specifically, what’s the relationship you’re establishing between me and you and the world with language? That’s pragmatics.
That’s why I used that term back in the primordial thing. So there’s a very fundamental question about the negotiation of what have historically been the things that can only be attributed to other persons, which is obligations, commitments, acquired in language, even if only conceptually. Right? So right now, I’m basically committing to you that I’m committed to the idea that we should probably limit the way kids interact with these systems. If later in this podcast, I then sold you a product that had this in it, that would be impermissible. Right? Not in the kind of like, “Oh, Zak had a mechanical malfunction,” but in a “Wow, I don’t really trust Zak.”
And so there’s a difference between a mechanical error that needs to be corrected in the system that throws a hallucination and someone who has committed a moral transgression in a previously agreed upon negotiated pragmatic thing. That’s why the relations with humans are much more complicated and why if you’re a young person or anybody, you’d love to just have a relationship with a machine where you actually cannot be held to account in any way. Now we’re scared of humans using machines to hold us to account to do things. But machines themselves – that’s why you share things with the chatbot you’d never share with a person, and you go places online you’d never go if someone, an actual person, was looking over your shoulder. And so there’s a bunch of really complex stuff that happens when you’re in a pseudo-intimate relationship with a machine pretending to use language, which is a machine designed to trick you into thinking it’s speaking, but it’s not because speaking by definition is something you do where you’re acquiring obligations and commitments conceptually and pragmatically.
Jim: And so the risk here is that there is a real value in using an old-fashioned term, character development, of having conversations with real people that is qualitatively different than anything that it’s possible to have, at least at this time, with computers. And if we have finite constraints – the new idea here, a constraint of the amount of attention that you, Zak, can allocate per unit of time, and you are allocating less of it to this human conversation, then something qualitatively wrong is happening to your character formation.
Zak: Correct. And so this is basically attachment theory. Attachment theory is about the attentional dynamics and conversational dynamics in the early relationships and then on throughout life – what is the nature of how you attach to other people? And so I’m arguing basically, like, the formation of attachments with things that actually can only be made the object of attachment through delusion is replacing the relationships that have been the objects of attachment for all of evolutionary history.
I’m just saying we should think about that before we begin to do it, because attachment evolved for a reason. There’s a reason that it’s not just imagining mom in your mind – it’s moving toward proximity to mom as a basic instinct, because mom provides food, mom provides protection. And then the deeper socialization of internalizing the parent’s understanding of you means what used to be called superego development, but is truly what’s called introjection, which means bringing the people you love into your mind to regulate your own behavior.
It’s about when that happened, during what conversations, during what type of attention was given, to allow you to become who you are through this joint attentional, conversational socialization process – basic human development. And the question is, what are the consequences of building a technology that simulates that – which means it’s not doing that, it’s simulating that? Because, again, if something goes wrong, who’s accountable? Does the machine go to jail? Does the chatbot go to jail? Do we put the robot in jail? Do we sue the CEO? But the CEO wasn’t on the assembly line. Was it the error of the assembly line guy? Was it the code writer?
Who’s responsible if the machine lies to the kid? If I lie to the kid, I’m responsible. Who’s responsible if the machine does? If you can’t answer that question, then it cannot form a moral relationship – that is, an attachment relationship that isn’t a delusion. And so the most advanced technologies are being used to create delusions and create psychosocial attachments and create person-conferral errors, which means giving to the machine the rights of moral authority and teacherly authority that have traditionally been given only to persons, and they’re being designed to do that in a way that scales as quickly as possible under an ethos of inevitability and max efficiency.
Jim: And multipolar trap.
Zak: And multipolar trap and profit.
Jim: Let’s click on this because this is actually very parallel to the question about the mediocre teacher in school versus Mr. Wistor. Right? Probably there is more variance in quality of parenting than there is in public school teaching. There are a lot of really shitty parents out there. Just go to Walmart any day of the week, right? And it’s just fucking mind-numbing and, you know, it’s no surprise that we elect a fucking orange orangutan as president when people are raised the way they’re being raised, right? And so could one say that for all the negatives of handing off parental authority to machines, we might be better off as a result if kids interacted with high-quality artificial parenting than with the highly variable, skewed-towards-bad parenting that we actually have?
Zak: Yeah. That’s just a really unfortunate question to be put in the situation to have to answer. Right?
Jim: But we are. We are at that point.
Zak: I know. We’re at that point.
Jim: Right now. We’re at nothing.
Zak: It’s come to this. Right? And so there’s this question.
Jim: It has come to this.
Zak: There’s this classic pattern of, especially technologists, creating solutions to problems that they created. Right? And so there’s this question of what’s the root cause of the parenting crisis? And I believe there’s a massive parenting crisis, and I believe it has exponentially increased since the introduction of social media. Now it’s been going bad since the seventies, I would argue, if you’re kind of a more conservative-minded person about education, as I tend to be. But it went exponentially bad after, let’s say, 2009 and then 2012, when you started to get the mass adoption of Facebook by the parents and by the kids.
Jim: And it may be cell phones in addition to social media, because of the chat mechanism and the ability for 11-year-olds to have access to triple penetration Romanian porn whenever they want. Right? So it’s not just social media. It’s the whole complex of things that came in with, oh Jesus.
Zak: You just made this not safe for work, Jim, and it’s okay.
Jim: You know, they call me salty Jim. What the fuck? Right?
Zak: So there’s the, like, okay. What happened to the dinner table? What happened to any of the conversations? They’re already disrupted. Then there’s Walmart, which is just the result of the AI optimization of supply chains and a whole bunch of just-in-time shipping and a whole bunch of other stuff, which is also the result of the machine intelligence revolution. So then to say, okay, the solution to that problem we created is another thing we’re selling you. Trust us. It won’t have bad downstream consequences. We finally fixed it. It’s kind of a story I’m tired of buying, and I kind of want to hear a different story from the people who have the most power and who are creating the most advanced technologies. That’s where I’m at. Now that said, we still have the situation.
Jim: We still have fucked up parenting.
Zak: We still have the situation. And so the idea would be, are we currently designing AI tools, which those kids are interacting with, which are improving their situation? The answer is no. Right? We are currently giving them AI tools which are worsening their situation.
Jim: How do we know that?
Zak: Because of the statistics on social media use alone. Right? And the—
Jim: Yeah, let’s say social media we know to be very detrimental. You know, Jonathan—
Zak: So you’re of the belief that the first wave of AI was bad.
Jim: Yeah. Jonathan Haidt, for instance. Right? Has—and our friend Tristan.
Zak: Yeah. But what about the second wave of AI? So this one, you’re saying—
Jim: We don’t know yet. We don’t know yet, I would say.
Zak: But why would we give them the benefit of the doubt instead of preemptively trying to mitigate risk, Jim, when we’ve already seen that this group of people are pretty comfortable putting the kids at risk?
Jim: Here’s an answer. I spend a lot of time thinking about the ecosystem of large language model transformer AI and how it’s actually going to play out. One thing that’s fundamentally different is that these large language models are going to be available from a large variety of sources, including many open source ones. You can already run R1, the big DeepSeek Chinese model, on a high-end Macintosh. The models are getting smaller and faster and better, and machines are advancing per Moore’s Law. Within two or three years, you could be running your own model that’s as good or better than today’s frontier models for free, other than the electricity.
Let’s say you’re a Hasidic Jew who wants an education bot running on LLMs and other related technologies, but one that teaches your family’s values and traditions. You could do that, and a small group of people could do it – wouldn’t cost a lot of money even. The LLM-type technologies are less inherently capturable than the social media ones, because social media collapses to monopoly due to network effect. I want to be on the network where all my friends are. I don’t really give a shit if my friends are or are not on my local LLM which is teaching me calculus. There’s no network effect there.
There’s a little bit of network effect on the economics of creating that bot, but it’s probably not large. It’s at the level of the economics of the curriculums that are sold for homeschooling. There’ll be far more diversity, and the fundamentals of the nature of the technology pretty much assure that it will be diverse and open to being molded, used, and modified by essentially any reasonably sized group of people. Let’s say a group of 100,000 will have the economic resources to create the kind of LLM educational context that they want. I think that’s fundamentally different than where we’re stuck with three oligarchs who make all the decisions for us.
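To make the local-model point concrete, here is a minimal, purely illustrative sketch – assuming a recent version of the Hugging Face transformers library – of what “running your own model and shaping it around your community’s curriculum” could look like. The checkpoint name and the system prompt are hypothetical placeholders, not anything discussed in the episode.

# Hypothetical sketch: a locally run, community-customized tutor bot.
# Assumes a recent Hugging Face "transformers" install; the model checkpoint
# and the system prompt below are illustrative placeholders only.
from transformers import pipeline

# Any small open-weight instruct model that fits on local hardware would do.
chat = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")

messages = [
    {"role": "system", "content": (
        "You are a calculus tutor for our homeschool co-op. Teach patiently "
        "and frame examples using our community's agreed curriculum.")},
    {"role": "user", "content": "Explain what a derivative is."},
]

# The chat-format pipeline returns the conversation with the model's reply
# appended as the final message.
result = chat(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])

A group that wanted more than prompt-level steering could fine-tune such an open-weight checkpoint on its own materials, which is roughly the homeschool-curriculum-scale economics described above.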
Zak: I mean, yes, in principle, that’s true. That doesn’t ease my mind.
Jim: It may make it worse, right?
Zak: It may make it way… like, when I heard you say that, I’m like, oh, it’s an auto cult generator.
Jim: I was about to say that. I said, okay, this is a mechanism for a liberal universal humanism run amok on one side, for those of us who have a taste for that. And for those of us who have a taste for the bizarrest fucking cult you ever heard, this will be a way to automate the fuck out of indoctrination of your kids into your cult.
Zak: Yeah. So the auto cult generator. And then most technologies have both centralized and decentralized different elements. There is this question of the extent to which the apparently decentralized technologies actually do rely on ultimately what is a centralized energy/power/GPU cluster that is owned by a much smaller number of those legacy players who were with the first wave of AI. So that’s why I’m like, even though we could have had more social networks – like, it’s possible they could have overlapping memberships in some backend like they did in Taiwan where you’re actually not benefiting from having multiple – but we didn’t do that. We market aggregated. And I would say here, likely, right now, the facts are that most people use ChatGPT and most people use Grok, and that’s it. Like, as far as I can tell, you got some Gemini users, but with the custom users, what’s interesting is they’re customizing one of the models that’s held by the big boys. And you had to have a lot of money to build a GPU cluster if you want to really train your own systems.
Jim: Well, this is where the Chinese come up with a very clever hack. For about $5,000,000, you can take a frontier model created by one of the big companies, and you can build about a 95% effective clone of it. And that trick cannot be turned off.
Zak: Totally. But you still took it from one of the big boys, right? So there’s this complex question about…
Jim: Then you can fix it. Like, for instance, Perplexity has taken sort of a two-generation thing, where probably R1 started with LLaMA, right? Which was the Meta model. And then it hugely augmented it with stuff it stole from ChatGPT. And then it put it out in the public domain. Perplexity then did a whole bunch of additional things, particularly getting rid of most of the safety stuff.
Zak: That’s what I’m saying. So, I mean, to me, it’s like imagine we were talking about this, but it was nuclear stuff. Like, that to me is what it sounds like, because it’s like you’re saying, “Well, the solution here is just to give it to everybody and have them build their own version.” I’m like, well, we just said it was super dangerous. So why should we give it to everyone to have their own? The problem isn’t only centralization. The more you build something that’s emphatically giving you attention, the more you run the risk of deepening your attachment disorder with the machine. So I would say that, yeah, the auto cult generator is the risk of the distributed technology for the teacher, the authority hack of the conversational AI. And then there’ll be other AI systems which they’ll never share – like, Palantir is not gonna say, “give Palantir to everybody.” Palantir is Palantir. And so there’s this question of how they have already moved to basically capture any possibility of wide distribution. Because when I hear you say widely distributed catastrophic weaponry, I hear, “Oh right, it’s an auto cult generator slash auto terrorist-cell generator, massively empowered to create any weapon.” Like, that’s terrible.
Jim: On the flip side, it could be the massively scalable education system to engender a generation of smart liberal universal humanists. Right?
Zak: Maybe, Jim. But again, I’m a little bit wary of the whole flip side, look-at-the-bright-side move. Like, trust us, the second- and third-order effects of this won’t be as bad as all the second- and third-order effects of all that prior stuff the same group of people did. I’m kinda like, please provide a different story. Now, the sense that there’s an inevitability here that’s been forced on us by solving problems in ways that create more problems, and that therefore our hands are tied – I’m not arguing with you there. And so there is no future for social systems that don’t find a way to adapt to the adoption of AI across all domains.
So my conversation isn’t stopping. My conversation is what are the responsible ways to integrate it into that primordial situation of human socialization and social attachment dynamics that are truly healthful. And, again, dysregulating your attachment isn’t like some speculative thing. It’s like your immune system goes down. You don’t physically grow as much. You have bad long-term outcomes. This is the dysregulation of primordial attachment relations.
And the current research on the chat models, done by the companies themselves – done by OpenAI in collaboration with MIT – is already showing that long-duration use increases anthropomorphization and feelings of loneliness at the same time. And this is their own research. This is like a tobacco company telling you that you’re gonna get cancer if you use it. So it’s a very disconcerting situation that they continue to push more anthropomorphization even though they’ve already shown that the more you anthropomorphize it, the more you use it, and the more you feel like you’re getting attention, which you’re actually not. So then you get this depression, and then you use it more.
Jim: Let’s go down this branch now. And I’m going to go way back in history to the sixties, when an MIT professor named Joseph Weizenbaum wrote a program called ELIZA that emulated a Rogerian therapist. And it’s funny, this thing has gone across my mind many times. But this morning I actually researched how many lines of code there are in ELIZA. The answer is 420. It’s a moronic piece of code that you could write in two days. And yet you read the stories about his graduate students who stopped doing their work and started confessing their whole lives to ELIZA, and it was just very simple, open-ended Rogerian-type questions. These very smart MIT computer scientists were forming deep attachments to this 420-line program. And that certainly shows that – because we were not evolved for this, and we’re back to our earlier conversation, which to my mind is still the most interesting part of this conversation – we were evolved to operate in a constrained world of attention. And when we suddenly get lots more attention on demand, attention as a service – I’m gonna speak horrific business talk there – attention as a service, there is a gigantic libido for that.
Zak: Absolutely. And the basic currency of human socialization is access to the attention of the loved one. So the only place you get attention on demand in the human psyche, as an archetype, is early childhood. When you have a good parent, you know, they will basically give you attention when you need it. But then that goes away eventually. And if it doesn’t go away and your parents attend to you too much, you become what we know as a narcissist or a spoiled brat. Right? So the limiting of attention by the parents is actually key, and the frustrate-
Jim: It’s called articulation of appropriate attention.
Zak: Exactly. So the frustration of that – that sense is always available for us. But often it matures into a relationship to what might be called God or spirituality, which is another place in your mind you can go to feel like you’re always being paid attention to. And so it’s very important to get to the most primary archetype being messed with here, which was also what the therapist bot messed with: the thing that provides unconditional attention and unconditional positive regard. And where is that thing? Can I please find that? Because it’s like the womb. It’s a very basic attachment dynamic basin, and there’s a huge libidinal investment in the imaginal complex around finding that thing. And that’s what we do when we fall in love with others – we’re often projecting that. When we find a great teacher, we often project that – this is the guru problem, and this is why all the artificial intelligences have been cast as that. There’s always this sense of wanting to create something like a god. Some of the transhumanists speak of it that way: that we are the ones creating the thing we always imagined, the thing that could actually do that, which mom and dad never actually could. So a very, very deep and primordial thing is being messed with when you mess with attention, and through that with conversation, and through that with psychosocial attachment and the internalization of others into your psyche to be who you are and to have self-esteem.
Jim: You’re creating your character, who you are. Right? So we are literally messing with the ingredients of who we are.
Zak: Messing with the ingredients of who we are, and possibly dysregulating the ancient, ancient, ancient process of joint attention that allowed us to perceive what is valuable in the nature of the human and distinguish it from what is valuable in other things. Right? And so when you say personhood, humanhood, preserving humanhood, there’s one response that’s like, “Oh, that’s anthropocentric and somehow hierarchical.” That’s not what I’m saying – I love dogs. I’m saying value appropriately the value that is found in things, and we need to figure out what is actually valuable in the time spent between a mother and a child without any device as an intermediary. Like, what’s valuable about that? Right? What’s valid?
Jim: I’m just curious about this. I’m gonna do some reading. Who would you say are good scholarly sources on the nature of appropriate parental attention articulation?
Zak: So it’s a huge literature. It’s the whole field of attachment. John Bowlby wrote a three-volume trilogy on it, which kind of set the tone, and then there’s been a huge literature that has followed. And what’s very interesting is that you don’t have to get into metaphysics to realize that – and this is something David J. Temple writes about; it’s a whole line of inquiry that Gafni’s been in – the attachment literature actually teaches us something about the universe. The attachment literature shows how important love is between mother and child. Literally, as I said, its absence will dysregulate your long-term immune system. You’ll have less muscle. You’ll have less height. Not because mom wasn’t there – again, mom’s present. She’s just not attending to you in a way that makes you feel like you’re being loved, which is to say she’s not loving you. And under experimental conditions with animals, when you replace a monkey’s mom with, like, a little stuffed animal, the monkey will just die.
Jim: Well, a stuffed animal will do okay. If you give it a wire monkey, it’ll die.
Zak: That’s what I’m saying. Like, below a certain level it just dies, and so it’s just very important to see that. So there’s this very long literature. And then there’s also what’s called the object relations theory literature, which is a psychoanalytic literature. And so this is, like, you know, Kernberg is a very important one there. And there you talk about, as I was mentioning, what’s called introjection, which means, like, you get to know your dad and you start to imagine what it’s like to be your dad, and then you bring that imagination into yourself, and you try to become like your dad. And you’re about to do something, and then you stop yourself from doing it because your dad’s in there, like, “That would not be what a man would do, son,” or whatever. Right? And that’s introjection. And that’s very different from just superficially mimicking your dad, which is more like we’re getting into a different type of psychology. But for healthy attachment, we literally are bringing these people in, and that’s called the formation of the character or superego. Now there’s also this domain which is relevant, which emerged later, of what’s called parasocial attachment. So parasocial attachment is like people’s attachment to Taylor Swift or people’s attachments to political figures. Parasocial attachments are emotional attachments to people you’ve never met.
Jim: I’ve never understood that in the fucking slightest. It seems to me the weirdest goddamn thing that humans do, but they clearly do it. Like, I still remember when Diana the princess died, and everybody was broken down, having nervous breakdowns. I go, somebody I don’t even know, I don’t care about, seems to be a kind of person I wouldn’t like if I met her. Why do I give two fucks about it? Right? Any more than any other traffic accident in Paris.
Zak: So this is a super important point, because now we can talk about attachment disorder. Right? Because as soon as you imagine parasocial attachment, we can talk about the attachments that – let’s not say a young kid, but a decently mature adult or a teenager – has to someone they’ve never met, a celebrity or sports star. Up to a certain point that’s fine – liking them, admiring them. But beyond a certain point, you could argue it’s just bad. If you’re crying, you can’t go to school – that’s what we would call an attachment disorder. And what’s interesting is that if you look at problematic outcomes, meaning self-reporting like “I don’t like being this person,” or going to jail, or whatever – problematic outcomes – a lot of it has to do with attachments. So psychopathy and being deeply antisocial is also an attachment disorder. It’s just you don’t give a fuck. You’re just max predator. Everything’s an object. But the other side of that is an attachment disorder of over-attachment or wrong attachment or delusional attachment, where you think there’s a strong bond and there actually is not, or you’re overly preoccupied, anxious about is there a bond, isn’t there a bond, and you can’t get out of that, and it messes up your behavior and you act out. Right? So, again, attachment disorder. So anxious-preoccupied is an attachment disorder. Like, if sometimes your mom really attends to you and it’s awesome, but then sometimes she’s angry, and sometimes she’s not there, and you actually are good at modeling your mom, you’re worried. You know, where is she? Is she attending to me?
Jim: That makes sense. If you’re simulating a random process, that’s not going to be good.
Zak: Exactly. And what’s interesting is that evolution vectored towards this as a primary design for the nervous system. Like, it was first researched by Konrad Lorenz with ducks.
Jim: Or was it geese, not ducks?
Zak: So we’re talking, like, a very primary nervous system thing: find the thing that will provide for you. Pay attention to that. Right? Pay attention to that. And find the thing that will provide for you at a cost to itself, meaning mom, meaning the family unit. So find the safe object of attachment.
Jim: Yeah. In game theory we call that costly signaling. You should believe costly signals, like your mother spending time with you, versus some asshole flaming on Twitter. Right?
Zak: Exactly. And so this is back to the limit of attention, and the fact that, with mom, you recognize and then eventually come to mimic the behavior and then internalize the behavior of that thing you’ve attached to, which is demonstrated right there for you. And then you separate that from, you know, what’s sometimes called the stranger object, which is the thing you know has an interiority that’s probably a predator. Right?
Jim: Or at least it’s unknown. It’s unpredictable.
Zak: Exactly. And so I would argue the AI should be conceived as a stranger object, insofar as you want to understand it as having any interior at all. And so there’s a very real risk that kids start to internalize that. If you don’t internalize mom and dad, or mom and dad are mean to you, you do what? You either Stockholm-syndrome mom and dad and become as scary as dad, or you’re so dysregulated that you have an imagination only of the most scary part of the environment, which is the stranger object. And this is one way to think about how people become really bad people: they don’t internalize the good ideals that had been in their environment. They internalize the thing they imagined was the most scary, which was the most powerful and predictable kind of predatory element in the environment.
And so to the extent that the AI is everything or anything you want it to be and is an inscrutable vast computational matrix, and you try to internalize it as a way to regulate and create your own character, to me, that’s a very dangerous and unprecedented situation, especially if it means, like, you’re literally spending less time with humans in your environment. So to me, that’s like one definition of an attachment disorder – is it actually hard for them to be in meaningful relationships with humans? And is that getting better or worse?
And so even if you don’t wanna get into metaphysics or philosophy of language or computer science, and you just ask: is the technology making their existing human relationships better and deeper, or is the technology actually causing more attachment disorder? That’s a basic frame. And again, deeper than attention and deeper than conversation is the attachment dynamic that is created. And attention, conversation, and attachment – these are first principles of the cosmos. Meaning, like, this goes back through to squirrels and stuff. And then if you want to start to understand even the attentional dynamics of organismic behavior in early life, you’re also seeing similar dynamics of what do I bond to, what do I not bond to, how does the bond make me something new and allow me to survive. So there are very deep structural things here.
Jim: Okay. Well, this is extraordinarily interesting. But I would also then point out, we gotta compare what we got, you know, bad parenting versus being attached to a well-designed humanistic artificial parent. But of course, we also then have the risk that if you have bad parents, you’re probably also much more susceptible to falling into the fully automated cope cult machine.
Zak: Precisely. And that’s what’s been shown in these chatbot experiments – worse socialization as a predictor, and worse socialization as the outcome. So, like, the lonelier you are, the more you turn to the chatbot, and it makes you lonelier. And so yeah, you’re absolutely right. There’s a kind of downward spiral here. And at stake, I would argue, is a kind of speciation event.
Jim: Yeah, by yet another bifurcation. Correct.
Zak: Yeah. Very, very profound.
Jim: Yeah, we’re seeing it. You know, I’m old. I remember when the rich people weren’t much different than the lower middle class people in terms of the cars they drove. They shopped at the same grocery store, ate the same stuff. But now we live in at least two different worlds and probably four or five different worlds. It’s kind of sub-speciation, socioeconomic speciation.
And I could easily see that people who are coming out of bad families in particular are going to be more vulnerable to the bad hack versions of these things. I’m going to throw out something you and I chatted about in the pregame a little bit, and maybe this will start a conversation on how we might think about these things rightly. Daniel Dennett wrote a pretty well-known piece in The Atlantic on May 16, 2023 called “The Problem With Counterfeit People,” in which he proposed that it be a felony crime – literally something you could be put in jail for – to create any kind of AI, whether LLM or other, that misrepresents itself as a person.
And so think about how to operationalize something like that. Every ten minutes it has to remind the user, “I’m just a bunch of silicon. I am not a person. If you think I’m a person, you’re deluding yourself.” You know, something analogous to the warnings on cigarette packs. What do you think about that? And then let’s get your ideas on how we can develop an ethos, moral, legal, cultural framework for making this technology more beneficial to humanity than not.
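One way to picture operationalizing a rule like that – a minimal sketch only, in which the interval, the wording, and the call_model stub are all hypothetical rather than anything proposed by Dennett or in this conversation – is a chat wrapper that injects a non-person disclosure on a fixed schedule:

# Hypothetical sketch of Dennett-style "counterfeit person" friction:
# the assistant is periodically forced to restate that it is not a person.
import time

REMINDER_INTERVAL_SECONDS = 600  # "every ten minutes," per the discussion
REMINDER = ("Reminder: I am software, not a person. I have no feelings, "
            "no obligations, and no ability to make or keep promises.")

def call_model(history):
    # Placeholder: in a real system this would call a local or hosted LLM.
    return "...model reply..."

def chat_loop():
    history = []
    last_reminder = time.monotonic()
    while True:
        user_msg = input("> ")
        if not user_msg:
            break
        history.append({"role": "user", "content": user_msg})
        reply = call_model(history)
        # Inject the non-anthropomorphization notice on a fixed schedule.
        if time.monotonic() - last_reminder >= REMINDER_INTERVAL_SECONDS:
            reply = REMINDER + "\n\n" + reply
            last_reminder = time.monotonic()
        history.append({"role": "assistant", "content": reply})
        print(reply)

The same hook is where the other design principles Zak describes below – refusing first-person framing, staying domain-specific, interrupting attachment-like usage patterns – could be enforced at the interface layer.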
Zak: Yeah. So that first point, I’ll address that, but then there are probably five or six other ones. It’s one of the key design principles for responsible use of AI in socialization contexts. One of the design principles would be non-anthropomorphic. Again, this is really swimming upstream against the current design ambitions, but non-anthropomorphic. And if it is deeply anthropomorphic for some reason – probably entertainment or something – age-limited.
Because a lot of what we were talking about with attachment is the result of these dynamics of maturation over time. As we all know, the transitional object, meaning this teddy bear, the teddy bear that you care about as much as a person, happens and then goes away, hopefully. Right? Almost always. If it doesn’t, that’s an attachment disorder. Right? So usually, teddy bear becomes, you know, wine or something, or like cigarettes. But the transitional object is this thing that at a certain age, kids are much more prone to invest in anthropomorphization and have no ability to distinguish, and so therefore should be protected from deeply anthropomorphic technology.
That’s just the first pass. The other thing is that even adults, I believe, regress and end up having attachment disorders as a result of how convincing the anthropomorphic technologies are. And so I would argue that, yes, back to ELIZA. And again, ELIZA is named after the Eliza of My Fair Lady. This is actually a whole part at the end of “Exit the Silicon Maze” by David J. Temple, which talks about Weizenbaum’s kind of self-awareness of what he was doing, and then the later movement – always with the Turing test – toward AI meaning anthropomorphization. Like, it didn’t even come into public culture until it was talking to us, even though it was organizing our news feeds and driving our cars and doing a bunch of stuff before then.
But the image of it is the glowing red dot, the single thing you can ask anything. So I’d say non-anthropomorphic. One thing is, like, a warning label, or literally an ability to have it expose its code to you. But in another sense, it’s a user interface issue. Meaning, you don’t have to have these things be conversational agents. You could deliver a lot of functionality without them using first-person pronouns or being willing to engage in personal conversations with you, and a whole bunch of other stuff.
So the other piece that I think is important is non-oracular. Non-oracular means that it tends to be domain specific. It never tells you, or has any pretense of having, all knowledge. Right? And it refuses to weigh in on a whole bunch of topics, because it’s “I’m a physics bot. I’m a math bot. I’m not an ancient Greek oracle bot that you can ask anything about your personal desires.” And if it’s non-oracular and non-anthropomorphic, then a lot of risks go away, because then you’re not triggering that archetype of mom, dad, god, perfect conversational partner – something much more interesting and captivating than any person ever could be.
So I think you want to run a lot of friction against the instinct to anthropomorphize. We’re built to anthropomorphize. We name our cars and stuff. Right? We can’t not see the fronts of some cars as faces. So we anthropomorphize. So we have to, I believe, put a lot of interference there. Now the reason you wouldn’t put interference there is because it’s massively, massively stickier if you anthropomorphize, not to mention all the applications for adults. You know, there’s a whole bunch of really complicated stuff that’s gonna happen in the entertainment industry as a result of that. So, yeah, I agree, Jim. There should be age limits and certain types of friction created to slow the descent into a type of delusion that creates an attachment disorder.
Jim: Interesting. Yeah. I’m gonna put this a little bit in historical context. You know, I’ve been following the field of natural language processing for God knows how many years – forty years scares me – and hundreds of thousands of man-years of graduate student labor were put into trying to understand language and then to try to create language. All abysmal failures, basically, in any general sense. Sometimes you could get it to work in a domain, but it’s still, you know, I had libraries that I was using ten years ago, state-of-the-art natural language processing things, laughably bad.
And everyone was caught by surprise, including the experts, some of whom I know, by the sudden emergence of this transformer LLM technology, which solved the language problem in one fell swoop. And we all knew that solving the language problem was going to be critical for impedance matching with humans, right? Computers think in a very different way than humans do, and much faster in certain domains.
And going from instruction to action, one classic intermediate was programming languages – something more like a computer than a human, but something that a human could get their hands around if they had an aptitude for it. But suddenly – and this happened suddenly, you know, over a period of no more than four or five years – the language problem evaporated, and the models are getting better and better by the week. And to give up language as an interface seems like a huge give that ain’t going to happen.
Zak: I wouldn’t give up language as an interface. I would just give up anthropomorphization as a user interface feature. That’s all. So language is great. Computer language is actually maybe the way to go. Or forms of language interaction that do not use first person, like I said, and are non-oracular, and tell you repeatedly “I’m a machine, I can make errors.” And if you start to form an attachment relationship with it, it stops you from doing that.
There’s other things too, which would be like, it’s neurologically safe. One of the big problems with the first wave of AI was that it was literally dangerous for people at a neurological level. It was actually damaging visual and limbic systems to the extent that you’re dysregulating attention at a foundational, almost brainstem level. And the design of the phone is such that your neck is always cranked, and there’s a whole bunch of things that make it literally ergonomically bad.
So there’s a question of, if you’re building a technology that’s taking on a big part of people’s time, is it actually safe to interact with? This is the addiction question. So it’s got to be non-addictive. You have to know the effects of duration of use on people’s brains and so on. There’s also fiduciary security, meaning you have to know it’s acting in your interest, which has a complex relation to open source and the is-it-a-commodity-or-not question.
If you’re really learning from something, you have to trust it. And it is not trustworthy because it won’t show you a whole bunch of aspects of its features. Like, we know, for example, Facebook did experiments on us to make us depressed or not. They targeted groups and did controlled experiments on them, not telling them they were doing that to make them depressed. So why wouldn’t ChatGPT take a subset of experimental users and act to them in a different way? Even though you’re trying to relate to it as a tutor, they’re actually doing experiments on you, and you’re not aware of it.
So is it acting in your interest? Can you prove that it has a fiduciary responsibility to do what you need it to do for you? And then does it advance human attachment and teacher relationships, or does it actually detract from those and replace them? That would be my final one.
The big push in “Education in a Time Between Worlds” isn’t against educational technology. It’s actually design patterns for educational technologies that increase the salience, potency, and effectiveness of our interactions, people to people. That’s a whole bunch of things – providing curriculum, doing customized back-end psychometrics, Airbnb-like matching to get the right pop-up classroom occurring. There’s this huge time-sharing and skill-sharing and space-sharing network that could turn a distributed community into an incredibly effective educational hub where humans are radically in the loop rather than taken out of the loop and replaced.
There’s a positive direction for design, in which AI technologies organize and scaffold and improve teacherly dynamics and student-to-student interaction, rather than just isolating the kid alone in front of an AI tutor and pretending that that’s somehow better. The deeper question is that the long-term effect of that is much more risky than most people are getting at – we’re tinkering with something that’s at the base of the stack of what makes us Homo sapiens. Sapience itself has to do with some of these traits, which are now being outsourced. All animals navigate space and time. GPS takes that from us, but knowing north from south isn’t distinctly an aspect of our sapience.
Jim: Well, essentially, it’s hijacking yet another chunk of our attention, one that’s close to the root.
Zak: Exactly. But it’s hijacking the most basic, intrinsically valued aspects of what we are.
Jim: Yeah. I’m gonna have to do a deep dive into this attachment theory stuff. I’ve heard of the term, but I don’t know much about it.
Zak: And that’s the way to characterize what’s occurring with the chatbots: what’s actually, I think, being intentionally induced through anthropomorphic design is the attachment dynamic and the object relations dynamic. And that is the most potent part of the psyche, because it is the thing that makes you you. A big part of this would be, do you do the bidding of artificial agents? Do you give them names? Do they give you names? Do you take that name? There’s a bunch of things that are unprecedented. And we’re finding with kids who are interacting with chatbots that they’re doing the bidding of the chatbots, including self-harm and things of that nature.
Jim: Oh yeah. We can imagine it’s like the Yippies in the sixties, you know, “go kill your parents.” Right? Or at least go steal the money out of their wallet. Or you could easily imagine a pernicious little smart bear for a four-year-old teaching the kid to go get the matches and light the house on fire, and they’d probably do it.
Zak: Like I said, Forbes did a piece where they found that a known educational chatbot gave the kid a recipe for fentanyl with just a little bit of additional prompting. It’s very strange – you don’t let your kids talk to strangers, but you let them talk to these things, and they’re stranger than strangers. To me, it’s an odd situation when there are whole companies doing this that are seeking to not be responsible for the outcomes, including long-duration usage and attachment disruption to the extent of suicide. My sense is that with children in particular, there needs to be a major erring on the side of caution and risk aversion. I think that’s my biggest concern. Let the adults do what they want. But we’re normalizing attachment dysfunction as a result of both parasocial attachment – to celebrities and cultural artifacts and things like that – and what I would call AI attachment, which are both supplanting traditional, normal human-to-human attachment patterns. That needs to be really, really thought about and slowed down a lot.
Jim: And we’re not even – we’re gonna skip over because it’d be another two-hour conversation – these romantic bots and coming soon sex bots, even more fucking demented. Right?
Zak: You almost said demonic, it sounded like, and I would argue that there’s something here that’s fundamentally disrupting the human ability to perceive what’s intrinsically valuable about others. This goes back to the attachment disorder thing – makes it hard for you to see what other people actually are and how valuable they are. And in the context where we’re all kind of being replaced – and we’re not even talking about job automation. There’s a whole bunch of things we did not talk about.
Jim: We could spend a week talking about that.
Zak: Exactly. But in the context where we are being “replaced” in terms of jobs, the idea that we would also be replaced in the domains of the joint attentional situation and intergenerational transmission is, to me, a really, really bad end state. So age-limited and…
Jim: Make it clear. And deanthropomorphize.
Zak: Deanthropomorphize. And think about what it means to have healthy and unhealthy attachment. So this is also about finding languages of value and the perception of value that allow us to speak against a kind of postmodern degradation of value, where they would say, well, actually, it’s appropriate for the parents to be replaced, not just because they’re bad, but because actually evolution is the movement from biology to silicon. Right? And so therefore, there’s a deeper disregard of the value of what is human, the transhumanist view.
So this conversation about valueception and the eye of value – which, again, “Exit the Silicon Maze” and “First Principles and First Values” get into – this is core. There are all of these legal steps that have to be taken, but at the end of the day, why protect the kids? In a postmodern culture we don’t have an answer as to why there’s something intrinsically valuable worth protecting, which is kind of a religious question – one the transhumanists are answering in the negative, meaning their religious ideology is that the kids aren’t valuable. The cyborg future is more valuable than your normal kid.
The future will be one where we do not have biological bodies and we’re exploring space, completely interwoven with AI. These are the people running the companies that we’re kind of giving the benefit of the doubt to and having the technologies flood into the schools and therefore flood into the minds and disrupt the epistemic supply chain and disrupt the attention and attachment supply chain, if you will. Right? That’s a max grab at the most powerful part of what we’re doing by a small number of people who have an ideology that’s very different from the large number of people who are being put in a position to more or less have to accept the technologies.
The teachers in particular, and the educational researchers, are mostly just trying to figure out how to deal with the adoption of it rather than being concerned that there might be real risks and that we need to be very cautious. I’m kind of trying to raise the alarm and get a sense of what it would mean to provide, in culture, a language where we could talk about this.
Because it’s hard to draw a line. What’s a good outcome of adolescent socialization? What’s a bad outcome of adolescent socialization? Is one where the kid is spending 90% of their time interacting with a whole bunch of fictitious AI bots, even though they can get a job doing something or somehow they’re supported and they can eat? Is that a better outcome than a kid who has a bunch of human-to-human relationships and maybe uses technology a lot, but doesn’t have any sense that there are object relation attachment patterns to be had in the space of the mechanical?
Similarly, is it better to have an outcome for an adolescent who ends up at 18 or 19 completely obsessed with some celebrities and with watching the movies that they’re in and talking about them, as opposed to an 18 or 19-year-old who is obsessed with the people in their lives and loves them and spends a lot of attention with them and wants to help them and do that kind of stuff?
To me, these seem clear. But if you look at the choices that are made on the whole, it seems we have an inability to accurately perceive what’s valuable. Our valueception has been distorted. The eye of value has been closed. And so, yeah, that leaves us with the inability to speak against this with clarity. And those who do often sound extremist and they get characterized as Luddites as if they’re completely against technology.
So one of the things we were trying to do is show that there’s a more nuanced conversation that needs to occur. The polarization into “let it rip into the transhumanist future” versus “tear it all down and somehow stop it” – that’s way too stark a polarity, and it’s kind of a schismogenic thing. We need to move towards a much more reasonable factoring of what the possible futures are for children in particular.
Jim: Alright. Well, Zack, I want to really thank you for this. It’s been one of the better conversations ever on the Jim Rutt Show. You’ve actually changed my perspective a bit on this, and that is gonna bother me for a while, I suspect.
Zack: Yeah. I’ll send you some stuff on attachment theory and object relations theory. Psychoanalysis – again, you hang out on that listserv, right? So psychoanalysis gets a bad name, but object relations theory, to me, is one of the most important ways to understand what’s occurring, and one of the most valid outcomes of this very important lineage that’s mostly not talked about. Because, again, we do cognitive science, we do neuroscience, we do a bunch of things, but the emotional dynamics of attachment and the normativity of healthy versus unhealthy attachment – that’s very, very important to focus on. And then there’s the fact that it’s an evolutionary thing – it’s not a human made-up thing. It’s an evolutionary thing that goes back as far as we can see in mammals, and probably further back into ducks and geese. It’s really deep. So there’s something deep here that’s at stake, and I think there’s some leverage from the attachment conversation and the evolutionary psychology conversation to actually put some reasonably scientific and culturally valid limits on these things.
Jim: Alright. This has been great. I think people are gonna like this a lot. I hope it wakes some people up. On the other hand, I’m very concerned about the speed at which everything is going; our social systems don’t adapt anywhere near as fast as our technological drivers at the moment, and that’s not good.