The following is a rough transcript which has not been revised by The Jim Rutt Show or Josh Bernoff. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Josh Bernoff. Josh works closely with nonfiction authors as an advisor, coach, editor, or ghostwriter. He has collaborated on more than 50 nonfiction books. Josh writes a blog post on topics of interest to authors every day at bernoff.com. He was formerly senior vice president for idea development at Forrester, where he spent twenty years analyzing technology and business. Prior to Forrester, Josh spent fourteen years in startups in the Boston area. Welcome, Josh.
Josh: Hey. It’s great to be here.
Jim: Yeah. This should be an interesting and very timely conversation. We’re gonna basically talk about AI and writing. Josh being an expert in writing, and particularly where writing meets the commercial world, i.e. getting people published, I thought he would be an interesting guy, and we talked about his blog in the intro. I actually picked up the piece that we’re gonna be using as the pivot for this show today from his blog. It’s called “Could AI Replace the Teaching of Writing? Why the Boston Globe Op-Ed is Dead Wrong.” And so, of course, that implies that we also need to talk about the target that Josh is responding to, which is Stephen Lane’s op-ed from the Boston Globe on 11/15/2024: “AI in the classroom could spare educators from having to teach writing.” So this is a classic, fairly dichotomous – though it turns out not quite so dichotomous after all – set of essays. And so we’re gonna dig into that. But as always on the Jim Rutt Show, we only use our nominal topic as a scaffolding to talk about whatever the fuck we want to. So maybe just give us a little bit more color on the work that you do with authors before we jump into this.
Josh: Okay. So the first thing for people to know is that I’ve been a student of writing for my entire career. I have a background both as a mathematician and a writer, and I’ve combined those two skills in everything that I did. At Forrester, where I spent twenty years, I sorta honed those ideas about writing. And based on that, I wrote a book called “Writing Without Bullshit,” which is sort of a compilation of everything that I think about clear writing. Since then, I’ve been working with authors. And, you know, AI is an incredibly useful tool. It takes all sorts of drudgery out. It’s excellent for research. It’s very good at reviewing things and summarizing them. But when it comes to stimulating people’s brains, especially young people who are learning to write, there’s a need to understand that, first of all, their writing has to come from wrestling with ideas, and that is not something that they can get by substituting AI to do their jobs for them. And then the job of the teacher interacting with them is to understand what is going on, where the weaknesses are, what they can learn from – not to turn them all into little cookie-cutter, you know, producers of the five-paragraph essay so that they can all write stuff that ChatGPT could write anyway. So I’d really rather not remove the human beings from this. That’s a step in the wrong direction.
Jim: Alright. We can hop into it. I should just mention two items. One, the very second thing I did with ChatGPT when it came out in November 2022 was I took one of the classic eleventh grade English class essay questions, which was compare and contrast – I think I said Billy Budd and one of the Conrad novels that was similar. Two of them that seemed like plausible Mrs. Kinsley-type questions. I typed it into the very original ChatGPT and said “please write an essay” comparing and contrasting them, and when I pulled it up, I said, “Mrs. Kinsley would have given this a B.” That’s pretty impressive. Now it would not have passed freshman English back in the day when they actually rigorously taught freshman English, but it was good enough to get a B in Mrs. Kinsley’s pretty rigorous eleventh grade English class. So, it’s been going on from there. Also, regular listeners know I spent a fair bit of my time in 2023 writing a very extensive movie script writing program that uses LLMs extensively, and it had an automatic mode where you could write two sentences, press a button, and two hours later get a full movie script. But let me tell you, it would suck big time.
Josh: Let me ask you – what did you learn about Billy Budd and the Conrad novel from that exercise that you did?
Jim: Nothing, because I already had read both books. That’s why I tried it.
Josh: Right? Oh, so you have nothing left to learn about them? There were no insights there?
Jim: I would say there was. The fact that those were books I had read fifty-five years ago – I’m sure it brought some things back. And today, if I were to ask the newest one, ChatGPT-4, and told it to really do this deep, I guarantee I’d learn a whole bunch of things that I’d never even thought about because it will grind away and it’ll blow your mind.
Josh: The fact is, and I wrote about this in “Writing Without Bullshit,” and there have been other writers who’ve gone into it in more detail, that the teaching of writing in school has devolved into a rote exercise with a five-paragraph essay and so on. I saw my kids go through this in high school, and it’s that way because there’s not enough time for the teachers to work with the students. And to assume that the solution to that problem is to make it possible for a machine to do the rote exercise is a step in the wrong direction, don’t you think? I mean, we want to teach people how to think, not teach them how to substitute a machine for thinking.
Jim: Yeah. Fortunately, I’m old enough that I actually got taught English – like, the real deal. Right? We wrote term papers. We wrote short essays. And I don’t ever remember hearing about the five-paragraph essay, to tell you the truth. We wrote book reports, you know, we did all kinds of stuff, and the teachers took this stuff seriously. They loved their topic, right? And they had their red pen, right? And one of my favorites was my twelfth grade English teacher, who was a stitch – a truly brilliant teacher. She had a rubber stamp that said “Superfluous And,” and whenever you got into the bullshit, she’d stamp it right at that location on the paper.
Josh: So I just wanna bring up a sort of counterpoint to what you said. One of my most important mentors was a guy named Bill Bluestein, who was probably one of the smartest analysts at Forrester Research, rose to be the president there. And if you turned in a research report, which is what we were writing there, and it came back and he wrote “fix, not good,” you knew that you weren’t even good enough to criticize. If there was red ink all over the page, you were like, “Ah, now he has engaged. He has worried about the words in here.”
I have taken that to heart. And when I work with authors, I don’t just edit the text. Every author I meet, I find a new way that people can fail. You know, they’re redundant, they can’t put a sentence together, they’re poorly organized – I could go on and on. But I learn about them through the strengths and weaknesses of their writing, and I help them to understand the strengths they may not realize they have and to understand the weaknesses that they may not realize they have. And that’s not something that I can imagine a machine doing.
Jim: We’ll talk about that further. So Josh and I decided in our little pregame chat that I was going to actually outline the original Stephen Lane op-ed, and then we’ll get into Josh’s reply, because I did do, as usual, my extensive preparation. To recap, it’s Stephen Lane’s op-ed in the Boston Globe, 11/15/2024: “AI in the Classroom Could Spare Educators from Having to Teach Writing.”
Stephen’s basic argument is something like this: the mechanics of writing – structural coherence, grammar, spelling, etc. – do not really need to be emphasized in school the way they are in traditional composition. He argues in favor of cultivating the students’ analytical and synthetic thinking. Under his view, the skill of writing is becoming less essential, and the time once spent on mechanics could be better used to deepen students’ conceptual or critical thinking, which I think is not too far off from Josh’s view.
Further, of course, a lot of us are using AI to generate suggestions of things to think about, do research for us, etc. He claims, and this is where I think there’s a big difference, that writing is secondary to the real intellectual work of analysis and synthesis. He makes a kind of utilitarian argument that there’s an opportunity cost consideration involved. The school day is limited, the number of hours that students will work on homework is limited, and if you can reduce the amount of time spent wrestling with grammar and spelling, there’s more time for other things – presumably true at some level.
And then he notes that in the professional world – in getting students prepared for the professional world, which is one job of education but not the only job – there’s going to be an expectation that people know how to use AI in writing quite extensively. I guarantee the bosses are gonna be cracking the whip; probably at Forrester, they’re making the analysts crank out a lot more content with the use of AI, and it would be unfair to the students not to teach them essentially how to do the job of writing using AI.
And he also makes a distinction, which I think you’ll disagree with: he basically draws a sharp line between the writing and the actual scholarship. I’m old enough that I remember when even in eighth grade you had to write 20-page research papers, right? With footnotes, and the encyclopedia was not considered a source, right? And all those things – you learned how to do research, you learned how to organize a thought. In eighth grade not so good, but by the time you got out of Mrs. Carr’s twelfth grade English class, goddamn, you knew how to do it. He separates those two things. Do you have a comment you wanna make here, Josh? Looks like you can hardly wait to jump in.
Josh: First of all, to say that writing consists of two elements – one is clear thinking, and the other is the mechanics of it. Right? The spelling and grammar and all of that. Those are not separate things. The ability to express a thought in a coherent way requires all of that.
I think that every bit of the challenge of conceiving things and then being criticized about it and responding to the criticism – if you look at the situation we’re in with children now, I think that there are way too many children and adults who have ceased to be able to do critical thinking, and it’s because it’s easy for most of this stuff to get done for them by having a computer take care of it.
I use AI frequently in my work. There’s nothing wrong with that, but it is a tool. You know, I use the spell-checking features too, and believe me, I’m very happy for the fact that I no longer have to figure out how to organize and format the footnotes. But I still know what it means to cite a source, and I know what it means to organize content.
I can’t resist bringing this up. I recently had an experience where I was editing an author, and the topic of the book, I can’t reveal, but it was a technical book about how to use a computer to revolutionize a process. As I started to read the third chapter of this book that I was editing, I was like, there’s something wrong here. There’s something just not right. There are very few grammatical errors in here, but it’s extremely even, and there are redundancies, and it doesn’t hang together. So I emailed the client, the author, and said, “Did you use AI to write this?” And he said, yes. So now my job as an editor is to try and take AI-written pablum and turn it into something that’s interesting. And that’s the world we are headed toward, and the world we need to protect ourselves against is one in which we believe that the output of these machines, which is grammatically correct and vacuous, actually is content that we can gain something from.
Jim: Yeah, indeed. And, you know, as to my story about our script writing program – yeah, I could take, “Oh yeah, a 25-year-old guy has an affair with a 40-year-old married woman, her husband the lawyer kills them both and gets away with it by using his lawyerly tricks.” Yes, two hours later you have a movie script complete with beautifully formatted dialogue in the official movie screenplay thing, but it sucks. Right? So instead, we had…
Josh: Because it’s a formula. You told it to follow the formula, and it faithfully did.
Jim: Hilariously, I don’t even tell it to follow the formula. I just let the language model’s knowledge of movies do it. And it varies from time to time. But in reality, what we ended up doing is breaking the process down into 40 steps that humans had to be involved in every one of the 40 steps. And if you actually did that, you could knock off a first draft of a ninety-minute screenplay in less than twenty hours, which is vastly less than a human would do.
Josh: So let me give you an example here that’s very directly relevant to my background. There’s a new feature of ChatGPT called Deep Research, which will produce a fully formatted research report for you. And Dharmesh Shah, who you may know as a cofounder of HubSpot, wrote about that. He asked it to produce a research report for him, and then looked at the output and said, “Wow, this is excellent. It’s got charts, and it’s organized well – this is really a breakthrough.” And I thought, does this mean that there’s no more use for Forrester and Gartner or McKinsey or anybody who does research reports?
But the challenge as an analyst and a researcher, the thing that you spend most of your effort on is: which sources should I believe and which shouldn’t I? What is the little tidbit that I heard from some vendor that I put together with something else that I heard from a user and say, “Wait, there’s an interesting new thing happening here that I have to draw people’s attention to.” How do I decide which statistics that I find online are trustworthy and which are bullshit? And the idea that you could substitute ChatGPT for the judgment of an analyst is bizarre.
Research reports all look the same. They all start the same way. They all have a section about collecting information. They all have an analysis section. They all have charts. They all have sources. They all have conclusions. It’s a perfect place to use an AI to generate something that looks exactly like a research report, but actually is lacking in judgment. And that, basically, in microcosm, is exactly what’s wrong with AI in writing right now – it is great at form and has no understanding of significance and insight.
Jim: That’s a great example because, you know, if I’m trying to think about the role of an analyst – as the technology executive at some large companies, one of my key roles was essentially being the inside Forrester dude and making the calls for the company on “use this, don’t use that.” I had a couple of people who were my research analysts, and they could subscribe to Gartner and Forrester and two or three of the other ones. The key to that was figuring out what was actually important in our context.
As I mentioned pre-game, I used to go up and hang out at Forrester once in a while. Some really interesting, smart people there – I might even have seen Josh if he was there in the mid-eighties. The guy who ran the place was an absolute stitch, and we got along great. The essence of it, to my mind, was to find out what’s important and home in on that. So far, I wouldn’t trust computers – you know, LLMs – to do that.
On the other hand, having done some research for business purposes – my first real inside job at a tech company was as a market research analyst, amazingly. It was a tiny company, so I was basically in charge of all research, business research, competitive research, everything else. Man, you spent a hell of a lot of time gathering the raw material. I was about to say I probably spent 60% of my time gathering raw material. I had no librarian. There was no online. There was no Google. No Google Scholar.
I could see Deep Research being a huge benefit to a Forrester analyst if you craft the question correctly, so that it gathers the data relatively comprehensively and in a non-biased fashion. It might be a little tricky to do prompt engineering on, but it could save you half or more of your work and provide you the raw material to then use your human skills. Okay, let’s go read these 25 pages and what is the center here? What is the theme? What is important? What is marketing horseshit? You know, does that make sense to you?
Josh: Yes. What an excellent use of the tool. One of the things I do with authors is I do an exercise where we try and figure out what’s the right title and subtitle for their book, which is a very personal and important element. It sets the tone for everything that follows. And I thought at one point, I said, “Alright, I wonder if a machine could do this.” So I uploaded the description of a book that a friend of mine wrote and said, “Come up with like 10 good titles for this book.” And eight of the suggestions were terrible. They were just off, or they emphasized the wrong things, or they were trite. But two of them were – I was like, “That’s really… I wouldn’t have thought of that. What do you think of this one?” And so as a brainstorming tool, that’s fine. But in the absence of the human to be able to know which of the eight suggestions are terrible and which of the two suggestions are good, you don’t get all the way to the solution.
Jim: Yeah, I think that makes a lot of sense. I use it all the time. Even in the earliest days, GPT threw out suggested names for things. And they suck mostly, but occasionally they’re right. What did Dan Dennett call them? Intuition pumps. You know, you may land on something close and go, “Oh yeah, I hadn’t even been thinking about that.” Right? And so they’re good like that. But as you say, at the state of the art so far, humans in the loop are necessary. Though I just sprung for the $20 a month pro version of ChatGPT a week ago. The next generation, the one that only the $20 a month people get access to, is noticeably better, particularly for deep reasoning and doing things like reflection. So for instance, you say, “You are a top-of-the-line name picker for the company in Sweden – I forget the name, but it does like three-quarters of the pharmaceutical names. And you’ve been doing this for twenty-five years, you’re a world authority on it, blah blah blah. Here’s our problem. Please think deeply and hard, generate some internal candidates, think about why they might be good, why they might be bad for this marketplace and this competitive dynamic, and then spit out your 10 best.” Those 10 will not suck, I guarantee you.
Josh: Yeah. Well, if you tried to do that for Viagra, it would be called Penis Up.
Jim: I’m gonna try that this afternoon. I’ll send you the results. Okay. Well, Penis Up might actually have been better. Right? You know, how about Bone of Steel? If they had paid me the $25 to come up with the name, that would have been my name.
Josh: Oh, Bone of Steel, dude. Well, it’s like, I’m sure that’s how Flomax was named. So, I mean, what a fantastic tool, but really not something that can substitute for people. I love the comment that somebody made that said, “Why are we coming up with robots to write poetry for us instead of what I really want the robot to do is to wash and fold the laundry?” Exactly.
Jim: Yeah. And I make this point – many of my episodes are about AI; not all, maybe 20% of them. And I am of the faction that believes that the technology with which LLMs are created – so-called transformer/deep learning/reinforcement learning – is probably not the golden road to artificial general intelligence and having something that will fold your laundry and make your coffee in the morning and walk your dog and all that sort of stuff. But what it does do in its domain – and this caught all of us by surprise – is deal amazingly well with language. You know, there’s a discipline called computational linguistics, which I’ve dabbled in a little bit; it’s been in existence since 1956. And hundreds, probably more, of top PhDs working in computational linguistics for sixty years produced bupkis compared to, you know, the very first free ChatGPT, in terms of being able to actually deal with a fair bit of general language and then respond in a sensible fashion. It just blew everybody’s minds. We had no idea this was possible.
Josh: But, you know, the amazing thing about a dancing bear is that it dances at all, not how well it dances.
Jim: Absolutely true.
Josh: And I don’t know. As you sort of poke at these things, you begin to see this isn’t as deep as I thought it was. I ghostwrote a book on AI called “The Age of Intent,” written for the CEO of an AI company that made customer service chatbots – you could ask them, you know, customer service questions for a company, and they would answer. And at one point, they said, alright, well, we have a special algorithm that will tell us when we need to turn things over to a human agent because the person’s getting upset. I’m like, oh, this is interesting. It can measure the emotional resonance of people. So I kept pushing them – well, how do you know when the person’s getting upset? The answer is: they use curse words, exclamation points, and capital letters. That’s what their model spit out. They looked at all these thousands of instances and said, oh, these are the cases where the people really needed to talk to a human. And I’m like, that’s a profound insight? Nah, it’s not really that surprising.
Jim: And it’s, of course, in computational linguistics, we’ve had little tools for many years that can do emotion analysis, sentiment analysis, etcetera. They’re not very good, actually, even the ones that represent hundreds of years of grad student labor are not as good as the free version of LLMs at that.
Josh: Have you ever asked an LLM to tell you a joke?
Jim: Yeah, they were terrible. Of the newest ones, I think the best humor I’ve seen so far is from DeepSeek. It’s actually a bit witty, and I don’t know why, but try it out and let me know what you think.
Josh: Things like humor and wit – in a word, I think the thing that people have as writers that AI does not have is wit. It’s when you read something and it somehow tickles the back of your mind and you think, this is interesting, this is fun. It is the human thing coming through. If you read anything that I write – and there’s like 3 million words on my blog as well as five or six books that I wrote, so there’s plenty to take away from there – people say, “I can hear your voice, Josh.”
Jim: And that’s good writing.
Josh: Well, it is – it’s because of the wit. I take things in directions that people don’t expect. I change the emotional valence of what I’m writing depending on whether I’m trying to be persuasive or provocative or to be a good explainer or whatever that is. And I don’t know. We might get to a point where AI can replicate those subtleties, but not before I’m retired.
Jim: Probably true. Well, I don’t know. Things are moving mighty fast. Mighty fast. I wouldn’t make too many predictions – unless you intend to retire soon.
Josh: Well, let me go back to the beginning. Okay? We have now a fourth grader, or we have a seventh grader, or we have a tenth grader, and we’re trying to get them to understand what logic is, how to think, and to tap into the combination of their ability to reason and their creativity. Is that something that can be outsourced to a computer? I feel like that might be one of the last things that you could outsource to a computer. Yeah, help them spell. Yeah, help them point out where the subject and the verb don’t agree. Yeah, remind them that they used the same word six times in this paragraph, and that’s not good writing. But that’s not the same as tapping into the combination of their ability to reason rationally and their creativity.
Jim: Yeah, on reasoning rationally – I think you actually make a very good point there. I mean, this has almost become a cliche – that since the calculator, we’ve been giving up cognitive skills. And I was one of those going, “God damn it, if you can’t multiply a three-digit and a two-digit number in your head, I don’t want you working at my company.” God damn it. And of course today, ain’t nobody can do that, except for a few old farts. And people say, well, you can outsource calculation.
And then it was navigation. My wife and I have done a lot of road trips, a lot of traveling. And I think I picked this up from my mother – she was the master of navigation with paper maps. She’d go to the AAA before every one of our family trips and get piles of maps and books, and she’d be sitting there giving my father detailed directions, making sure we didn’t miss any possible point of interest, any possible historical marker. And today, that whole notion is gone. And I think if the Internet were to die, think of millennials trying to make it home from work – half of them couldn’t, right?
Josh: I have to share a story right here. My father was a chemistry professor, and he loved teaching. And the time period during which he was a chemistry professor basically coincided with when people changed from using slide rules to do calculation to using calculators. And when you use – nobody remembers this – but when you use the slide rule, you’d get an answer. Let’s say you get an answer that was 1.75. But you had to think, is that 1.75 kilograms, or is that 1,750 kilograms? And that required people to have a certain sort of intuitive grounding in the calculations they were doing.
And once the calculators came in, he would get people who would say, okay, if you combine two liters of hydrogen and one liter of oxygen, then you’ll get a reaction. And as a result, you’ll generate 2,750 liters of water vapor. And it’s like, wait a minute. That doesn’t make any sense. Right? But the stuff would come out of the calculator, and there it is. So that must be right.
Jim: Yep.
Josh: That’s it. And the lesson – we’ve changed the tools completely, but the lesson remains, which is that if you’re using these tools, you have to have an intuitive idea of what’s going in and what sort of answers are coming out.
Jim: Though I have noticed people’s numeric intuition is way down from what it used to be.
Josh: Oh, I know. Well, look, I homeschooled my kids, and one of the first things we would do is mental math. They liked to do this at bedtime, if you can believe this, where I’d say, “Alright, here’s a recipe that has these ingredients. Now we’re gonna make three times the recipe. How much of this do you need? How much of that do you need?” And they would have to think about that in their head. The same was true before we would learn algebra. We would be rounding things off and doing mental math and having some idea of what the answers were supposed to be. Because I did not want them to grow up to a point where they were so completely dependent on these machines that they had ceased to be able to reason about the way the world works.
Jim: Yeah. And this is a nice digression, but now we’ll hop back to the fact that I think there is a reasonable fear that if you start to outsource logical reasoning about things like writing, you may not ever develop the ability to do logical reasoning, which would be scary as shit.
Josh: I believe passionately – I cannot prove this, but I passionately believe that writing is thinking. When I wanna figure something out, I write about it. When I have written about it, I can tell if I’ve been thinking clearly. If I see someone else’s writing, I’m gonna be like, “Okay, this makes sense” or “Oh, there’s a huge logical hole in what you wrote.” Your thinking isn’t clear. The problem isn’t that your writing isn’t clear, it’s that your thinking isn’t clear. And once we stop encouraging youths to learn to reason and write as a combination, they won’t remember how to think anymore. Now if what you want is for your nation to be competitive, it really would help if your citizens were able to think.
Jim: I will note that I suspect that’s a matter of significant individual difference. And I’ll give you a couple of examples. One of my brothers is severely dyslexic. He’s never read a book in his life that I’m aware of, and he writes only with great difficulty. And yet, he was a business phenom who built a significant company from nothing and had lots of people – 50 of them, you know, a big, nice company. He’s very smart; he could estimate jobs in his head with a little bit of chicken scratch on a piece of spiral notebook. He could orchestrate large construction projects. So he could think without writing. I would also point at myself: I can write – Josh and I have talked about this in a previous conversation. If I work really hard for a really long time, I can get my writing up to sort of workmanlike, but not very good. I’m just not a talented wordsmith on paper, but I will put my ability to think through a problem, to organize a plan of action, and to execute it against almost anybody’s. I am actually a pretty damn good thinker, even though writing is not how I typically think. So I suspect that there are lots of other people in the world, some of whom are very accomplished in the domain of thinking, who use a different approach.
Josh: And now we have computers to accommodate them. You know, spelling is a lot easier for dyslexics now that we have spelling checkers built into everything and grammar checkers and all of that. And maybe they communicate by speaking instead of writing. But they still have to write the thing that they’re gonna speak about, not necessarily every word, but they need to organize those thoughts.
Jim: Not necessarily. I mean, anyway, I’m not sure. You know, I’d say my brother never wrote down anything that he said, and he was a consummate negotiator, motivator, giver of instructions to people in exquisite detail – all right here, right in his head. I’m not as extreme as that, but, you know, he’s an interesting counterexample. So I suspect that, like anything in human capacity, it’s a…
Josh: So should the teachers have given up on teaching him writing? Or should they have accommodated his needs and figured out a way to allow him to express his creativity?
Jim: Unfortunately, they didn’t have computers in them days, and, you know, he would have been a huge beneficiary of audio interface to computers.
Josh: I mean, it’s interesting to me. You may know the story of John Chambers at Cisco who was dyslexic and very successful. And part of the reason that company embraced the idea of communication by video and teleconferencing and all of that was because he was like, “No, you shouldn’t have to do everything by email. You should be able to do this by video if we need to.”
Jim: I gotta give you a counter example. Yeah, so many examples, so many ways. Human differences are so interesting. When I took a job as a CEO of a publicly traded company, when I got in, I discovered that their preferred method of internal communications was voicemail. The worst, in my opinion, the worst possible modality. And my first two days on the job, I spent about an hour and a half listening to voicemails. Then on day three, I sent an email to the whole staff saying anyone who ever sends me a voicemail again will be fired. Please use email. If you need to speak to me, we will speak either on the telephone or in person. You can reach out to my assistant and she will be happy to book you an appointment. And I will say, they were good doobies. I never got another voicemail ever. Can you imagine listening to ninety minutes of voice mails a day? I mean, what the fuck?
Josh: I used to do these corporate writing workshops, did a whole bunch of them. And I did get contacted by a former colleague who said, “Can you help us? The way that we do everything here is instead of writing, we do PowerPoint presentations.” And I said, “No, I can’t help you.”
Now I will give you another example, which I think sort of reflects the value of writing. One of the best managed companies in the world is Netflix. And the way they do things at Netflix is unique. If you are a person at Netflix of any level and you want to have a decision made, you want things to be different, you write a research piece about that. And it's like, "I gathered the information. Here's the data I collected. Here are the possible alternatives. Here's the reason why this alternative is better than that one, and this is what I think we should do." Then you do what's called farming for dissent, which is you send that out to people. They give you comments, and you're like, "Oh, I gotta fix this part of it." There's not a meeting. At the end, that decision gets made, and you don't know – if you're, like, the junior XYZ in some department, your memo could get floated up six levels and be looked at by a senior vice president. So they hired me because they wanted to improve the quality of those memos, because their entire company ran on them. But that is an example of an organization where they basically said, if you want things to be different, you have to put it in writing. And that required people to think logically.
Jim: Yeah, it's interesting. In my own career – again, data point n equals one, so don't take this one to the bank, kids – it is true that despite the fact that I tended not to use writing as my method of thinking, about five or six times in my business career, I wrote what I have retrospectively called Rutt-grams, which were very detailed papers, typically about 15 pages long, that made some radical proposal to the company I worked for. I spent weeks on them sometimes because, again, I'm not a fluid writer. Very fluid thinker, but not a fluid writer. I made sure these things were at least workmanlike enough to carry the quality of the idea, and that the idea was almost unattackable because it was presented so logically.
Every single one of them succeeded and ended up being adopted by the company I worked for in those days – with one exception that was categorically rejected by the new idiot CEO that the company had hired. It was the most brilliant one I ever wrote. I have a mentee who I talk to every two weeks, a young man, very smart, and I've agreed to be his mentor. You know, everybody needs a mentor. I was telling him today that I walked out of that meeting – I was very much a rising star in the company, my office was literally right next to the CEO's by the choice of the previous CEO. And I basically flipped the switch in my brain, which was, "I'm out of this fucking place." And I never did another constructive piece of work. I went to meetings and just did meeting baffle-flaggery as opposed to my usual job of throwing grenades and being the madman. Three months later, I went and started my first startup. So it actually worked out great for me. But, you know, the Rutt-gram, the 15-page exquisitely thought out six-week project to steer the ship from the back – those things were critical in my career.
Josh: You could have been a success at Netflix.
Jim: Another company that does writing as a core part of its governance is Amazon. Now, very different. I mean, they're much less structured, with a kind of interestingly self-organizational style. But there are standard forms, you know. There's the one-pager, there's the six-pager, and each kind of action that's asking for a decision about anything, apparently, has to be reduced to writing.
Josh: And then if you go into a meeting, they spend the first half hour of the meeting reading what you wrote so that everybody's up to speed and then people aren't just blathering ignorantly. Despite all of the video and Instagram and all of that, text is what the Internet runs on. It's not a coincidence that the first and most powerful AIs were large language models. It's because there's such a vast corpus of text to start from. In a world where text is what the Internet runs on, if you want to have any sort of influence or be able to understand how things work, you'd better understand writing.
Jim: Yeah. That's true so far, though. The kids today – and yes, here I am, god damn it, pounding on the lawn with my rake saying "get off of my grass" – won't write and won't read beyond a relatively modest bit. And if you want to reach kids today, it's gotta be video. And it's possible that could become the principal modality of communicating ideas. That scares me because, truthfully, I don't listen to podcasts too often. And when I do, I have them transcribed and I read them. Because in audio you just can't jump back and see, wait a minute, does this follow from that? My audio memory isn't good enough to do that. Maybe some people are audio geniuses who can say, all right, down here they said x, but doesn't that contradict what they said two paragraphs before? In written form, it's really easy to do. In audio form, much harder, I would argue.
Josh: Look at the chaos of your voicemail-driven company. I just think there are these exceptions, but in business, it generally is the case that the clearer and more direct and more logical you can be in text, the more successful you can be. And, man, God help us if we turn things over to the people who give the best PowerPoint presentation.
Jim: Well, we have done that. It’s called the US government.
Josh: I just don’t believe that that is the way that the best ideas rise to the top.
Jim: Yeah. No. It's anathema in the world that I'm in that you do anything other than a final presentation in PowerPoint. People insist on thinking in PowerPoint. You can immediately tell with a certain class of management consultants. You say, okay, this person actually thought this through in PowerPoint. You know, get that asshole out of my office right now, right?
Josh: Alright. So what other problems are we gonna solve for the world today? You toss them up, and I’ll swat them down.
Jim: Oh, let's finish off on Stephen Lane. In the last bit of his essay, he proposed – probably just to provoke you – that writing should just be for the few. Lane envisions a world where writing largely fades from the mainstream curriculum, but he still sees a small group of students, you know, the student newspaper nerds, the diarists, the aspiring poets and songwriters and all that. And maybe there are advanced classes for real live writing. But for most people, they don't need that. They don't need it anymore, any more than they need to learn how to sharpen a quill with a knife.
Josh: I’m sorry, but idolizing illiteracy is a step too far for me. Okay. What’s going to happen in your life? You need to be able to understand numbers, and you need to be able to understand words. You need to be able to read a contract. You need to be able to understand the bill. You need to be able to do the calculations to understand why it might make sense to invest in x or y and why it might not. These are life skills. And, I mean, there are other life skills now. People need to know how to use a spreadsheet. They really do. But the idea that we will no longer need the skill of understanding how to read and write is – I can’t go there.
Jim: Yeah. Nor I. Nor I. If you've ever seen the movie Idiocracy – and if you haven't, go watch it – it's a projection of what happens after a few hundred years of people just getting stupider and stupider. You can easily see the trend in our society in certain ways. But anyway, well, we've actually covered a fair bit of your essay in talking about the other guy. There's one thing that we haven't talked about, which is this guy, was it Ethan Mollick, who has this idea on how to do education in writing? I never heard of the dude. I was a bad boy. I didn't do my research.
Josh: He is, I think, one of the foremost thinkers when it comes to AI and its role in the world. He is absolutely worth listening to. The way that he uses AI in his classroom is certainly groundbreaking. It’s interesting to me that if you go into his class and deliver the output of an AI, he can tell immediately and you get a failing grade. But he’s training people how to do things like, okay, use ChatGPT or Claude or whatever to generate the first draft. Now how would you make this better? What prompts would you use to make this more effective? What are the weaknesses of this, and how can you use the AI to help you? He is basically training his students in the effective use of AI as a tool, and that is a life skill that’s going to be very valuable to them throughout their careers. That’s the right thing to do.
Jim: So basically, if the Josh writing education for K-12 program was to be enacted by Josh, the dictator of the world, it would look something like that?
Josh: Yes, absolutely. It would look like that. There’s a hierarchy here. Right? There are the tools. The tools are dumb. The tools don’t have insight, but they’re useful. Then there’s the brain of the student, which has the ability to grow and change and has access to creativity, but has weaknesses. And then there is the brain of the professor that has experience and knowledge and insight into how the students think. And if the professor is teaching the students how to use the tools, that’s helpful. If the tools are teaching the students to think, then we’re on a path to the marching morons, and I’m not happy with that.
Jim: What's telling is that Mollick's approach could veer into that, right? And where's the line in this? Because what I'm hearing you say is that Mollick's approach is using the AIs very intensely in writing, but not to say, "Please write me a paper about Billy Budd" and blah blah blah. Instead you do a first draft and say, "Improve this paragraph for me." You know, take these sentences and make them of varying length – all the things that a good copy editor gets paid the big bucks to do – but don't outsource the thinking. Where's the line there? That's an interesting question.
Josh: Well, the line is where you’re using AI as a tool. When you’re using AI as a tool, that’s good. When you’re using AI as a substitute for thinking, that’s bad. It’s not so clear exactly where you’d go to draw the line between one and the other, but that’s the general principle that I think people need to use.
Jim: Lane has a pretty negative valence about writing. You know, it’s kind of mundane. It’s tedious. You know, it’s all this and that and that. What’s your response to that view about writing as a human activity?
Josh: So I want you to think about singing. You can sing off key in the shower, and that’s not gonna hurt anybody, but that’s not the peak of singing. And then you can have a virtuoso opera singer. But singing is a human experience. It has an emotional valence to it. It’s something that people use to interact with each other. I don’t want to replace singing with machines.
And that’s the way I think about writing. If you remove the opportunity to write and to apply creativity from the curriculum, people will have lost something. I can’t resist sharing this. A couple of times when my kids were in their high school age bracket, I gave classes to small groups of homeschoolers in nonfiction writing. And so I would assign them all sorts of things. Like, we went on walks in the woods, and then I asked them to write about what they experienced, sort of sensory kind of things.
And in the end, the last thing they did was a research paper, but they wrote marketing copy and all sorts of other things. The thing that amazed me most was that these kids, not having been raised in the normal school system, were so filled with creativity and energy and joy in the things that they did. And some of what they wrote was really misguided, and I needed to say, “No. No. You can’t do this, and you can do that.” But it just helped me to understand that once you remove the constraints and allow people to express their creativity and give them feedback, there’s this enormous blossoming of success that happens. And a number of them are now very successful adults.
So if that opportunity exists, it would be criminal to remove that from the educational experience.
Jim: I like that. So, basically, you can compare it to art and music as part of the development of the human soul.
Josh: That is absolutely the case. Look, my wife is an artist, a visual artist, a sculptor in natural materials, and I am a writer. She has all of this space in our house that's dedicated to her art materials. I have my little office and my keyboard and my computer monitors. But we recognize that each of us is a creative person in our own way, and it's a joyful thing to have gotten to this point in life with someone else who is also a creative person. Everybody should have the ability to indulge that side of themselves.
Jim: That’s a very humane and heartening way to think about it. Actually, of all the arguments I think we’ve discussed, that one is the strongest.
Josh: Okay. Alright. I hope I’ve convinced your listeners and maybe even you.
Jim: I don't think I needed convincing, but it's a fun conversation. Okay. I subscribe to his little daily email thingy. It's well worthwhile if you're interested in writing, and you can hook up with him at bernoff.com. Thank you very much, Josh, for a very interesting conversation.
Josh: Thanks, Jim.