Transcript of Currents 062: Stephanie Lepp Interviews Jim Rutt on Musk and Moderation

The following is a rough transcript which has not been revised by The Jim Rutt Show or Stephanie Lepp. Please check with us before using any quotations from this transcript. Thank you.

Stephanie: Welcome to The Jim Rutt Show. I am not Jim Rutt. I am Stephanie Lepp, I’m the executive producer at the Center for Humane Technology. I work with Tristan Harris; some of you may be familiar with us and our work from the Netflix documentary, The Social Dilemma. I have been a guest on The Jim Rutt Show. I came on last May to talk about deepfakes, and truth, and epistemology and a bunch of good stuff. But this time I get to play interviewer, because Jim just published a piece in Quillette called Musk and Moderation. And so I get to interview you, Jim, about the ideas that you introduce in the piece, get a bigger picture view on the whole thing, and get solution oriented about it.

Stephanie: And I guess, maybe that’s the last piece of intro I’ll give, is that there have been a gazillion takes on this situation on Elon and Twitter. And most of them are either pro-Elon or anti-Elon, or guessing what’s going to happen. Your article Musk and Moderation is one of the only takes I’ve seen that actually gets solution oriented. Because I would say that before Elon came along, probably most of us had some idea of how we wanted to change Twitter. And now that he’s here, here we are just fighting about it. And the way that we are fighting about it is kind of a manifestation of perhaps some of the problems with Twitter in the first place. But Jim, your article is very refreshing because instead of just jumping into the fray, pro-Elon, anti-Elon, you roll up your sleeves and offer some concrete suggestions for reforming Twitter, specifically with respect to moderation.

Jim: Great. Yeah, thanks for agreeing to be the guest host today on the show. And with that, I’ll go back into character of just being the guest.

Stephanie: Okay. So yeah, maybe we could just start with some context. So can you give listeners maybe just a very brief summary of what the situation is, of Elon purchasing Twitter, and a quick overview of your background moderating online communities?

Jim: Okay. Yeah, the situation is Elon first bought a little bit of Twitter, 9%. Then he quickly realized he had to be a good citizen if he came on the board, therefore he declined to go on the board. Then he said he wanted to buy it all. And then they adopted a poison pill, which was a corporate maneuver to make you put up or shut up. And then he rounded up $44 billion, including half of it his own money, another quarter borrowed, and another quarter from other parties, I think. Made a tender offer, and after a couple days of negotiations, Twitter agreed to be purchased. It’ll take four to six months for the deal to close, which is going to be annoying for us all, as we speculate on what’s going to happen and watch the controversies back and forth, because whatever’s going to happen, won’t happen right away. And it’s important for people to keep that in mind.

Jim: So that’s, I’d say, the tee up on what’s going on with Elon Musk and Twitter. And of course as Stephanie said, everyone’s coming out of the woodwork thinking it’s either the end of the universe as we know it, or the beginning of the promised land and paradise. And I suspect it’ll be neither, and there will be a lot of hard work to be done, and that’s assuming the deal goes through. And that’s the other thing that’s worth noting, there are some ways the deal can still fall apart. I’d give it a 20% chance it’ll fall apart, but for the purposes of this, we’ll assume it goes forward.

Jim: As to a quick arc on my background and why I think I might actually know a little bit about this, I actually may be the human being that’s been doing this kind of stuff the longest, more or less continuously. I went to work for a company called The Source in 1980. And it was the world’s first consumer online service. Literally much of what we have on the web today, we had online for consumers, anybody with a computer and a modem, in those days, the old beep, beep, beep, beep, beep, kind of thing that dialed in. And we had bulletin boards, email, news, shopping, stock quotes, chat services, et cetera. And we had first tens of thousands, and quickly hundreds of thousands of users. And it was text mode, only 120 characters a second, which isn’t very fast. And it cost 10 bucks an hour, which was really expensive considering those were 1980 dollars, the equivalent of at least $25 an hour today.

Jim: And you’d say, well, why would anybody do that? Well, because there was no alternative on earth. If you wanted to participate in the early days of the online revolution, it was just us initially. And fairly soon, we were joined by a company called CompuServe, and the first round of online business battles was CompuServe versus The Source, and CompuServe won and eventually acquired The Source. I worked there for a little bit less than two years, eventually got disillusioned with the incompetent bozos running the place, and left and did some startups. But while I was there, two quite relevant experiences. One, I was first the product manager for our bulletin board product called Forums, which is essentially discussion groups, kind of like forum software, quote unquote, not full on social media but a definite lineal ancestor of it.

Jim: And then we decided the forum system the company had acquired when it got its technology platform wasn’t that good, so I actually designed our second generation forum system functionally. And then I actually sat next to the programmer for about three weeks and designed the UI as well, so I actually designed the whole damn thing. And then I was the product manager for it, which also meant I was the moderator. So here I was, moderating one of the first such things in the world, and learned a fair bit. And guess what? In the same way The Source had a lot of the things we have on the web today, a lot of the issues that we have today, we had in moderating The Source forums in 1981. All the usual stuff, people fighting with each other, calling each other names. What are the limits? What can you say? We were owned by Reader’s Digest, which, for younger folks who probably don’t recall it, was a very stodgy publishing company. How they happened to own The Source is another story for another day.

Jim: But for instance, no obscene words were allowed, period. And you couldn’t even do the asterisk deal. No bad words, no George Carlin list of seven bad words and things of that sort. But on the other hand, in those days, surprisingly, it wasn’t obvious, at least initially, that racial slurs were out of bounds. This was 1980, and people would let fly with really ugly shit. And we had to make some rules: “No, you cannot say that. Here’s 20 slurs that are not allowed in the public places of The Source.” Now, you could still use them in chat or you could use them in email, but you couldn’t use them in the public places of The Source.

Jim: And then a little later, I was co-product manager for one of the very first things that would be recognizable as social media. It was called Participate, and it was a very strange thing, it was over-engineered and overcomplicated, and kind of hard for people to understand. But it quickly became one of our top products. And moderation there became much more intense than on the bulletin boards, because it was like a branching tree structure of discussions, where you could branch off any discussion and then rename it, and then continue the discussion. And then even more crazy, the inventor of this thing thought that allowing anonymous users was the way to go. Even though on The Source everybody had a firm identity locked to their credit card, so they had a real identity, when we initially launched Participate we launched it in two flavors, one anonymous and one with people’s Source identity as their user handle.

Jim: And as I predicted, the anonymous one became a dumpster fire very, very rapidly. And we turned it off after a few weeks, and continued with the one that was real name only, basically. And it was just all the moderation stuff, but now amped up even higher because of the nature of the topics people were discussing. On our bulletin boards, actually a lot of the topics were technical, a lot of the reason people were on these online systems in the early days was to talk about computers, talk about modems, talk about printers. And those aren’t nearly as controversial as an open ended conversation platform, conceptually, like Twitter. Participate was kind of like an early precursor of Twitter. And so again, saw all this stuff.

Jim: Subsequently, I’ve been a member of every online community since then, just about all the big ones. Participated in virtual communities, ran a virtual community lab as part of Thomson Technology Group, part of the Thomson Corporation, now Thomson Reuters, where we helped our business units use online community as a product to sell to their professional information communities. I literally had a four person lab that studied that. And what did we find? Moderation was the indispensable ingredient to success. Those business units that just put up an online community for their customers, it never worked. If they had a good moderator, it often worked. That was our number one takeaway.

Jim: Next step, 1989, I joined the online service called The Well, which has basically been in operation since ’85. It had moderation issues in communities, and on and on and on. And I’ve been a Well member ever since, and ended up buying a chunk of The Well when the users got together, 10 of us, and bought it. So, I’ve been dealing with that issue ever since. I was an early Twitter member, an early Facebook member, I’m an admin of two Facebook groups. So anyway, I’ve got lots and lots of experience, and lots of opinions. Yeah, sorry for getting carried away and going on a little bit there.

Stephanie: No, you have over, sounds like four decades, so pretty relevant-

Jim: 41 years. Holy shit.

Stephanie: Okay. Well then, yeah, 41 years both on the management and even development side of things, and the moderator side of things, as well as on the user side of things. So with that, yeah, let’s go into your piece. So your piece is called Musk and Moderation. It’s in Quillette, and the shift that you make, the refreshing shift I would say that you make, is from this kind of absolutist, not particularly helpful question of moderation, yes or no? To the pragmatic, very helpful question of what kinds of moderation would make Twitter a fairer and more effective marketplace of ideas? And as you point out, there’s this sad irony in that Musk himself is asking pragmatic questions of how Twitter should be reformed, and yet he’s either accused of being absolutist or celebrated for being absolutist.

Stephanie: But anyways, we’re going to go with this question of what kinds of moderation would make Twitter a fair and more effective marketplace of ideas. Now, that statement in and of itself, implies that the goal of Twitter is to be a fair and effective marketplace of ideas. But putting aside the question of whether or not that is or should be the goal, or how we would know that we’ve achieved that goal, we’ll come back to those questions in a bit. Let’s just put those aside for the moment and just ask the followup question, which is what are the different kinds of moderation, just to start out? So you lay out a typology, and so starting at the top of your typology of decorum moderation versus content moderation, and then within content moderation, there’s non-point of view and point of view. Don’t worry listeners, we’re going to get into all of it. But just starting up top, what is decorum moderation and what are some examples of it?

Jim: Decorum moderation could also be called behavioral moderation. How do we act with respect to each other, irrespective of what we’re trying to say? So, think about it as the container and the payload, the cup and the coffee. Decorum is about the nature of the cup; the coffee is what it is you’re trying to communicate. And examples of bad decorum, which many systems will moderate, some won’t, are personal attacks, racial slurs, extreme, ugly wishes. I used an example the other day, two people are arguing over sports teams and they get mad at each other, and one of them says, “I hope your house burns down on you and your kids die,” right?

Stephanie: Oh, God.

Jim: That is an example, not content really, but it’s very bad decorum. And the other point I like to make about decorum is that it can vary by site. We should not necessarily expect every site in the world to have the same decorum, in the same way we don’t expect manners to be the same in the face to face world. And manners in face to face, very similar to my concept of decorum online. An example I used in the essay is someone might be out with their friends, having drinks and providing all the gory details of the most recent dating debacle, and would probably not do that at their grandma’s Sunday dinner table that weekend. So, different things.

Jim: And to bring it back to online, Disney, aimed say at eight to 14 year olds, might have very different decorum rules and expectations than something like Twitter, which is aimed more at an adult and quasi-professional environment. So, that’s the idea of decorum. And in my years of experience, running an online community, particularly a broad one, without well articulated, clearly enforced decorum moderation is a prescription for disaster. It goes to shit every time, and pretty quickly.

Stephanie: Yes. Great, okay. So decorum is the online kind of equivalent to manners offline. Now, what is content moderation as opposed to decorum? And then what are some examples of, let’s start with non-point of view content moderation.

Jim: Yeah. And I will say this distinction is a little artificial, and the lines aren’t crisp between the two kinds of content. So forgive me for that, but it’s a place to start at least. And so content, again, is the payload. What you are trying to say. Like if I said, “I like the Washington Redskins football team,” that would be content. If I added, “And fuck you if you don’t like them,” that would be decorum, as a quick example. And so, anything that is the substance of the message, the message rather than the medium, might be another way to talk about it in some sense. Again, this isn’t crisp, but that’s the sense.

Jim: Now, into this next distinction between inherently dangerous content and point of view content, this is again, not black and white, but I think there’s relative clarity on it. Let’s start with the inherently dangerous category, or inherently bad. And that would include things like doxing, and other obvious invasions of privacy, advocacy of violence, advocacy of serious crimes. And people say, “Why do you qualify that with serious?” And I say, personally, I would not want to moderate out people’s right to talk about civil disobedience, going out and having a demonstration without a permit, or throwing eggs at the cops or something. On the other hand, you might want to draw a line at seditious conspiracy, or advocating a bank robbery or something like that.

Jim: There’s nuance in everything, that’s an important takeaway. Nothing is simple in this world, it’s all nuanced. So I include advocacy of serious crime, advocacy of violence against specific named people or groups, dangerous things like how to make a bomb, or how to make poison. Or a real live example apparently, is instructing teenagers how to get the supplies to commit suicide without your mother finding out. I would put that in the category of just inherently dangerous, bad stuff. And every system, whether it thinks it’s going to be no moderation or not, sooner or later ends up making a list of such things that it doesn’t permit on its system. I expect Elon would do so, he might be a bit more liberal than others, but he’ll have such a list.

Jim: The final one is where the controversy I think comes from, and is also, I believe, the motivation for Elon to lay out $44 billion, and even for the richest guy in the world $44 billion is real money. The final category, which is a subset of content, we have dangerous content on one side, we have point of view content on the other, which is essentially everything else. It’s where you’re trying to say something. You have a perspective, you have something you’re trying to communicate, you have a theory behind it, an ideology, or just a set of gut reflexes. Doesn’t really matter where they come from. And my perspective is that moderation should be close to nonexistent on point of view. We should not be making decisions about what is online in the public square based on the point of view.

Jim: And so, what is an example of point of view? One of the examples I give in the paper is QAnon. And I’m the first to admit, they’re a bunch of idiots. I think the chances of their ideas being true are exceedingly small. The only reason I say exceedingly small is I tend to be an agnostic type person rather than an atheistic type person. And yet they were banned in a coordinated campaign by Facebook, Twitter, YouTube and several others. And I would disagree with that, because that’s a point of view. It may be a stupid point of view, and a bad point of view, but it’s a point of view. And I also laid out three examples that I find to be wrong and bad, which people will laugh at or get mad about, which are Christianity, astrology and Marxist Leninism. I think they’re bad. And I could make arguments that at least two of them are as bad as or worse than QAnon. And yet I think it would be very inappropriate for me to ban those from Twitter, shall we say.

Jim: And in the same sense, I think it’s wrong that Twitter should ban QAnon, because who are they to decide what points of view should exist in the world? So long as they’re presented decorously, without personal attack, without doxing, without coordinated bad behavior, like trying to mob people and harass them, and those kinds of things. So, that’s I think the key move here, is let’s ignore the dangerous content category, and compare decorum to point of view when it comes to moderation. What I would argue if I were taking over Twitter is I would actually strengthen the decorum moderation. I read the Twitter rules, and the guidelines you could allocate to decorum are pretty skimpy, frankly. And it’s one of the reasons in retrospect, why Twitter is considered such a hellhole by a lot of people.

Stephanie: Is it called decorum? Or what is it called?

Jim: No, I just call it that.

Stephanie: Or do you just identify what you consider to be… Okay.

Jim: I could go through their list, though I don’t have it in front of me, and go this is decorum, that’s not decorum.

Stephanie: Okay, got it. Okay.

Jim: I think they need more decorum, and less point of view stuff, and they should be more explicit about the dangerous stuff. It’s kind of vague.

Stephanie: You mean the doxing?

Jim: Yeah, that kind of stuff.

Stephanie: The non-point of view content.

Jim: The non-point of view content. So, that’s my take on that. And I’ll give you my thoughts on why, because this is kind of controversial.

Stephanie: Well, before we go on, let me just kind of… So that listeners can follow: listeners, imagine a tree. You have moderation at the top. Two categories, there’s decorum and then there’s content. And then within content, it splits into non-point of view, which is where all the dangerous stuff goes, as Jim calls it, doxing and so on. And then there’s point of view content moderation, which is where we get into fraught territory. Okay, so now that listeners have that image in their minds, go on Jim, what’s your case?
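For readers who like to see structure as code, here is a minimal sketch of that tree in Python; the enum names and policy strings are our own shorthand for what Jim proposes, not anything from Twitter’s actual rules:

```python
from enum import Enum

class ModerationKind(Enum):
    DECORUM = "decorum"              # how you say it: slurs, personal attacks, ugly wishes
    DANGEROUS = "dangerous"          # non-point-of-view content: doxing, bomb-making, incitement
    POINT_OF_VIEW = "point_of_view"  # everything else: ideas, ideologies, claims

# Jim's proposed stance for each branch of the tree:
PROPOSED_STANCE = {
    ModerationKind.DECORUM: "strengthen and enforce clearly",
    ModerationKind.DANGEROUS: "spell out explicitly",
    ModerationKind.POINT_OF_VIEW: "moderate close to zero",
}
```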

Jim: Okay, here’s my case. And this is where people tend to jerk back and they say, “Are you serious? You’d let QAnon back on?” Or, “You’d let Trump back on? Or neo-Nazis?” And I’d say, “Yep.” But first before I let them on, I would tighten up decorum a lot. And Trump, I imagine would trip himself up over decorum rules pretty quick, we’d show him to the door. And if he doesn’t, oh well, he has a point of view. So you say, “Why would you let what you think are bad ideas into the marketplace of ideas?” And this is really, really important, is that nobody can know what ideas might be useful at some time to the human race. Or certainly nobody has the judgment to be able to make that call.

Jim: If we look at history, people are always trying to squelch the ideas at the edge. And I call it the green sprouts issue. At the edge of our farm field, there’s always some little green sprouts. Most of them are worthless, in the same way that most fringe ideas are worthless. And this is a really important analogy. I just talked to an evolutionary theorist on Wednesday about this, and he confirmed that more than 99% of mutations in biology are bad for the offspring, and don’t make them more fit. But if it wasn’t for the less than 1% that help the offspring, we’d still be bacteria. We would never evolve.

Jim: I was talking to another guy this morning, and he came up with another very nice homey analogy, which is garage bands. Most of them suck, they’re terrible. But if we banned garage bands, we’d never have any new music. And so, I think it’s hugely important that there be toleration for an open marketplace of ideas, where any idea, no matter how reprehensible anybody thinks it might be, so long as it’s presented without violating decorum and without introducing hard danger, can find its place in the marketplace. And if people adhere to the idea, it’ll get more and more of a hearing; if they don’t, it’ll go away. As do most ideas, as do most garage bands.

Jim: And so, I think this is a hugely important idea that most people don’t think about when they say, “But that’s bad, can’t we make those bad things go away?” Well, if we make the bad things go away, we’re going to inevitably kill some good things. And I have that experience. At least what I think is a good thing, I’m one of the co-founders of a group of about 25 people who started a social change organization called Game B. And it’s got some interesting and radical ideas, but it’s entirely nonviolent. It’s not even political. It’s more about how to live. And Facebook came after us. We still don’t know why; my guess is that their algorithms somehow thought we looked a little bit like QAnon, even though our content is nothing like QAnon. But for some reason, they came after us and tried to kill us, with permanent lifetime bans and no explanation. The classic Facebook, Kafkaesque, Orwellian method of enforcement.

Jim: We were very fortunate, we had friends that were loud. We got six million likes on our tweet about all this. And Facebook realized they’d made some mistake, we had some people we knew in Facebook, and it was reversed. But if it hadn’t been for the good chance that we were well connected, we would have been wiped out. How many ideas, how many green shoots that could have saved the world, have been wiped out by Facebook and Twitter’s campaign against things that they think are bad fringe ideas? So, we have to accept the bad to get the good, in the same way biology has to live with bad mutations to get the one in a thousand good mutations. And this is a core idea, which for whatever reason, isn’t deeply in circulation. And my sense is, if people get this idea, if they agree that this is indeed the case, then their libido to squelch things they think are bad will maybe reduce a little bit.

Stephanie: Great. I want to get into that idea, and I’m going to push on it a little bit. But before we get there, I do also want… So, basically what you’re making the case for is strengthening decorum on Twitter. Could we call it loosening content moderation? Strengthening decorum, loosening perhaps, content.

Jim: I would make more detailed, the dangerous content stuff.

Stephanie: [crosstalk 00:24:31] viewpoint, right. More detailed, the dangerous stuff. Yeah.

Jim: I would loosen, almost to zero, viewpoint moderation.

Stephanie: Okay. So strengthen decorum, clarify non-point of view content, and then loosen, slash maybe even let go of viewpoint content moderation. The other thing you advocate for, and I want to play this out more of what it would even look like on Twitter, and sure we can apply it to Trump and what that would look like. But before we play it out a little more, you also make the case that a moderation policy like this would need a system of enforcement and appeal. And the one that you elaborate in your piece is very, very, very detailed. So let’s not go into all the detail, but just at a high level, what is the enforcement and appeal protocol that you would recommend?

Jim: Okay. And again, this is from actual experience and lots of it, which is that people will obey rules most of the time, if they’re understandable and they’re clear. And so, it’s my view that all the platforms should rewrite their rules like the criminal law is written. When a policeman arrests you, they don’t say, “I’m arresting you for breaking the law.” They say, “Nope, I’m arresting you for spitting on the sidewalk,” or whatever it is, “Which is statute 13.4.1,” it’ll be right there on the summons. And so, the platforms need to restructure their codes into tree structures. And I further propose that the leaf node, that the final thing, the actual law, the equivalent of spitting on the sidewalk, be written in plain English and be no longer than a hundred words.

Jim: Further, whenever they moderate somebody or discipline them, they must take the actual artifact, the post that was in question, or multiple posts if it’s a pattern kind of thing, and they must say, “This is the statute, 13.4.1, and this is what you did.” Today they don’t do that, they just give you this: “You violated our terms of service.” You go, that’s real helpful, it could have been anything. So, that’s reform number one.
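A minimal sketch of what that statute tree and summons could look like in code; the 13.4.1 example and the hundred-word limit follow Jim’s description, but the class names and everything else here are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    code: str    # statute-style number, e.g. "13.4.1"
    text: str    # plain-English leaf text, no more than a hundred words
    children: list["Rule"] = field(default_factory=list)

    def __post_init__(self):
        assert len(self.text.split()) <= 100, "leaf rules must stay under 100 words"

@dataclass
class Summons:
    rule: Rule      # the exact statute allegedly broken
    artifact: str   # the actual post (or posts) in question
    penalty: str

rulebook = Rule("13", "Decorum", [
    Rule("13.4", "Personal conduct", [
        Rule("13.4.1", "No spitting on the sidewalk."),
    ]),
])
statute = rulebook.children[0].children[0]
s = Summons(statute, artifact="(the offending post)", penalty="3-day timeout")
print(f"You violated {s.rule.code}: {s.rule.text}")
```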

Jim: Number two, inevitably in this day and age, at the scale these things are operated on, some percentage of this is going to have to be algorithmic. I don’t like that, I wish it wasn’t so, but at the economic density of Twitter, they’re going to have to have some of it be algorithmic. But people should be able to appeal to a human for a quick review, five minutes, maybe even one minute. And that review should happen in 24 hours. The way it is now is just horrible. If they do have an appeals process, it can take a month. On Facebook in particular, and I’ve never had to appeal anything on Twitter so I don’t know, it could take weeks to a month, which is totally unacceptable in a real time flow of conversation.

Jim: And then finally, and this is I think huge for setting the ecosystem up correctly, I advocate a quite complicated, you can read it in the article, but I think game theoretically correct way to have a second appeal, where you can essentially put up money, say a hundred dollars is the example I give, and say, “I believe I’ve been wronged. I’m putting up a hundred dollars.” And it then goes to arbitration by a professional arbitrator, from the American Arbitration Association is what I recommend. I’ve actually used those people to arbitrate mass cybersquatting claims around domain names, and it worked really well. They’re really professional.

Jim: And the arbitrator, who is a professional who does arbitration for a living, looks at what you posted, looks at the statute you supposedly broke, 13.4.1, spitting on the sidewalk. If they find against you, you lose your hundred dollars. So, this very substantially limits the number of these appeals, because you’ve got to stake real money. And now, here’s the fun part. If you win, the platform pays you 10 times what you staked. So, I put up a hundred dollars, and I win, Facebook or Twitter pays me a thousand. And oh, by the way, this is I think radical, maybe a little excessive: I propose in the paper that you can stake up to a million dollars.

Stephanie: Oh my God.

Jim: So, I think I’m right, I’m putting up a million dollars, I win, I get 10 million. And what does that do? Think about this from a game theoretical, economist mind perspective. It means that Facebook or Twitter has to be right about 90% of the time. If they are right 91% of the time, they’ll actually make money off this. Because they’ll pay off 10 to one when they lose, but they’ll take the pot the 91% of the time that they’re right. So, they will actually make money on these cases. And that produces a game theoretic pressure that pushes them toward being right about 90% of the time.
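A quick back-of-envelope check of that game theory; the function and stakes are just a sketch of the 10-to-1 payout Jim describes, and the break-even point lands at 10/11, about 90.9%:

```python
def platform_ev_per_appeal(stake: float, p_right: float, multiple: float = 10.0) -> float:
    """Platform's expected take per staked appeal: keep the stake when it wins,
    pay out `multiple` times the stake when it loses."""
    return p_right * stake - (1 - p_right) * multiple * stake

for p in (0.85, 0.90, 10 / 11, 0.95):
    print(f"right {p:.1%} of the time -> EV per $100 staked: ${platform_ev_per_appeal(100, p):+.2f}")
# Break-even sits at p = 10/11, about 90.9%: below that the appeals pot bleeds
# money, above it the platform profits, which is exactly the pressure Jim wants.
```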

Stephanie: No, it’s nice. You reverse engineered from the goal. The goal is Twitter should be right 90% of the time. So, how do we set up the incentives so that that’s the case?

Jim: And then a final thing, to answer the objection, a very good objection, which is that a lot of people can’t afford a hundred dollars to defend their rights. So, I propose there be a market in these appeals, just as there is for personal injury law cases, where you’d post your claim: “Here’s my post, here’s the rule I supposedly broke, looking for people to back my appeal.” And anyone in this marketplace who sees it reads it and says, “Look, that looks like bullshit to me. I’ll put some money behind that.” And that goes into the pot. And then when the thing’s adjudicated, the third party staker, the person who saw it on the market, gets 80% of the win. The person whose appeal it is gets 20%, so they actually make money without putting any money up.

Stephanie: I love how elaborate this is.

Jim: And then if they lose their appeal, then the third party backer loses their money. So, everyone’s got skin in the game. At a hundred dollar minimum, it keeps the nuisance claims out. It allows even a dirt poor person without two nickels to rub together, to have justice, if other people agree that they’ve been wronged. And it basically forces the platform to be right 90% of the time, or lose their lunch.
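A hedged sketch of how the pot might split under the numbers Jim names, a 10x payout with 80% to backers and 20% to the appellant; the function name and sample stakes are hypothetical:

```python
def appeal_payouts(own_stake: float, backer_stake: float, won: bool,
                   multiple: float = 10.0, backer_share: float = 0.80):
    """Split the pot for a backed second appeal, per the rules Jim proposes."""
    if not won:
        # Everyone with skin in the game loses their stake.
        return {"appellant": -own_stake, "backers": -backer_stake}
    payout = (own_stake + backer_stake) * multiple  # platform pays 10x the pot
    return {"appellant": (1 - backer_share) * payout, "backers": backer_share * payout}

# Hypothetical: a broke appellant stakes nothing, backers put up the $100 minimum.
print(appeal_payouts(0, 100, won=True))
# -> {'appellant': 200.0, 'backers': 800.0}: the appellant pockets $200 without
#    staking a cent, and the backers get 8x their $100.
```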

Stephanie: Okay. So, let’s play this out. So let’s take this, we’re strengthening decorum, we’re clarifying non viewpoint content moderation. We’re either loosening or just letting go entirely of viewpoint content moderation. We have this kind of gamified, skin in the game, enforcement and appeal protocol. So let’s actually apply this to Trump.

Jim: Okay, let’s do it.

Stephanie: Yeah, so we let Trump back in and he inevitably breaks decorum. What happens?

Jim: Okay. So he breaks, let’s say, a decorum rule against misgendering, the equivalent of a racial slur or other thing that polite people don’t do to each other. And Trump, just because he’s Trump, decides he’s going to misgender a trans news person who had asked him an embarrassing question. And so Twitter, probably because it’s Trump, actually has a human do this. They flag him as a blue check, and rather than going through the algorithm, they have a human watch the blue check. And so they issue him a summons, which says, “Trump, you violated 13.4.1, which is knowingly misgendering a person,” or just misgendering a person. And he says, I don’t know what he says.

Stephanie: He says, “I’m going to appeal.” He says, “I’m going to appeal anyway.”

Jim: Okay, so he appeals anyway. So it goes to a human for the first free appeal, who looks at it for no more than five minutes and makes a decision. And let’s say the reviewer, the internal employee of Twitter, says, “Nope, you’re guilty.” And that goes back to Trump. And so then he can, if he wants, either take his punishment, whatever it is, or appeal again. And oh by the way, I didn’t put it in the article, but I do believe these punishments, as they are online, should be scaled. So something like that’s probably not a death penalty, it’s probably about a three day timeout kind of penalty. And so let’s say they give him a three day timeout.

Jim: And so then he says, “I’m not going to take no three day timeout. I’m rich, I’m Trump, Goddammit. I’m putting up a million dollar stake on my appeal.” Or being Trump and the sleazeball that he is, he says, “I’m going to put up a thousand dollars, and then I’m going to ask all my supporters to put up the rest of the money.” That’s how Trump would do it. He knows how to use OPM, other people’s money. He puts up a thousand, “Here’s my claim,” it goes in the marketplace. He tweets, “Hey, I put this claim in the marketplace. All my supporters, come and back it.” And they would instantly, and it goes to a million. Okay, it now goes to a third party professional arbitrator. And they, by the way, have no idea what the stake is, that’s an important part of the game. And so this professional arbitrator looks at the case-

Stephanie: Do they know that it’s Trump?

Jim: Nope, they don’t know or care. They don’t know anything. All they see is the text-

Stephanie: The offense.

Jim: And the rule it supposedly broke. And these are professional arbitrators, so the question is how do they decide? If they decide against Trump, he loses his million dollars. It actually has to be a hard stake. And there’s no-

Stephanie: Well, he loses OPM. He loses other people’s money.

Jim: Well, his friends, even worse. He causes his followers to lose-

Stephanie: Okay, this is interesting.

Jim: $999,000, and he loses a thousand. And that’s what happens. And on the other hand, if he wins, then Twitter pays out 10 times the stake, $10 million. He gets 20% of that, so $2 million, and his backers get an eight to one return on their investment.

Stephanie: All right. I think, yeah, and I love going through this because it’s such a more, I find, helpful… There’s so much fear, obviously, right now that Musk is going to reinstate Trump. But let’s have our free speech principles about us. We don’t want to just reverse engineer some policy so that the outcomes we don’t want don’t happen. We want to be principled. And then you run thought experiments. And you just came up with, while you were running this thought experiment, maybe ways to make it work, or make it less weaponizable or whatever it is, knowing that the system is going to have to evolve. But yeah, I find it very helpful to just run through the thing and see that it ends up working. And maybe Trump is kicked out for three days, over and over again, and losing people’s money over and over again, until… But then you just run into the bigger picture issue of it’s still stacked against us, stop the steal, stop the kickout.

Jim: No, actually I just had a thought.

Stephanie: That’s a bigger issue. Yeah, go for it.

Jim: I love thinking out loud. Which is, just like in the criminal law, and this is the analogy I’ve been using throughout, second offenses are punished at a higher level. For instance, burglary: first time burglary of an uninhabited house might be one year. Second offense, five years; third offense, life. And so for instance, misgendering might have in its statute, right in the words: first offense, three day suspension; second offense, 30 day suspension; third offense, lifetime ban. And so, if you write these laws, because these are laws essentially, and he continues misgendering, after the third offense he’s gone, for life or for five years or something.
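In code, a statute with escalating penalties written right into it might look something like this sketch; the durations are the ones Jim quotes, the rest is our assumption:

```python
# Escalating penalties written into the statute itself, using the durations
# Jim quotes; the dict shape and the function are an illustrative sketch.
MISGENDERING_PENALTIES = {
    1: "3-day suspension",
    2: "30-day suspension",
    3: "lifetime ban",
}

def penalty(offense_number: int) -> str:
    # Third and subsequent offenses all hit the top of the schedule.
    return MISGENDERING_PENALTIES[min(offense_number, 3)]

print(penalty(1))  # 3-day suspension
print(penalty(4))  # lifetime ban
```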

Stephanie: Right, and he wrote his own fate.

Jim: He did it.

Stephanie: He did it himself, he did it himself. And if he starts a campaign called stop the kickout or whatever, then we have bigger problems, which Twitter may or may not help us solve. There’s a bigger thing going on here.

Jim: It’s like, so what? The rules are the rules, he chose to play. Sorry, that’s just the way the machinery works. Just the way it is.

Stephanie: Yeah. And it’s transparent, and the rules apply to everyone. So, I want to actually now come back to the question of the goal of Twitter, and go from there then to other suggestions that you make for reforming the platform. But so again, this line, what kinds of moderation would make Twitter a fair and more effective marketplace of ideas? Implies that the goal of Twitter is to be a fair and effective marketplace of ideas. And I guess, yeah, so the first question is just what does that mean? What does it mean to be a fair and effective marketplace of ideas? And do you think that is the goal of Twitter, or should be the goal of Twitter, or could even be the goal of Twitter?

Jim: I think it should be and could be, and to some degree it is. I run into all kinds of new ideas. I point to a group on the web that’s been described as the liminal web. There’s an essay written by this guy, Joe Lightfoot, and it includes a bunch of people I know who I never would’ve met if it hadn’t been for the online world, Facebook and Twitter. And so, it actually does work as a marketplace of ideas to some degree today, but it’s just not a very efficient or fair one. So, I would suggest that if we can make it more efficient, more fair, more effective, then we can help with it being a marketplace of ideas, where good ideas gradually gain adherents. I now support some of the liminal web people, I give them five bucks a month on Patreon, for instance. And I never would’ve met them if it hadn’t been for Twitter and Facebook.

Jim: And so, there’s an example where their ideas have gotten some adherence from me and from lots of other people. And there are lots of other ideas I run across online. Most of them, of course, are total shit. And I never want to hear from those people again, and if they are too insistent I’ll block them. So, that’s the idea of a marketplace of ideas. Now, of course there were other issues that I didn’t get into in the article, but I actually did have in the original draft of the article, damn editors cut it, which is we do have to do something about helping bring light to what statements are true, versus what statements are false.

Jim: And this is a very fraught and difficult issue. And I don’t claim to have come even close to solving it entirely. But I did throw out a couple of loose ideas, which is to add another dimension. When you put a post on Twitter, make a tweet, then there’s comments. And that’s one dimension, which is engaging with the substance of your tweet. I would suggest adding another dimension, which is pointers to evidence. And anybody on Twitter would be able to essentially put a URL in that is a commentary on the claims in the tweet. And people could then vote on those links, as to whether they are useful support, or whether they point towards the tweet being factual or not factual. And so, then you have a second dimension, which is crowdsourced pointers-

Stephanie: Nice, that’s very post-normal science.

Jim: To third parties, and it’s a separate dimension. You got comments here, and you got pointers to support there. And I don’t know if that’s enough, I don’t think it’s the best idea, but I think it’s a start. And we do a lot of things-
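A minimal sketch of the second dimension Jim describes here, evidence pointers living alongside ordinary replies; the field names and the crude scoring are assumptions, not a worked-out design:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceLink:
    """A crowdsourced pointer attached to a tweet, separate from the replies."""
    url: str
    supports_claim: bool   # submitter tags it as supporting or disputing the tweet
    helpful_votes: int = 0
    unhelpful_votes: int = 0

@dataclass
class Tweet:
    text: str
    replies: list[str] = field(default_factory=list)            # dimension one: engagement
    evidence: list[EvidenceLink] = field(default_factory=list)  # dimension two: evidence

def evidence_score(tweet: Tweet) -> int:
    """Crude net crowd assessment of the tweet's sourcing."""
    return sum((link.helpful_votes - link.unhelpful_votes) * (1 if link.supports_claim else -1)
               for link in tweet.evidence)
```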

Stephanie: No, I think it’s interesting, because right now there’s just likes or retweets, there’s no qualification, it’s just activity. Whereas if you were able to actually qualify activity, qualify a response to something as to whether you think it’s factual or you think it’s not factual, I think that would be helpful. But just to return to the question of the goal, I want to stay here for a moment. Because part of why… I don’t think Twitter was necessarily built for this goal, let’s say, to be a fair and effective marketplace of ideas. Which doesn’t mean that it isn’t that in some ways or that it can’t become more that, but it wasn’t necessarily built with that singular goal in mind.

Stephanie: And so, if let’s say Elon or leadership were to decide, “This is actually our goal, our top priority for this technology is to make it a fair and effective marketplace of ideas,” the next question I would ask you is what are the indicators? What are the metrics that, to you, would indicate that the platform is becoming a more fair and effective marketplace of ideas? And the last thing I’ll add to the question is that right now, the marketplace, Twitter is arguably, what is it? An outrage-fueled Tower of Babel? There are some marketplaces in there, but it’s just, insert what we know from The Social Dilemma. So, the metrics might evolve, the metrics might change, but right now we’re in echo chamber land. So, what metrics would you be looking for in order to let you know that the platform is on the right track?

Jim: That’s a good question. Real basic one is, are the moderation activities falling? If the number of times, especially decorum moderation infractions, if they’re falling over time, we are building a healthier culture and ecosystem. And it’s been my experience that when you make the rules very explicit, you always enforce them, and you’re very, very clear about the enforcement mechanisms, people aren’t stupid. Not real stupid, at least. And they learn, even Trump might be capable of learning. I don’t know. But not everybody is, there are people that just have no impulse control, and you will have that. But there’s lots of other people who have better impulse control, and they will learn not to do the bad things. So, that will be one measure.

Jim: A second might be a statistical sample of, let’s say, 10,000 tweets once a month, taken at random, analyzed by graduate students, high quality, low cost labor, and scored on their utility or something like that. So you could say, all right, this is a tweet about Kim Kardashian’s ass, that’s a zero. Here’s a tweet about how to lose weight, that’s a seven. Here’s a tweet about the status of string theory, that’s a 10. Something like that, so that you could literally assess, on a statistically valid basis, the quality or the importance of the discourse. I don’t know if importance is the right word. Because if you’re talking about good restaurants, that’s sense making. Talking about your favorite-

Stephanie: I’m going to throw some messy ideas at you and see what you think.

Jim: Have at it.

Stephanie: Because if what we’re in right now is this outrage fueled echo chamber, so one thing is, is there less outrage? Or is outrage going less… What is the-

Jim: That’s a good one, you could-

Stephanie: Something around the emotional tone and what kind of-

Jim: Yeah, and they can do that with software. Free software will tell you the emotional tone. I got a package actually, that does that.
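One example of the kind of free tone-scoring software Jim mentions is VADER (pip install vaderSentiment); the package choice is ours, the transcript doesn’t name one:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
samples = [
    "I hope your house burns down on you",
    "Great thread, learned a lot about regenerative agriculture!",
]
for text in samples:
    compound = analyzer.polarity_scores(text)["compound"]  # -1 hostile .. +1 positive
    print(f"{compound:+.2f}  {text}")
# Tracking the platform-wide average over time would be one crude
# way to see whether outrage is falling.
```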

Stephanie: Okay, great. So, I would have that be part of what we’re tracking. I would also have this echo chamber, like are there wormholes being built between these echo chambers? Are some of them merging or starting to interact with each other?

Jim: You could do network analysis.

Stephanie: Yeah. Because this is all in the spirit of, yeah, what defines a truly healthy and thriving marketplace of ideas? Where the different spheres, there are different spheres but they’re connected to each other. The tone, the emotion, I’m curious about more likes from people who usually don’t like the same thing, or something like that. Or unfollows that then result in follows, or blocks that then result… How are we gauging the way that we’re starting to find common ground with each other, I guess?

Jim: There’s a bunch of things you can do. As you point out, I love that, network analysis. You can literally study the graph of messaging, and see if messages are going across the clusters at a higher rate. And while you were saying it, I had an idea for sort of a top of the food chain metric. Again, something that can be done really easily in software, there’s free software that’s pretty accurate to measure optimism versus pessimism. What would happen if the level of optimism started to rise on the platform, as you introduced these reforms? It strikes me that that would be a pretty strong indicator that you’re doing something. Right?
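A sketch of the network analysis idea using networkx; community detection by modularity and the karate club graph stand in for a real Twitter interaction graph, which we obviously don’t have here:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cross_cluster_ratio(G: nx.Graph) -> float:
    """Fraction of interactions (edges) that cross community boundaries.

    A rising ratio over time would suggest wormholes forming between
    echo chambers, per the metric discussed above.
    """
    communities = greedy_modularity_communities(G)
    membership = {node: i for i, group in enumerate(communities) for node in group}
    crossing = sum(1 for u, v in G.edges() if membership[u] != membership[v])
    return crossing / G.number_of_edges()

# Stand-in for a real snapshot of Twitter's reply/retweet graph:
G = nx.karate_club_graph()
print(f"{cross_cluster_ratio(G):.1%} of edges cross cluster lines")
```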

Stephanie: Yeah. What is the ratio they say in relationships? It should be five to one.

Jim: Optimism to pessimism?

Stephanie: To your partner, between you and your… Like five to one positive to negative. And it’s not just because that’s nice, it’s also because, what are we capable of doing together when that’s the ratio of the emotional tone of our relationship? Well then, so now I want to go back to what you were… I can’t remember where we were when we touched upon this, but do we want to completely let go of viewpoint content moderation? Oh, no. You were saying that the key idea that you wish more people understood is that we need to let the green sprouts sprout, and we can’t predict which sprouts are going to be the ones that become the amazing bands. I’m now mixing metaphors. Amazing garage bands.

Jim: Exactly.

Stephanie: Okay. So, I’m going to ask you a question, but I’m going to kind of lead into it here, so just bear with me for a sec. So, I think part of what’s challenging about free speech is that it’s both an end in itself and a means to bigger ends. It’s an end in itself in the sense that we have a yearning for self-expression. That is a human need, so it is an end in itself. It’s also a means: letting the green sprouts sprout is part of what enables our democracy. It enables collective intelligence, it enables Musk’s stated intention of extending the light of consciousness. But in order for freedom of expression to be a means to these bigger ends, in order for, let’s say, more speech to be a remedy for bad speech, we actually do need a healthy marketplace of ideas. Twitter is not, right now, a healthy marketplace of ideas. It’s this Tower of Babel situation.

Stephanie: So that means, I wonder if that means that we might have to give ourselves training wheels, we can call them. We might have to have heavier moderation right now, until these metrics of emotion and the social graph and whatever, show us that we can let those training wheels go. And so, we might have to use tactics that we don’t like to change the circumstances, in order to not have to use those tactics anymore. And in order to alleviate, I would say, the tension between free speech as an end in itself and free speech as a means to a bigger end. So the question for you is given the Tower of Babel situation that we are in, might it be the case that we do need stricter moderation? And maybe even viewpoint content moderation, in order to transform the tower into a marketplace, move it in that direction so that we can actually handle, be ready for less strict and maybe no viewpoint content moderation?

Jim: Let me ponder that for a second. I think I would start with increasing the strength, and the clarity, and the enforcement on decorum. Because when I think about what goes wrong on Twitter, it’s mostly not because people’s ideas are so heinous, though sometimes people show up with heinous ideas, it’s mostly because they start yelling at each other. It’s interpersonal problems rather than substantive problems. And I would say that based on my relatively broad experience on Twitter, though mine’s somewhat idiosyncratic. Because oddly enough, my Twitter people don’t talk about team blue, team red politics very much. They talk about regenerative agriculture, permaculture.

Stephanie: You’re in a nice little, secluded little marketplace within the Tower of Babel.

Jim: Hardcore science. And there’s a little bit of… But it gave me a sense, when I go out and [inaudible 00:49:08] around a little bit, especially during political season. It’s not generally the, for instance, Hillary… Hillary versus Trump is kind of a bad example, but let’s say Bernie versus Hillary. That got really, really ugly in 2016. People were yelling at each other, and stomping off, and making accusations, doxing, doing all the bad decorum things. And so, the conversation ended up tribalizing people who were actually fairly close in terms of their content. And I would suggest that’s a perfect example of where decorum moderation would’ve made a huge difference. People would’ve backed away from the interpersonal, the fighting for the sake of fighting, and said, “All right, at the end of the day, the differences are about this big. And truthfully, I could live with either one of them.”

Jim: Wouldn’t that have been a heck of a lot better outcome than having so many people walk away mad that they didn’t vote at all? And guess what happened? The great Cheeto got in. And so, there’s an example where decorum moderation would’ve made all the difference. And I think that more of the Tower of Babel is interpersonal relationship problems than it is supposedly heinous ideas. Not to say that heinous ideas don’t occasionally occur, and with no viewpoint moderation they might become a little bit more common, but most people actually don’t have heinous ideas. I can’t remember the last time I ran across a truly heinous idea on Twitter. Some bad ones, some stupid ones, but-

Stephanie: Yeah. And maybe this is also just a question of, do we trust the metrics we have chosen enough to let them determine the slider, let them determine whether we need training wheels before we’re ready to take them off and have no viewpoint content moderation whatsoever? Are we willing to be kind of empirical in that way about it?

Jim: Got to be, got to be. Again, this is something that comes from my work in complexity science. Our ability to predict the unfolding of a high dimensional complex system is very low. Fairly soon, you just don’t know what the hell will happen. So you have to be empirical, you have to do what we call probes on the system. Let’s say Elon Musk hires Jim Rutt and five of his friends to rewrite the moderation rules. Damn good idea, by the way, Elon. And in the meantime, the people at the Stanford Internet Observatory and the Center for Humane Technology come up with the metrics. And we put them both into place at the same time. No, actually, to do it right scientifically, we run it for 90 days without any change to moderation, and you get your metrics. Then we introduce the new moderation system, and we let it run for another 90 days, and we see if the metrics improve. If they do, we go, “Ah, we’re on the right track.”

Stephanie: Right, exactly. Great, love it.

Jim: And then we have the smart people of both teams get together and say, “Okay, our metrics are going up, but maybe if we tweak moderation in this way, they’ll go up a little faster.” So we say, all right, instead of being 10 to one on the top level, second appeal, we reduce that to five to one, because it appears there are some ways to game the system when it’s 10 to one. So, we change 10 to one to five to one, wait 90 days, see if that improved the metrics, some specific metrics that we’re looking at. And if it does we go, “That was good.” If it goes the other way we go, “Oops, we probably made a mistake. Let’s put it back up to 10.” This is what I call, in the Game B world, the theory, practice, theory loop.
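The probe loop Jim describes reduces to a simple comparison of metric windows; a toy sketch, with made-up numbers:

```python
from statistics import mean

def probe_verdict(baseline: list[float], after_change: list[float]) -> str:
    """Compare one metric across two 90-day windows, per the probe protocol:
    run a baseline, make one moderation change, rerun, then keep or revert."""
    lift = mean(after_change) - mean(baseline)
    return (f"metric moved {lift:+.3f}: keep the change" if lift > 0
            else f"metric moved {lift:+.3f}: revert and try another tweak")

# Made-up daily optimism scores, shortened from 90 days to a few for display:
print(probe_verdict([0.12, 0.10, 0.11], [0.16, 0.18, 0.17]))
```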

Stephanie: Yeah. Co-evolution between theory and practice.

Jim: Absolutely indispensable when dealing with complex systems. And oh by the way, this is an important point to make. One of my concerns about Elon, even though I’m mostly optimistic about him, is that the worlds he’s worked in, and been so unbelievably successful in, have been what are called the complicated domains. And the distinction between complicated and complex is important. And here’s an easy way to know the difference. In the complicated domain, you can take something apart and put it back together again, and it’ll still work. You can take your car apart all the way down to its pieces, put it back together again, and it would still be your car, and it would still work. Assuming your mechanic was actually competent.

Jim: But in the complex domain, it’s not just the parts, it’s their dance. It’s their dynamics that matter. For instance, you could not take your body apart and put it back together again. Your body is a complex system. You could not take the economy apart and put it back together again, it’s a complex system. Twitter is a highly complex system. It’s a constantly changing graph of connections, and every node on that graph is a strategic agent. So, it’s a classic example of a complex system. So the other thing that needs to be brought to the management of Twitter, and it’s not in Elon Musk’s wheelhouse, is the complexity lens, which gives you some epistemic humility about how much you can know, and how much you have to proceed by the empirical theory, practice, theory, practice loop. And I think it would help him a lot if he could develop or acquire that lens, essentially, as he starts to think strategically about proceeding to make changes at Twitter.

Stephanie: Great. Well, I wish we could keep going. I know you have a hard stop in six minutes, so I’m going to ask you one last question. And as a mini preamble to the last question, as you know, I think that Elon Musk is very well positioned to be a bridge from game A to game B. There is this question of how does one transition from game A to game B? There can be many ways, but one way, let’s say, is dominate game A and then transition. And he has dominated game A in multiple industries now, transportation, energy. And he happens to have this game B sensibility. The elites are a thorn in his side. For him, it’s like, “Do you really need to hold me accountable to short term profit maximization? I’m trying to get us to the stars here.”

Stephanie: So perhaps, and this is me perhaps being psychotically optimistic, but perhaps the first industry that he could transition us from game A to game B in is, lo and behold, surprise, social media. This is a total surprise to all of us. And so, I do think there is an enormous opportunity for you, specifically you, Jim Rutt, to help Elon not just with Twitter, but in all kinds of ways. And so, my last question for you is to speak directly to him, and I’ll do it too. So hi, Elon. Jim is going to tell you how he would love to help you. And he’s going to give you a way to contact him. So, go for it Jim.

Jim: Alrighty. Let’s see here, there’s a big one. Well, first I’m going to comment on your hypothesis that Elon Musk may be game B without even knowing it.

Stephanie: Tell him, don’t tell me.

Jim: Okay. I’d say there’s a big sign to me that you’re on the right track, and that’s when you sold all your houses and fancy cars. It’s a very game B move to get rid of your couple hundred million dollars worth of residential real estate and live in a $120,000, very modest house in West Texas. That’s a game B move: possessions are not what’s important about us, the shiny objects are not why we have our life here on earth. It’s to do things, make people happy, make love, go to Mars. And so, I think you’re game B and you don’t even know it yet, Elon.

Jim: And so in terms of what we could do for you, I would say not just me, I’m just one semi-retired dude, but there’s a whole group of people that have been thinking this way for a number of years. And we could help you think through what it means to use a complexity lens at Twitter, per our conversation. We should get Stephanie involved too, not just these other people. We’ll develop the metrics: CHT plus Stanford equals the metrics, and the Santa Fe Institute plus the game B crowd comes up with the interventions and the changes to moderation. And a bunch of other things, by the way, the article lays out some other things. Like, I’m with you a hundred percent, Elon: get rid of advertising, or at least make it a small percentage of your total revenue. That changes the dynamics in a very positive way, which I’d love to go into.

Jim: So we’ll put together two teams, one on metrics, one on interventions. We’ll help you think about Twitter as a complex system, whose goal is to become the collective sense making mind for the human species. And here’s the payoff for you, for the things you want: if you could get the humans to work this way at that quality, we’ll get to Mars a hell of a lot faster.

Stephanie: Oh yeah. And how does Elon get in touch with you?

Jim: Send me an email at Jim Rutt, J-I-M, R-U-T-T, @jimruttshow.com

Stephanie: All right. Well with that, thank you Jim, for inviting me to guest host on your show. For listeners who want to be in touch, you can find me, ha ha, on Twitter, @stephlepp. And yeah, Elon, I hope you solicit Jim’s wisdom in your endeavors, starting with Twitter.