The following is a rough transcript which has not been revised by The Jim Rutt Show or Layman Pascal. Please check with us before using any quotations from this transcript. Thank you.
Layman: Howdy. This is Layman Pascal. And this is the Jim Rutt Show. Today's guest is Jim Rutt. Jim's a surprisingly up-to-date old-timey network theorist, entrepreneur, complexity maven, Game B advocate, and he slings liminal podcasts like he's packing a couple of six-shooters. I've had him on the Integral Stage podcast several times. I like him a lot. And today we're going to dive into Jim's sense of the principles, possibilities, protocols and proposals that might make Twitter, or maybe any major digital discourse system, into an actual space of liberty and collective intelligence, rather than just a cesspool of culture war, corporatism, and reactionary state suppression of human sense making. We'll be talking digital etiquette and digital infrastructure. Welcome, Jim.
Jim: Hey, thanks Layman. Great to be on the show.
Layman: So a while back when Elon Musk was taking over Twitter, you wrote an article to explain some of your hard-won insights into intelligent content moderation. Recently, you took part in a "Saving Twitter" round table discussion. What's motivating you to engage on these topics? What's important here or agitating you about this?
Jim: Well, the thing that's agitating me about it is that these amazing networks that we've created, and I've been involved with them since pretty much the beginning, 1980 or thereabouts, have the potential for upgrading humanity's capacity in a major way. And in fact, those of us who were pioneers were quite sure we were doing that good work, and that by now every citizen would be impeccably informed about the truth of everything, that common sense would reign, and that our politics would be better than it had ever been.
Well, guess what, folks? We was wrong. And in some sense, I suppose I feel like I have to pay penance to see what little bit I can contribute to get things back on the right track. Not only has our collective sense making seemingly declined, but other aspects of what many of us call the meta crisis have continued to accelerate: things like population, climate change, depletion of topsoil, the sixth species extinction. And then, when you add on top of that, what should be our collective sense making seems to be driving us insane. We're not in a good place. And this is perhaps a place where we can do more about it in the short term than we can by doing things like solving climate change.
Layman: What does it look to you like Musk’s current strategic intent for Twitter is, and what should it be if you really wanted to seize this opportunity?
Jim: Yeah, that's the problem. I don't yet see a strategic intent from Musk. As I said in the article, he seems to be quite conflicted in his thinking. At times he feels like a startup entrepreneur who's trying to feel his way to a pivot to something perhaps more profitable. He talks about the X app, kind of like WeChat, with many, many Swiss army knife functionalities. And truthfully, it may well be too late in the day to do that with Twitter. There's a bunch of good analysis on why that's going to be hard. Other times he feels like a classic private equity buyout dude, where the first thing they do is fire a bunch of people, crank down on costs, raise prices, and see how much cash they can squeeze out of it. And there are his moves of firing over half his staff, the sort of half screwed up introduction of the blue check, where he tries to get people to pay eight bucks for the blue check.
Those all kind of feel like private equity-ish cash squeezing. And even worse, those two strategies are incompatible. If he actually did want to do the X app, he wants as big an audience as possible, which means free, probably. If he wants to squeeze cash, it means a more focused, more intense and more engaged user base. And he doesn't seem to be doing that either. So it's hard to say what the strategic intent is, and whatever it is, it's not clear. What I would suggest, particularly because he claims that he bought Twitter not to make money off of it, but rather to do good for humanity, is indeed to focus on making Twitter the premier sense-making platform for the human race.
Layman: There’s a question of fairness that interests me around sense making. So there’s this idea that like a monkey will pay a banana to make sure another monkey doesn’t get too many bananas. And humans seem to inherit an instinct for fairness or at least an instinct to limit unfairness to a certain level. And this partly determines whether or not our collaborative sense making succeeds or fails. Because if exchanges seem too unfair, we cross a threshold and the participants become unwilling to undertake the efforts of collective intelligence together. And yet despite having that deep ethical inheritance, let’s say, we have a social situation in which we’re locked into a culture war that seems to maintain unfairness by constantly arguing over which unfairness is most dangerous. And I hear a lot of this around the Twitter debate. And I’m curious what you think is the bigger social risk.
Is it the unfairness of a managerial liberal control system that suppresses and demonizes opinions it feels are gross, dangerous and out of touch with contemporary languaging fads? Or is it the unfairness of a single billionaire motivated by greed and whimsy, suppressing any perspectives he personally or economically dislikes? Which unfairness is least bad?
Jim: There's a question for you. That's a choice between the devil and the deep blue sea. I am quite strongly opposed to, call it, the conventional wisdom of the big platforms, which have got this soft censorship that they try to hide, but clearly they have an agenda. As I wrote in the first essay, my view is that these platforms should instead focus on decorum and truly dangerous things, like how to make a bomb or how to commit suicide, and personal attacks, specific racial slurs, et cetera, but should avoid point of view moderation. If you want to talk about Holocaust denial or flat Earth or Catholicism or Marxist Leninism or QAnon, go right ahead, would be my view. But whether the kind of soft, big platform conspiratorial attempt to trim off the edges of the discussion or the capricious whims of one billionaire is worse, I'm not sure. Neither is good. Instead, one needs a principled approach to moderation.
Layman: Yeah, I like that. I spend a lot of time with the sort of malcontents who think there's a future for integrative developmental meta theory, and there are a lot of issues in that around categorizing people into stages of development. And one of the issues that interests me the most is distinguishing between style and content, because even if you need a certain kind of cognitive complexity to generate a type of idea or a type of cultural artifact, once they're generated, they're in play for everybody. So it's not weird that a lot of demonstrations of so-called post-modern and woke values take the same general form as demonstrations of MAGA and so-called traditional values. Because the way we rank someone developmentally can't be based on the sorts of claims they espouse. It has to be based on their style of behavior and their style of processing. Now you've just been making a similar point when it comes to online content moderation: that we have to sort of pivot from policing the content of perspectives to policing the style or the decorum, that within certain basic safety frames we should be allowing all ideas but enforcing rules of etiquette.
So maybe you could give us a clearer breakdown of criminality, ideas and perspectives, and decorum as the general types of things we need to moderate differently.
Jim: Yeah, I would lay it out. Criminality for sure. Though I think one has to be careful about criminality. I would say serious criminality. For instance, advocating civil disobedience, I believe, ought to be allowed. Think of the civil rights movement, where people who were fighting for civil rights in the South did sit-ins at segregated restaurants. That was illegal and yet it was a social good. So just ruling out all advocacy of criminality? No. But I would say all criminality of a predatory variety: robbery, arson, murder, assault, et cetera. Another group is things that are inherently dangerous. What comes to mind is instructions for committing suicide so your parents don't know what you're preparing, those kinds of things, which apparently is a thing on some social media platforms. Perhaps how to make a bomb, how to make poisons to dispose of your unwanted spouse. Things that are inherently truly dangerous.
I would put decorum in another bin. What I've proposed is that there be a specific set of rules that are published about what you should and should not say with respect to politeness, engagement, being a decent human being, et cetera. Do not engage in personal attacks. Do not engage in the use of racial slurs. And optionally, well, everyone who listens to this show knows I like to say fuck a lot, right? And I think that would be a knob that one could turn on an online social media platform. If you have a platform that's oriented towards kids and families, let's say Disney, you probably wouldn't allow the famous seven dirty words of George Carlin.
But on a more adult oriented platform like Twitter, you may well allow them. You should make those kinds of decorum oriented rules and have them published, much like a criminal code with numbered sections, et cetera, that can be quoted when people violate them. But then the last bucket, which is point of view, that's where I think I do agree with Musk, or at least with what Musk says, if not what Musk does. Which is that points of view should be allowed, period, even if they're distasteful, even if they're very strong minority views, et cetera, so long as they don't violate the predatory crime, inherently dangerous, or decorum rules.
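To make the idea of a citable, numbered decorum code concrete, here is a minimal sketch, purely illustrative, of how such a rulebook might be represented so that a moderation action can quote the exact section it relies on. The section numbers, rule text, and the per-platform profanity "knob" are assumptions for the example, not anything Twitter or any other platform actually publishes.

```python
# Illustrative sketch of a numbered, citable decorum code; the section numbers and
# rule text are invented for this example, not any real platform's published rules.
from dataclasses import dataclass

@dataclass
class Rule:
    section: str   # e.g. "1.2", quoted verbatim in any moderation notice
    text: str      # plain-English rule, kept under ~100 words

RULEBOOK = [
    Rule("1.1", "Do not engage in personal attacks on other users."),
    Rule("1.2", "Do not use racial or ethnic slurs."),
    Rule("2.1", "Do not post instructions for suicide, bomb-making, or poisoning."),
    Rule("3.1", "Profanity is permitted; platforms aimed at children may disable it."),  # the tunable knob
]

def citation_for(section: str) -> str:
    """Return the exact text a moderation notice must quote for a given section."""
    for rule in RULEBOOK:
        if rule.section == section:
            return f"Section {rule.section}: {rule.text}"
    raise KeyError(f"No such section: {section}")

# A takedown notice would then cite, say, citation_for("1.2") alongside the offending post.
```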
Layman: So as a participant and as a human communicator, what’s the difference between me saying, Jim Rutt is a racist and Jim Rutt’s post sounded racist to me?
Jim: I think there is a big difference: as long as you're directing the commentary at the discourse, you are not engaging in personal attacks. And that's a fine line, and it's one that people aren't used to necessarily, but I think it's a skill that we need to develop if we're going to have a wide open public square of ideas.
Layman: You mentioned that say, something like a Disney forum might want to have different parental controls on profanity than other forums. Is that something that should be controlled by the platform and the provider or is that something that could just be interface settings that the individuals tune themselves? Why don’t I just tell it not to show me the profanity when I’m using Disney? Why does Disney make that decision or is there a reason why the platform itself should make the decision?
Jim: Yeah, I think that to the degree that these platforms are owned by private companies, which is a different topic we can talk about, they ought to be able to establish a brand profile. We don't really want to be talking about adult oriented topics on the Disney Channel. They can make that clear, right? That's their product, that's the market segment they're going after. And I don't think that a filter by itself is probably sufficient for that task. On the other hand, for a platform that's aiming to be an open square and available for pretty much all kinds of discourse like Twitter, then I think filters fit very nicely. You can filter out words you don't want to hear. And with today's AIs, you can probably even filter out topics you don't want to hear about.
Layman: One thing it seems that people find really unfair and unintelligent is the lack of clarity and restitution that occurs in shared forums when people run afoul of moderation principles, which again, they often don't know what those principles are. And we seem to have a lot of major platforms that are perversely incentivized, manipulated by governmental and corporate pressure, and managed largely by automated processes in which individual users can be banned or demonetized or have their growth limited without explanation and without a clear pathway to return. And that feels profoundly unfair as a way of moderating human participation. But what's your sense of the basic principles that would need to be in play to have a fair system for flagging, penalizing, removing and restoring contributors?
Jim: Yeah, I think this is huge, and I've been a target of an unfair banning on Facebook, which was done by an algorithm, and they reversed it within 12 hours, but only because we raised a huge stink. They basically banned all three of the Game B group admins simultaneously, and it was clearly an error. But if we hadn't been well connected and had powerful friends advocating for us on Twitter, we might never have recovered. So I personally feel the burn of this, and several of my good friends, people like Jordan Hall and Bret Weinstein, et cetera, have been unfairly zapped. And again, as you point out, not only unfairly, but in this Kafkaesque way: you have been banned, you have no right to appeal, and we're not going to tell you why you were banned. You violated our terms and conditions. Which in Facebook's case is this endless document of obscure bullshit.
As I proposed in my original essay, I would suggest the following things if we're going to have moderation that's fair. One is, as I've mentioned, the rules ought to be structured like a statute book with numbered sections, subsections and paragraphs. And so if you're going to be banned, the requirement ought to be, if it's an AI, so be it, or if it's a human, that they reference the exact post or posts that they claim to be infractions and they quote the sections of text from their regulations that they claim were violated. And I propose, just to keep the lawyerisms to a minimum, that those sections be no longer than a hundred words and written in plain English, so that people know what the claim is against them. Secondly, that you always have the right to demand a human appeal. Again, fairly often, as happened to people including myself and including Bret Weinstein, no appeal is possible. You're just dead. And I think that's absolutely wrong and probably ought to be illegal, period.
And I would suggest they should have the right to a human mediated appeal within 48 hours, where a human has to look at it and confirm or deny what the algorithm did or what some group process did. Let's say, for instance, if 20 people report a post, that could trigger an automatic ban; that's not a human mediated decision. And then finally, the most creative part, I think: I proposed that you have a third appeal, if you want to, where you put up a hundred bucks or more, in which case it goes to a third party independent arbitrator. And if the arbitrator finds for the company, you lose your $100 or whatever stake you put up. But if the arbitrator finds for you, the platform pays you 10x your stake, and I propose you could allow stakes up to a million dollars. I wrote that with Facebook in mind.
They could afford paying out 10 million occasionally because they'd also make fair bits from people who put up stakes and lost. And I also proposed, so that even people without wealth could participate, that there would be a marketplace where these claims could be syndicated. So you'd put up the claim that the platform made against you and the text that they claim violated it, and then you could get people to support you, like in a crowdfunding way, to fund the stake for your third level of appeal. And if you won, you the poster would get 20% of the win and your backers would get 80%. So you'd actually make some money in the cases where you were unfairly targeted. Now, if we assume these platforms are rational, which eventually they are, and they converge to the economics of this model, paying 10 to 1 damages when they wrongly bounce somebody, you would expect them to get it right about 90% of the time, which is probably a tolerable standard for a mass phenomenon like an online system with hundreds of millions or billions of users.
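To check the arithmetic of that staked third-level appeal, here is a minimal sketch of the expected payouts under the numbers Jim mentions: a 10x payout to the winning claimant and an 80/20 split between crowdfunding backers and the poster. The function names and the example accuracy figure are illustrative assumptions, not part of any platform's actual policy.

```python
# Expected-value sketch of the staked arbitration appeal described above (illustrative only).

def platform_expected_cost(stake: float, p_platform_wins: float, payout_multiple: float = 10.0) -> float:
    """Platform's expected net cost per appeal: it keeps the stake when it wins
    and pays out payout_multiple times the stake when it loses."""
    keep = p_platform_wins * (-stake)                          # platform collects the stake
    pay = (1 - p_platform_wins) * (payout_multiple * stake)    # platform pays 10x the stake
    return keep + pay

def split_winnings(winnings: float, backer_share: float = 0.8):
    """Split a winning payout between crowdfunding backers (80%) and the poster (20%)."""
    return winnings * backer_share, winnings * (1 - backer_share)

# At 90% moderation accuracy on a $100 stake the platform still loses about $10 per appeal;
# the true break-even accuracy for a 10x payout is 10/11, roughly 91%.
print(platform_expected_cost(stake=100, p_platform_wins=0.90))  # about 10
print(split_winnings(1000.0))                                   # (800.0, 200.0)
```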
Layman: I like the element of skin in the game that brings for both parties. And it seems like a fun suggestion when we're talking about huge corporations like Facebook and Twitter, but there's a question about what sorts of… There are mom and pop social networks, and we might want to significantly limit the liability that they face. And so what this puts me in mind of, and you mentioned this earlier, is what's the threshold of private? As an avowed Marxist, Jim, I'm sure you're familiar with the idea that social processes beyond some threshold have to become subject to different public incentives as they start to become a general utility than smaller things that might be allowed to focus just on private enrichment. So we all have access to roads and air and firefighters because we think that's part of the commons that we need generally, but perhaps the incentives we give to mom and pop shops shouldn't also be doled out to Goliaths.
Maybe Twitter's too big to be held under the general rules or maybe it isn't, I don't know. So my question there, pulling back, the meta question is: how do we even begin to have a good conversation about setting thresholds between private and public domains and between who can be penalized in what fashion?
Jim: Yeah, that's a good question. And it goes down many rabbit holes. I should say that I believe I said it in my original essay, and if I didn't, I should have: there certainly ought to be a size threshold, let's say 10 million unique visitors per month, something like that. To put that in scale, Facebook has a couple of billion unique visitors a month. Twitter has a few hundred million unique visitors a month. So we're talking about big platforms, but considerably smaller ones would qualify too. I think Reddit has about 50 million, so Reddit would also qualify; it's toward the smaller end of the platforms that would fall under these rules. But for smaller systems, I'm with you, there ought to be a different sense there. And also they serve a different purpose. They typically are not meant to be horizontal public squares. They're there for some specific community, and that specific community might have quite different values.
Let's say a Christian online service might not allow blasphemy, for instance, while I would expect a public square should allow blasphemy. So I think there should be some scale parameters. And secondly, even on the big platforms, to the degree that groups are private, like for instance Facebook groups, I don't think there's a strong case for moderation to be made there. It's really none of Facebook's business what you talk about there so long as it isn't blatantly illegal. So that's a phenomenon where I suspect that the Musk standard of, is it legal, should apply within private self-managing groups.
Layman: Do you think of Twitter as reaching a threshold of being something like a public utility?
Jim: It would be nice if there were a public utility that played the role of Twitter. I'll put it that way. I think it is large enough, and particularly with the chattering class, that it has become by default a public utility. And that's why I think that we need to think about it more carefully than if it were a private network dedicated to some specific purpose.
Layman: You mentioned a couple of names a few minutes ago. Bret Weinstein was one of them, and that strikes me personally because my girlfriend was kicked off Twitter for posting a link to the Unity 2020 campaign with no explanation and no recourse. You also mentioned Jordan Hall, who I know has got some ideas about how these sorts of systems can be reconfigured. In your round table discussion you also mentioned our friends, Tristan and Daniel. Who do you think Elon Musk should be reading or reaching out to if he actually wants to move this project forward in a way that’s useful to the human species? What sources should he be looking to?
Jim: Well, just the ones you listed; I'll throw myself in. The five of us would be a good place to start. We've been thinking about this stuff for quite a while and we have some interesting ideas, and many of us, or at least three of us, have had the experience of actually being unfairly penalized. So I think our community's a good place to start, frankly.
Layman: Musk suggested open sourcing the code for the Twitter feed. What do you think are the advantages and the risks of doing that?
Jim: I think in general it's a good idea, because I think that we could get an idea of how we're being manipulated, frankly. However, I think the potential downside is it might provide some opportunities for exploiters to game the algorithm. If you could actually read the algorithms and see how they work, it would give you at least some clues for gaming. And I think an even better idea, where I would take Musk's idea to the next level, is not just open source, but provide a marketplace in open source algorithms. So that anyone could create an algorithm that makes calls into the Twitter API and then put it on a marketplace and charge for it. A couple bucks a month, say.
So the Layman Pascal Twitter feed algorithm would be open source and you'd get two bucks a month from anybody who subscribed to it, and you could essentially do whatever you wanted within the capabilities of the underlying Twitter API. If you then have a vast marketplace of feed algorithms, you get some really nice benefits. One, you get diversity in how the system works for people. And this is where lack of diversity is a problem on networks. This is a classic complex systems network science finding: as nodes become more homogeneous, the possibility of cascades of bad things happening goes up. So adding diversity at the node level is a good thing. And then secondly, with, let's say, thousands of different feed algorithms, it becomes well-nigh impossible to game them all.
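To illustrate what a marketplace of open-source feed algorithms might look like at the code level, here is a minimal, hypothetical plugin interface: each algorithm is just a function from a user's candidate tweets to an ordered feed, and the marketplace registers and prices them. All of the names are invented for the sketch; none correspond to the real Twitter API.

```python
# Hypothetical plugin interface for a marketplace of interchangeable, open-source feed
# algorithms. None of these names correspond to the real Twitter API; all are placeholders.
from typing import Callable, Dict, List, Tuple

Tweet = dict   # e.g. {"id": ..., "author": ..., "text": ..., "created_at": ...}
FeedAlgorithm = Callable[[List[Tweet], dict], List[Tweet]]   # (candidates, user_profile) -> ordered feed

MARKETPLACE: Dict[str, Tuple[FeedAlgorithm, float]] = {}     # name -> (algorithm, price in $/month)

def register(name: str, price_per_month: float):
    """Decorator that lists a feed algorithm on the marketplace at a monthly price."""
    def wrap(fn: FeedAlgorithm) -> FeedAlgorithm:
        MARKETPLACE[name] = (fn, price_per_month)
        return fn
    return wrap

@register("layman-pascal-feed", price_per_month=2.0)
def newest_from_follows(candidates: List[Tweet], user: dict) -> List[Tweet]:
    """A deliberately simple example algorithm: newest-first, followed accounts only."""
    follows = set(user.get("follows", []))
    mine = [t for t in candidates if t["author"] in follows]
    return sorted(mine, key=lambda t: t["created_at"], reverse=True)
```

The node-level diversity Jim describes would come from having thousands of such functions in circulation, so that no single ranking rule dominates the network or can be gamed in one shot.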
Layman: It seems to me there’s a couple different kinds of censorship that inhibit collective sense making. One is sort of top-down censorship where we inhibit people by just prohibiting or blocking their expression. And the other is that we inhibit people by allowing everybody to reactively chide or confront or denounce them or create an ethos in which it’s so unpleasant they don’t feel like they can express themselves. How do we strike a sweet spot between those two kinds of censorship in order to get the best possible outcome?
Jim: Yeah, that's an interesting question and I'm not sure we know the right balance yet. I'd say decorum rules help some, and decorum rules could outlaw things like doxing or direct personal attacks. But one can still be fairly adversarial and confronting and stay within decorum rules. I suppose the best way to think about the next layer on top of that would be the filters we talked about. And I do remind people, especially on Twitter, you can block people easily enough, and it would be nice to have some more fine-grained tools than we have today. For instance, you might only want to tune somebody back somewhat; I'd only want to see 1 out of 10 responses that they make, something like that. But I think that's an interesting question, because we do want ideas to be confronted, right? The whole idea of a marketplace of ideas is that bad ideas will be replaced by better ideas, so long as there's a good faith meeting of ideas in the marketplace. And I still believe in that.
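A "tune somebody back" control like the one-in-ten example could be as simple as a per-author sampling rate applied to the feed; here is a minimal sketch, with invented names and field labels, of that kind of knob.

```python
# Sketch of a per-author "volume knob": show only a fraction of a given account's posts.
# The field names and settings are invented for illustration.
import random

def tune_down(feed, volume_by_author, default=1.0, seed=None):
    """Keep each post with probability equal to the author's volume setting
    (1.0 = show everything, 0.1 = roughly one post in ten)."""
    rng = random.Random(seed)
    return [post for post in feed
            if rng.random() < volume_by_author.get(post["author"], default)]

# Example: see everything from most people, but only about 1 in 10 posts from one account.
settings = {"hyperactive_reply_guy": 0.1}
```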
Layman: You mentioned doxing. Elon recently suspended some reporters for doxing him, which turned out to mean that their posts included links to sites that he claimed were revealing too much information about him. Now, whether he was being genuine or not, do you think moderation protocols should hold individual posters responsible for the content of links that they post to? Because it would be weird to charge a person with obscenity for pointing out the location of a library that contained a book that people felt to be obscene. So what’s the level of culpability relative to making links available?
Jim: That's a good question. In an adult oriented public square like Twitter, I would suggest that you should be able to post any link you want. Maybe you add a not-suitable-for-work signifier to it, like they do on Reddit, if it leads to a porn site or something like that. Again, on a children's site moderated for different purposes, I think you'd have a different set of rules. You could say that our rules apply to what you link to as well. With respect to the specifics of Musk and the airplane tracking website, I think frankly that is the example of the capricious king that we talked about initially, right? The capricious king making shit up as he goes along is not a good look. It is not a sensible way to proceed.
Layman: This is a good chance for me to bring up a personal grievance. A year or two ago, my lawnmower died and I decided to get an electric string trimmer. So I looked it up online, ordered one, I picked it up the next day at the local hardware store, and then I started to get deluged for the next month by ads on Google and Facebook for electric string trimmers. Now, who needs such a tool less than a person who just got one? It’s not profiling me very well. But they don’t want to profile me very well. They want to get money from string trimmer companies, so they’re more likely to get paid if they categorize me just by the general search term I used than by the precise details of where I am in the process of string trimmer ownership. It brings up a lot of issues for me around how content is targeted. What decisions and incentives are going into the categorizing, what sorts of markers are being used to put people into various categories.
Right now, the content we see on Twitter and on most major services is either the most recent content or some hybrid compromise between our previous engagement history and the platform's attempts to exploit us on behalf of its economic sponsors. There are more innovative sites that have a choice of lenses through which we might rank the content streams that are presented. But what's your sense of how we organize content and how we categorize it in order to present it to each other? What are the most useful indicators about people or posts that would inform the organization of our feeds in more useful and satisfying ways?
Jim: Yeah, that’s a huge question. And I think when we talk about making a platform a social collective intelligence, those things really come to the fore. And today’s methods are far from satisfactory. You actually sort of had two topics here. One is advertising, and the other is how do we discover, which I would say are two different things. Let me talk about the advertising first.
In my heart of hearts, I would prefer no advertising at all. I've got a strong sense that the internet went down the wrong road once the cost of networking and the cost of computing became low enough that services could become ad supported. Famously, Larry Page, in the PhD work where he laid out the PageRank algorithm, which was the underlying original secret sauce of Google, specifically complained that internet advertising creates bad incentives for search engines. And sure enough, they were one of the first to reach the point where they could make their service entirely advertising supported.
So the best way to get rid of exploitive ads: get rid of all ads, goddammit. And to give a sense of the economic feasibility of that, the average revenue per user to Twitter is a dollar a month. The average revenue per user to Facebook is $2 a month. In today's very low cost network environments, these platforms are quite affordable; one could spend on the order of $10 a month and get access to everything they wanted on the internet, I'm quite convinced. Now, that was not true 20 years ago, but the economics have progressed to a point where, ironically enough, these services were originally very expensive, then they were just barely profitable with advertising, and now competition has driven the price down so far, which is good, that they could be subscription based and be an economic barrier to relatively few people.
In terms of automated curation, and how we see what we see, that is at the heart, I think, in many ways of what a collective intelligence should do for us. And the two things are linked. To the degree that a platform is based on advertising, there's every incentive to keep you online as long as possible. To the degree that it is a subscription service, let's say I was paying Twitter a dollar a month, they have the incentive to have me online no more than I want to be online, because it costs them to have me online. So the incentives are directly reversed, and you can have some very clever new kinds of algorithms that the current Twitter couldn't even imagine. For instance, suppose I told Twitter I only want to see 10 posts a day, right? And I'll subscribe to a feed algorithm that has access to all my behavior and has a theory on how to rank the 10 things a day that I want to see, and it only fed me those.
In many ways, that would be a much better use of my time and attention, assuming that the algorithm did a good job, than the current endless feed of horseshit, some of which is interesting, some of which is pure distraction, interspersed with clickbait type ads. As to how it might pick the 10 I wanted, or more generally, if someone wanted an endless stream of stuff, how it prioritized that stuff: I think one very interesting idea, especially for people who are not the artistically performative Twitter writers, would be to do a strong follow on a small list of people.
For instance, I really like Rich Bartlett and his perspective on things on Twitter. I'd say if Rich likes something or even reads something, upvote that to go into my algorithm. If Jordan Hall does the same, if Daniel Schmachtenberger, Tristan Harris, say 10 or 12 people that I know, do the same, use those as truffle pigs, basically, to find the good stuff for me. I think that's at least one idea that's worth experimenting with. But I also very much like the idea of intentionally tuning down the window so it only gives you what it thinks is the very best stuff for you.
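The "truffle pig" idea, letting the reading and liking behavior of a short list of trusted people drive your own ranking, reduces to a simple weighted score; here is a minimal sketch, where the field names and weights are hypothetical.

```python
# Sketch of "truffle pig" curation: rank candidate posts by how many of a short list of
# trusted curators engaged with them. Field names (liked_by, read_by) are illustrative.

def truffle_pig_rank(candidates, curators, top_n=10):
    """Score each post by trusted-curator engagement and return only the top_n posts,
    which is also the '10 posts a day' idea mentioned above."""
    curators = set(curators)
    def score(post):
        liked = len(curators & set(post.get("liked_by", [])))
        read = len(curators & set(post.get("read_by", [])))
        return 2 * liked + read   # weight a like more heavily than a mere read
    return sorted(candidates, key=score, reverse=True)[:top_n]

my_curators = ["rich_bartlett", "jordan_hall", "daniel_schmachtenberger", "tristan_harris"]
```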
Layman: It reminds me of your advocacy for liquid democracy in terms of being able to draw on or put your voice behind other people’s evaluations, who you find to be high integrity, high wisdom characters.
Jim: Exactly. I hadn't thought about that before, but it's exactly the same logic, which is to use the gradient of capacity. In liquid democracy, we're assuming we proxy within an issue area to people who know more than we do, right? In the area of this kind of proxiable curation, I'm pointing to, hey, I'm not on Twitter all day, I don't want to be, but there are some people who I really respect, and if they're reading things, then I probably want to read them too, or at least I want an algorithm that takes that into very strong consideration. So yeah, I hadn't thought about that before, but the logic is very similar. And of course, when we're confronted with this many-to-many networked world of billions of people, in most cases there are people smarter than we are in any given domain. And it's, I think, very healthy to acknowledge that and provide tools to let people take advantage of people who are better at curation than we are.
Layman: When Elon took over Twitter, there was a lot of hand wringing about the possibility that he was going to let Trump back on. And it seems to me it provokes a lot of questions most people haven’t asked themselves about how they feel emotionally and ethically about the question of platforming and about the question of access in general. That’s something I wrestle with as a person who records public conversations with people. Are there people I shouldn’t be talking to? But I always fall in a particular direction on this. If it turned out that Jim Rutt has been murdering people, I would definitely still talk with him and make those talks available as long as he was pretty nice and clean in his dealings with me. So that’s my bias. I go with personal experience of decorum. I tend not to believe that people are guilty by association.
I tend not to believe that decisions about platforming are a significant source of social danger or a significant strategy for combating social danger. And if you’ll forgive the phrasing, I’m attentive to the worth of the human spirit, even if it’s entangled with demonic outcomes. And I’m curious what you think, what’s the bigger moral risk? Is it allowing so-called bad people to have access to our general information space and contaminate us by association? Or is it the suppressive, superficial, maybe useless feeling that we have to be very careful and very controlling to make sure we only communicate with and get associated with the good, pure, safe and righteous people?
Jim: Well, I'm strongly in your camp on that one. And in fact, I periodically get critiques, usually from the left, about the fact that I have a very, very broad list of people that I follow. That I follow some people who I loathe, actually, as it turns out. But that's okay. I'm interested in what they think. And I think any person that's trying to make sense of the world ought to have a wide view on what people think, even people that they loathe. So I am very strongly against this idea of de-platforming. But then there are two other issues about de-platforming that just add to the shit show nature of where we are. The first and most obvious one is once the handle of de-platforming exists, both sides or all sides are going to fight to influence it. And so then you see all this performativity of people trying to get Twitter mobs spun up, to get people de-platformed, et cetera.
And so once you have a lever like that, it becomes worth fighting for. And so it by itself increases the incivility and decreases the usefulness of the platform. And then of course, the real big one, I call it the enlightenment point of view, enlightenment meaning the 18th century thing, not that hippie shit people talk about, is: who decides, right? Who should have the power to de-platform somebody? I'm not at all clear, for the big public platforms, or even better, if there were a cooperatively owned platform, who gets to decide. I think that's too much power for anybody, to decide where the Overton window of respectable discourse ought to reside.
Layman: So your suggested moderation principles allow for in principle all points of view, partly as a point of liberty and partly as a way of sourcing the green shoots of new ideas that we can’t anticipate. But what do you say to someone who says, look, ideas are not neutral and they’re not solitary. Memes are pack animals that have real effects in the world. And if we don’t radically minimize certain kinds of expression, then we are responsible for allowing a contagious spread of ideas. In fact, all the ideas associated with that expression, even if they’re not in it, and all the ways that those ideas might trigger violence, danger, intolerance, regression. How do you respond to a critique like that?
Jim: Liberty comes with costs, and I acknowledge that there are probably memes and ideas that do some harm. But my perspective is that the program of censorship, and the fighting about censorship, and the danger of the who-decides power falling into the wrong hands, are much greater than whatever collateral damage comes from the propagation of certain ideas and memes. An example that I give, that's very similar, is the right to keep and bear arms. As a good mountain man American, I'm a very strong believer in the right to keep and bear arms, but I also acknowledge it comes at a cost, right? The United States has a higher violent crime rate than other countries, but I suggest that that's a trade worth having to be able to have the bulwark of an armed citizenry against the tyranny of government.
You look at the Black Book of Communism as an example. When Marxist-Leninists took over a country in the 20th century, approximately 10% of the population died in the ensuing massacres and chaos. Avoiding those kinds of outcomes is worth accepting a higher rate of mass shooters and things like that. And I know that sometimes sounds cold-blooded, but that's the way I believe. I believe that you have to look at the big risks and tolerate some harm from the smaller risks. And I think that applies very much to the idea of platforming or de-platforming.
Layman: Well, as a Canadian, I'm not sure the rewards are justified in the gun case as the United States is handling it. But in principle, I agree with what you're saying there. Another thing that comes to mind for me is, I wonder if there's a scale issue. Are we simply, with our digital networking systems, going too fast to privilege quality and liberty? Do we need to build in viscosity? Do we need to slow ourselves down? Do we need to organize our systems to break up hyper-fast, hyper-connected environments to generate spaces that systems theory suggests would be more likely to produce depth, innovation and flourishing?
Jim: I did, I think, mention that in the first essay. I don’t remember if I did in the second essay, but I’m a great believer that having knobs for viscosity is a good thing. One of the proposals I’ve made in the past is that when you put up a post, you might be able to optionally have a setting that says, reply guys can only reply once a day. Because one of the real problems of online discourse, this has been true for 40 years, is that the hyperactive poster with nothing better to do has an unfair advantage to hijack the discourse. While the busy person who only checks their social media once a day has much less ability to influence the discourse than the person who posts 15 comments to a post. So I think that’s a very useful example of viscosity. Also, the example I gave earlier of only getting the 10 best posts a day, and that slows things down a tremendous amount and gives you time to think about how you want to engage with these posts rather than giving 15 milliseconds to each one before you decide what to do.
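The "reply guys can only reply once a day" knob is essentially a per-user, per-post rate limit; a minimal sketch of that kind of viscosity setting, with invented class and method names, might look like this.

```python
# Sketch of a viscosity knob: limit how often any one account can reply under a given post.
# Class and method names are invented for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

class ReplyThrottle:
    def __init__(self, max_replies_per_day=1):
        self.max_replies = max_replies_per_day
        self.log = defaultdict(list)   # (post_id, user_id) -> list of reply timestamps

    def allow_reply(self, post_id, user_id, now=None):
        """Allow the reply only if the user still has quota on this post in the last day."""
        now = now or datetime.utcnow()
        cutoff = now - timedelta(days=1)
        recent = [t for t in self.log[(post_id, user_id)] if t > cutoff]
        if len(recent) >= self.max_replies:
            return False
        recent.append(now)
        self.log[(post_id, user_id)] = recent
        return True
```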
And then I think finally, an educational aspect comes into play. I've long been an advocate of saying that to do something with online content, such as to upvote it or downvote it or share it, is actually a morally consequential action. And if we could educate people that this should not just be a 15 millisecond knee jerk, but rather that when you do this, you ask yourself: is it actually good for the world for me to be putting whatever credibility I have behind this statement or this link, et cetera? I think if we do all those things, things will slow down a bit and the quality of what is created will be better. As to the other part of your comment about reducing the connectivity of the network, I agree. I think that islands of intense work on specific topics are in many ways more valuable than the giant public square most of the time.
As an example, when I was on Facebook… I'm pretty much off Facebook these days. 90% of the value in the last two or three years that I was on Facebook came from the private groups rather than from the public Facebook, which had just turned into a cesspool of bullshit in a way that was much worse than Twitter. And I think Twitter could actually benefit, and I believe I called for that in my essay, from having the equivalent of groups within Twitter, so that you can form up and collect around concepts, around affinity, around interests, et cetera, and operate at a level of higher local connectivity, but less long range connectivity.
Layman: I love the idea that a well considered thumbs up or thumbs down is your civic duty.
Jim: And your moral responsibility, even more so than civic duty.
Layman: As a purely structural issue, do you think thumbs up and thumbs down systems, as clunky as they might be, create significantly more collective intelligence than just having thumbs up options?
Jim: I think so. However, there was an experience I had back in 2013, where we put up a Reddit clone for the Game B community that had thumbs up, thumbs down, and it was not successful even though we added some really cool features to Reddit. In those days the Reddit source code was open source, believe it or not, and we added all kinds of cool new features and made it look a lot better. But I had a very strong sense that the downvote functionality in a tightly connected community had the issue that many people didn't want to post because they didn't want to be exposed to a downvote. And that has concerned me ever since.
In theory, if we were all automatons, the upvote and downvote make a lot of sense for shaping the sphere of discourse. But if it's actually stifling people from providing content because they're emotionally harmed by a downvote... I mean, I'm a cast iron son of a bitch, so it doesn't bother me in the slightest, but I know a lot of people are much less cast iron sons of bitches than I, more sensitive souls, and I can see how it would feel bad to them to get a downvote from people they cared about. That's my concern. And curiously enough, if that were the case, one might opt for upvotes and downvotes that were not publicly viewable, but the algorithm could see them, for instance.
Layman: When I'm looking for some collective intelligence on movies or TV shows that I might want to see, I tend to favor Metacritic over Rotten Tomatoes, because Rotten Tomatoes focuses more on a thumbs up, thumbs down, and Metacritic focuses more on a gradient. And it seems like, even though thumbs up and thumbs down might be better, setting aside this tendency to inhibit people's sharing, than simply having thumbs up, there's a problem there, where voting 51% in favor of something is very different than voting 99% in favor of something. And all that intelligence is missing when you just have a general in-favor or general against position. Would it be better if feedback came in the form of a gradient, or is there something uniquely good about a vote up, vote down system?
Jim: I'm with you on the gradient. In fact, in the very final version of our hack of Reddit, which I don't think we ever actually released, we replaced thumbs up, thumbs down with two sets of five-star ratings. One was for quality and the other was for importance. And I would've loved to roll that out, but the project was just losing velocity for various reasons, and we never did put it out. On Amazon, for instance, those star ratings actually are pretty useful. If something has four and a half stars, it's probably pretty good. If it has less than four, it's probably not very good. And people of course rate things one and rate things five, et cetera. So I think that something like a five-star rating system is an interesting compromise between thumbs up, thumbs down, and having to allocate points in some complicated fashion that most people aren't going to be willing to do. So I think gradients are good in general, and I think five stars is something that most of us are used to and provides enough of a gradient to be useful.
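The two five-star scales from that Game B Reddit experiment, one for quality and one for importance, can be combined into a single ranking score in any number of ways; here is one minimal, assumed scheme (a plain average of the two per-axis means), not a reconstruction of the original code.

```python
# Sketch of aggregating two five-star scales (quality and importance) into one rank score.
# This is an assumed scheme, not the original Game B implementation.
from statistics import mean

def rank_score(ratings):
    """ratings: list of (quality, importance) pairs, each on a 1-5 scale.
    Returns a single score; here simply the average of the two per-axis means."""
    if not ratings:
        return 0.0
    quality = mean(r[0] for r in ratings)
    importance = mean(r[1] for r in ratings)
    return (quality + importance) / 2

print(rank_score([(5, 4), (4, 5), (3, 3)]))   # 4.0
```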
Layman: Jim, what the hell is emergent engineering, and how could it be useful to Twitter?
Jim: Yeah, this is an idea that's coming out of the Santa Fe Institute, and I've done a podcast on it, actually. It's the idea that we engineer for emergence. Let's start with the distinction between the complicated and the complex. The complicated, I like to say, is what you can take apart and put back together again, and if you're a good quality mechanic, it'll still work. Like your car, for instance: it's got a zillion pieces, it's very high tech, but a good mechanic could take your car completely apart, put it back together again, turn the switch, and it would start. In a complex system you can't do that. For instance, if you took your body apart and put it back together again, guess what? It wouldn't work. If you took the economy apart and put it back together again, at least it wouldn't be the same as it was before.
That's for sure. And the reason is that a lot of the information in both your body and the economy is held in the dynamics of the system, not in the statics: the things that are moving continuously, the flows and loops. Your car is perfectly sound when it's turned off and just sitting there; your body is not sound if it's turned off. You turn off your body, you die. So the idea of emergent engineering is to develop engineering principles for the truly complex. Part of this is having a suitable amount of epistemic humility, as one of the things we do learn in complexity science is that the ability to foresee the consequences of your action when you intervene in a complex system is not at all obvious until you actually make the change.
The thumbs up, thumbs down thing, for instance: I was quite convinced that made a lot of sense until I actually saw it in action, and now I'm considerably more ambivalent about it. And so the discipline of emergent engineering is developing a mindset for how you do these probes and tests and think about theory-practice-theory loops, where you don't try to develop a 747 of complexity from a single point of view, because a 747 is another complicated but not complex system. Something like Twitter is an inherently complex system, and you would want to use this emergent engineering approach of having ideas, trying them, carefully measuring them, and then backing off if you're wrong, reinforcing when you're right.
Layman: It seems so straightforward. It’s hard to figure out why it’s so seldom implemented. It reminds me of the question of this ongoing relatively superficial argument between government and top-down administrative systems on the one hand and the supposed competition of the not really free market on the other. Why is it so difficult for institutions or administrations to set up internal competitive systems within themselves that can outperform the primitive and distorted external marketplace? Are a lot of people doing it and I’m just not seeing it? Or is there some inhibition to setting up more dynamic, more emergent processes within an institution?
Jim: Yeah, that's a good question. And as far as I can tell, there's almost none of it. There are a few companies, I was just reading an essay the other day about a few of them, that are using new processes internally that look a lot like internal markets, where groups within the firm have contracts with the rest of the firm to do certain things. Things are priced and there's a benefit for being more efficient at what you do, but that does not seem to fit within the command and control mindset, unfortunately. And I do think it's a huge opportunity, as we try to move from game A to game B, of developing mechanisms for controlling institutions, organizations, companies, colleges, et cetera, that have a lot more of the aspects of the dynamics of a market and yet have the high trust and unified mission of an institution. And I've been looking around, I don't see a lot of it, but I do think that's an area with a lot of potential, a lot of upside.
Layman: Seems like there’s a real identity and authenticity problem both for humans in general, but also in the domain of digital social communication tools. So we need to provide skin in the game by anchoring identity to individuals. We’ve got to weed out commercial and propagandistic bots, but maybe we also need to allow some degree of creative self-presentation, maybe strategic concealment for whistleblowers or the persecuted, maybe not. What’s the best overall approach to safe, creative, reliable identity authentication?
Jim: Yeah, this is a big question, and I'll say I have a bias that comes from my experience over the years. Which is, in general, the quality of anonymous postings is a lot worse than skin-in-the-game, full, real world identity. And on the other hand, again, this is one of these things where I have learned from experience, there are legitimate uses for at least pseudo-anonymity, where a single real world human is behind an anonymous shield. For instance, in support groups for domestic violence or even for diseases, et cetera, or as you say, whistleblowers, and perhaps most pointedly for people who live in countries where the rule of law doesn't apply. And I might include Canada there, considering how the current administration responded to the trucker strike. I'm not sure Canada fully qualifies as a liberal democracy anymore.
And if I were organizing civil disobedience like that trucker strike in Canada, I would be very tempted to do it anonymously so Trudeau and friends didn’t try to freeze my bank accounts. So I’ve become more sensitive to the needs for anonymity, but I still strongly believe that for the reason you said, skin in the game, where possible full names, full accountability ought to apply. And it’s the sense of accountability and consequence, which is what I believe is so important for making discourse better. If you’re not willing to put your name behind something, probably you shouldn’t say it.
Layman: Yeah, that trucker thing was actually a pretty serious blow to my childhood sense of Canadian patriotism.
Jim: As we talked about pre-game, I've spent a lot of time in Canada over my life. I worked for the second biggest Canadian company for many years. My wife's family had a classic cottage in Northern Ontario. I've had many, many weeks of really great times there. I've always just admired Canada and the Canadian people, liked them a lot. I was pretty shocked about that, to tell you the truth. I've always been impressed by the educational quality of the Canadians. Most Canadians know more about American politics and the American constitution than Americans do. But this shocking move to authoritarianism by the Trudeau administration disillusioned me too, I've got to say.
Layman: One of the things that comes up when we're trying to grapple with how to handle emerging technologies is the fact that we don't know what they're going to do next. We're dealing with a lot of problems around Twitter and social discourse networks that most people in the world have not caught up to. But what have we not even seen yet, or what have we only seen the beginnings of? What are the problems that are going to occur, or might occur, that will afflict something like Twitter?
Jim: Well, one that's been the talk of the town the last three months is the emergence of generative AI at a new, much higher level. ChatGPT came out in November, and I've used most of these generative tools as they've come out, and you hit the walls within a few minutes, or at most an hour, where the thing just kind of breaks down into goofy shit. You say, all right, this is interesting and will eventually be important, but it's not yet a game changer. ChatGPT is a game changer. It's amazing the quality of stuff that thing can generate. Now, it's not perfect. It hallucinates, it makes up fake facts, et cetera, but it's good enough for a lot of purposes. And as my good friend Peter Wang has said extensively, the thing to remember about ChatGPT is that it's the December 1903 version of generative technologies.
December 1903 was when the Wright brothers got their very ticky tacky little airplane off the ground for a couple of minutes at Kitty Hawk. So things like ChatGPT will be getting much better, much faster. And so I think this is the big one. Generative content is going to potentially change the landscape of our online world in ways, which I don’t think we can really foresee yet.
Layman: Yeah, my sense is even this year, 2023, we're going to see some real surprises in that area, perhaps especially as Google wades into the fray and tries to outdo its competitors. But I think just the distribution of generative AI tools is going to cause a lot of surprises in this coming year. One of the questions that comes up for me is, given the sophistication demonstrated by these chat bots, how close are we to having a system that can differentiate style and complexity in verbal comments? Like, how close are we to differentiating a terrorist's post from my philosophical reflection that quotes an ironic reference to Allah and bombs and January 6th? Can we automate the recognition of sophistication in language in a reliable way and focus on how rather than what signifiers are being used? Or is that still years away?
Jim: Truthfully, I don't know that technology very well. Obviously it exists. Facebook and Twitter could not even attempt to do the moderation that they do without substantial automation. But as you point out, there are many, many examples of people getting zinged for ironic references to bad things when they meant the exact opposite of what the algorithm thought they meant. So it's obviously not yet perfect, but like everything else in what's called the natural language processing space, NLP, I would expect these things to get better at a pretty rapid rate. Now, there's another interesting technical arms race. There is now software that claims to have some ability to detect that an artifact was written by ChatGPT, for instance. And I believe these are being offered on platforms that they sell to college professors to look for plagiarism. However, I would expect that there'd be an arms race there as well, with, in general, the generative side being a little bit ahead of the detector side.
So these things are possible, but they're never going to be perfect, and there's always going to be an arms race. One of the things you did not mention when you mentioned Google entering the fray: the other thing that's going to be very interesting is when the open source versions of these large language models get out there. They probably won't be quite as good as the OpenAIs and Googles and Facebooks, and while today the gap from being half a generation behind is big, when we get to ChatGPT 5, being half a generation behind may not even be noticeable. And the open source versions won't have the guardrails that the current systems have, which to my mind are pretty ridiculous anyway. But they'll also be able to be modified for whatever use the user wants, and not packaged in an attempted harm-reduction wrapper the way the current things are.
And in fact, you want to hear a scary idea? I was talking to some political operatives a couple weekends ago, and these are pretty serious dudes, and they're predicting that by 2024, both parties will deploy personalized chatbots, where they take the data that's known about you. In the United States, at least, you can get about 650 data points from a company called Acxiom for 10 cents, and the political parties have already bought these data sets. They train some additions as a sidecar to the LLMs, and basically there's a personal chat bot trying to get ahold of you on Twitter and via email and text messaging, et cetera, that is highly personalized so that it resonates with your personal psychology. Yikes.
Layman: Yeah, a lot of uncanny stuff coming down the pipe pretty fast. I'm a big fan of the aphorism format. I love Nietzsche's desire to really compress complex thoughts into short forms. You know, "What doesn't kill me makes me stronger" is a very provocative historical tweet. But at the same time, I also love the great Canadian Marshall McLuhan's concerns about the format of messaging. And it seems like the format of small messages that they favor on Twitter might not be an area in which people are succinctly crystallizing complex ideas, but rather they're being incited to a simplified, high speed, throwaway communication style, lacking context and nuance. Do you think there's a risk that the small window of communication on Twitter is simply too small to be a useful general sense making tool?
Jim: Now, I'm a famously long-winded motherfucker, right? And I didn't do much Twitter until they moved it up from 140 to 280 characters, right? And I can barely express a thought in 280. So I would like to see that number increased. I fiddled around with Mastodon for a couple of weeks in, I think, December, and over on Mastodon the limit is 500 characters, and you can get a lot more of an actual thought out in 500. Now, on the other hand, on Facebook you get these long manifestos from crackpots, right? And that's kind of screwy too. So I think this is a classic emergent engineering question, back to your previous topic, where platforms should experiment in this kind of space and attempt to converge to what produces the best quality discourse. I think I proposed in the essay, I'm pretty sure I did, that there be longer form artifacts.
And I know Musk has actually said he’s going to add this soon. Because if you’re active on Twitter, the new thing to get longer form crap out there is to take a screenshot of it and then attach it as an image to your post, which is about the most brain dead technical adaptation that I’ve ever seen. It sort of works, but what they really ought to do is allow you to have a “note” and then attach it to your tweet and then maybe increase the size of tweets from 280 to maybe 500 and see what happens. But I think particularly the one of allowing a note to be a sidecar to a tweet would be great for it to be able to have more in-depth discourse. They could also use some work on their threading engine, which drives you crazy if you’re trying to figure out where a tweet that just shows up in your tweet stream stands within a fairly complicated threading model that they have. So both of those would be, I think, important knobs for Elon to experiment with in the coming years.
Layman: Seems like we need to empower people to generate niches, communities, small world networks where people can develop expertise in certain areas or share an ethos untroubled by the rest of the system. But it also seems like there’s a risk there of people getting trapped in their own information tunnels. What’s your general thinking about how we set up a balance between novelty and efficiency and utility? How do we get the spaces we need without getting trapped in those spaces? What access to disturbing external content do we need, 5%, 10%? Well, yeah, what’s your general sense of that?
Jim: Yeah, that’s a great question, and you’ve laid it out well. There’s definite benefit to high-coherence, smaller networks working on specific domains. On the other hand, it’s real easy to fall into echo chambers and just hear people who believe the same things you do. And so, as I proposed in that essay, at least for the public square versions like Twitter and Facebook, and especially if we have things like customized feed engines, which could become filter bubbles even stronger than what we have today, I’d love to have a wild card built into these things. And if we’re having a marketplace in feeds, people could make this a feature of their feeds, which says 5% or 10% wild-ass shit that you’d never go looking for in a million years.
I suppose I would actually like to have a little bit of flat earth in my feed, or a little bit of QAnon. Not that I follow anybody in those spaces, but it would probably actually be useful for me to see a little bit of that. And then if I decided I wanted to drill down that rabbit hole for whatever peculiar reason, I could. So I do believe that we all need to have a little bit of novelty in our feeds so that we don’t overly collapse. And at a higher level of analysis, I believe there’s a significant and useful tension between diversity and coherence. Both are valuable, and getting the balance right is going to be a key exercise for us all in the coming years.
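As a rough illustration of the wild-card feature Jim describes, here is a minimal sketch of blending a user’s normal ranked feed with a small fraction of out-of-bubble items. The function names, the 10% figure, and the item labels are assumptions for illustration, not any platform’s actual algorithm.

```python
# A hypothetical "wild card" feed blender: roughly `wildcard_fraction` of
# slots are filled with content the user would never go looking for.
import random


def blended_feed(ranked_items, out_of_bubble_items, wildcard_fraction=0.10, seed=None):
    """Interleave a ranked feed with occasional out-of-bubble wild cards."""
    rng = random.Random(seed)
    feed = []
    wild = list(out_of_bubble_items)
    for item in ranked_items:
        # Before each regular item, occasionally slot in a wild card.
        if wild and rng.random() < wildcard_fraction:
            feed.append(wild.pop(rng.randrange(len(wild))))
        feed.append(item)
    return feed


# Usage: about 1 in 10 slots becomes novelty from outside the user's bubble.
feed = blended_feed(
    ranked_items=["post_a", "post_b", "post_c", "post_d"],
    out_of_bubble_items=["flat_earth_thread", "qanon_explainer"],
    wildcard_fraction=0.10,
    seed=42,
)
```

In a marketplace of feeds, the wildcard_fraction would be a user-visible setting rather than a hidden constant.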
Layman: It’s a bit worrying for me to see government, military and intelligence services defining things like misinformation and disinformation. They’ve always tried to control the cultural conversation space, but it feels like we’re swaying into a 1984 scenario as a kind of allergic reaction to the possibilities of free information flow. But I think this raises an interesting question about moderation rules, which is: okay, we don’t allow dangerous predatory crimes. We do allow upsetting ideas. We do enforce rules of decorum. What do we do with misinformation, which is factually incorrect, or disinformation, which is factually correct but maybe has a misleading context? Is that something we regulate, we remove, we flag? What’s a smart way to approach that kind of stuff?
Jim: Yeah, I think that this is an issue, but I’m like you, I feel a very negative vibe when I think about the government getting involved. The United States set up a disinformation department within Homeland Security. Fortunately, as soon as the light of day went on that, they folded it. Now, I think that was a good thing. On the other hand, there are idiots on one side and bad actors on the other, who are putting out both misinformation and disinformation. I think the right answer there is a community immune system. I will give Musk credit for investing in the community notes idea, where people can actually annotate things that are posted. Presumably at some point some kind of vote from the annotations will come up, which will suggest whether the community at large thinks this is good information or bad information.
I think approaches like that are probably better, almost surely better, than top-down approaches, but we do need to do something about that. On the other hand, with the emergent engineering perspective, one should also think about how these community notes platforms will be gamed, right? Will there be crowds of people who will try to manipulate the community notes by claiming that perfectly reasonable stuff, but from their “enemies,” is disinformation or misinformation? So you’ve got to think through that, have to try it, and we’ll have to adapt, I think, to find a community-sourced curation that’s actually useful. But I would take my risks on that journey rather than delegating it to Pierre Trudeau and his friends.
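One plausible defense against the gaming Jim worries about is to require agreement across different viewpoint clusters before a note surfaces, so a single coordinated crowd cannot push a note through on its own. The sketch below is a simplified illustration of that idea; the clustering input, thresholds, and function names are assumptions, not the actual Community Notes algorithm.

```python
# A hypothetical cross-cluster test for a community-notes-style immune system.
def note_surfaces(ratings, min_ratings=5, min_helpful_share=0.7, min_clusters=2):
    """ratings: list of (rater_cluster, is_helpful) tuples for one note.

    The note only surfaces if enough raters marked it helpful AND raters
    from at least `min_clusters` distinct viewpoint clusters agree.
    """
    if len(ratings) < min_ratings:
        return False
    helpful_clusters = [cluster for cluster, is_helpful in ratings if is_helpful]
    helpful_share = len(helpful_clusters) / len(ratings)
    distinct_agreeing = len(set(helpful_clusters))
    return helpful_share >= min_helpful_share and distinct_agreeing >= min_clusters


# Usage: a note brigaded by one faction fails the cross-cluster test.
brigaded = [("faction_a", True)] * 20
bridging = [("faction_a", True)] * 10 + [("faction_b", True)] * 8 + [("faction_b", False)] * 2
print(note_surfaces(brigaded))   # False: only one cluster endorses it
print(note_surfaces(bridging))   # True: helpful across clusters
```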
Layman: A lot of the systems that we’re embedded in now seem to be optimizing for engagement, and there are some problems with that. But if engagement is not the ideal metric, what should the system in general be optimizing for?
Jim: Engagement is terrible. As I said, the fact that we’re in an advertising model forces engagement to be the metric. I mean, if you own a store, the last thing you want to do is put shopping carts in the door so people can’t get in your store to sell them stuff. And if what you’re selling is advertising space, you want as many people online as long as possible. In terms of what would be a better metric, a guy named Joe Edelman has long promoted one, and he even has some good ideas on how to measure it. It’s what he calls time well spent, and particularly retrospective time well spent.
When you look back at how you spent your time over the last month, which I think is the period he suggests, how valuable do you think that was for you? I would love for something like a Twitter or a community owned platform to basically query its members monthly and tell them how much time they spent. Okay, you spent 22 hours on platform X this month; on a scale of one to a hundred, was that time well spent or not? Or a five-star rating. So I think that’s a really powerful idea, and people should check out Joe Edelman’s work on time well spent. It’s actually pretty subtle, and he makes a very strong argument that if you nudge people towards spending time that they retrospectively, looking back, believe was well spent, that could be an extraordinarily powerful algorithm for society as a whole and not just for social media. But we could start it there.
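Here is a minimal sketch of what that monthly retrospective check-in and the resulting platform-level metric could look like. The field names, the 1-to-100 scale, and the hours-weighted average are illustrative assumptions, not Joe Edelman’s actual instrument.

```python
# A hypothetical retrospective "time well spent" metric: query members monthly
# and optimize for how well spent their time felt, not for raw hours.
from dataclasses import dataclass


@dataclass
class MonthlyCheckIn:
    user_id: str
    hours_on_platform: float
    well_spent_score: int  # 1 (regretted) .. 100 (time well spent)


def platform_time_well_spent(checkins):
    """Hours-weighted average score: the optimization target instead of engagement."""
    total_hours = sum(c.hours_on_platform for c in checkins)
    if total_hours == 0:
        return None
    return sum(c.hours_on_platform * c.well_spent_score for c in checkins) / total_hours


# Usage: a platform would tune its feed to raise this number, not time on site.
score = platform_time_well_spent([
    MonthlyCheckIn("alice", 22.0, 64),
    MonthlyCheckIn("bob", 5.5, 81),
])
```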
Layman: That’s fun. I could get in on retroactive time well spent. That’s a promising idea. I’m wondering what you think are promising systems or platforms. Are you using some things that you feel are great? Are you hearing about some projects that you are excited about? What’s making you enthusiastic about the next cycle of platforms and systems online?
Jim: I will say there’s a lot going on right now with smaller platforms, most of them aimed at a bespoke community. There’s a project called Wico, which is very interesting. There’s a project called Hilo, which is another quite interesting one. There’s another one called Coordinape, which is for small groups to compensate each other, assuming there’s funding for a project; it’s a self-organizing management platform. There’s another one called MetaGame. I just talked to the founder of that the other day for the second time. Some very interesting ideas on how a community can self-organize using ideas similar to, though not identical to, sociocracy, so that they can build out a quite large community of people working on large scale projects and still have coordination at a pretty high level without command and control. I think there’s more interesting stuff going on out in the green shoots right now than I’ve seen for a while. But I would also say that none of the ones I’m aware of are aiming to become a new public square. So I’d like to see some green shoots activity there as well.
Layman: I’m going to interview the Wico guys in a couple of weeks on the Integral Stage, which is weird, and this is weird for me too, because I’m basically a religion and spirituality guy. But this is pretty exciting stuff, and I think we need to be putting a lot more of our attention and intelligence into understanding the dynamics of digital online systems, because that’s going to determine not just the layout of society and economy, but also the way we shape our brains through the use of the technology going forward.
Jim: Yeah, 100% agree. And I do think that there’s been enough learning, and frankly enough negative examples, over the last 15 years that this is now stimulating people who are really seriously attempting to do something better. And I particularly like the Wico guys; as a matter of fact, I’m actually providing a little support to their activity.
Layman: Well, I’m out of questions man. Is there anything else we should discuss on this topic?
Jim: Well, you did a good job of covering the waterfront. Actually, it’s been great chatting with you. And I had a new idea or two from the conversation.
Layman: Well, thanks for being on the show, Jim.
Jim: Alrighty. I guess we’ll wrap it there. Thanks for doing a great job hosting.
Layman: Cheers, man. Good talking with you.