Transcript of Episode 81 – Renée DiResta on Social Media Warfare

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Renée DiResta. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Renee DiResta. Renee is the technical research manager at the Stanford Internet Observatory, a cross-disciplinary program of research, teaching and policy engagement for the study of abuse in current information technologies. Renee investigates the spread of malign narratives across social networks, and assists policymakers in devising responses to the problem. Renee studies influence operations and computational propaganda in the context of pseudoscience, conspiracies, terrorist activities, and state-sponsored information warfare. She has advised Congress, the State Department, and other academic, civil society, and business organizations on these topics. Quite timely, to say the very least. Welcome.

Renee: Thanks for having me, Jim.

Jim: Yeah, and I understand you’re a new mother. In the same way, as people listening know, I’m a new granddad. Yeah.

Renee: Yeah, it’s exciting. Yay babies.

Jim: Yeah. That’s a yes to babies. It gives us a reason to fight for the future, right?

Renee: Exactly.

Jim: Exactly. All right. Before we get going here on the body of the show, let me do a non-ad. The reason I say a non-ad is because nobody’s paying me to say this; I just like to call out things worth watching. For sure, the new movie The Social Dilemma on Netflix. This documentary drama explores the dangerous human impact of social networking, with tech experts sounding the alarm on their own creations. My friend Tristan Harris of Humane Tech, a previous guest on the show, has a significant role in the movie. Renee just told me she’s in it too. Folks, watch it, The Social Dilemma on Netflix. This is important stuff.

Jim: This is a theme we’ve come back to again and again on the show. Previous episodes on the topics we’re going to talk about today include Tristan Harris back a ways; Steven Levy, where we talked with him about his great new book about Facebook; and most recently, Philip Howard, where we dug into the research he and his people at Oxford have done on paid manipulators of disinformation on the internet. Renee comes to us from the Stanford Internet Observatory, as we mentioned. Let’s start with: what do you all do over there?

Renee: Yeah. It’s a great question. We are a relatively new center within the Cybersecurity Policy Center at Stanford, started by Alex Stamos, who was Facebook’s Chief Security Officer. We have three main areas of work. We do forensic analysis of attributed influence operations. What that means is, when there is a data set in the world that is linked to a bad actor, and we can talk a little bit about attribution and what kinds of actors those are as we chat, we look at: what were the tactics, techniques and procedures that that actor used to execute that influence operation? How did they do it? Why did they do it? What was the goal? How successful was it? We do, basically, a very thorough analysis, and then we release those. Oftentimes, we’ll release them with Facebook or Twitter. They’ll do a takedown, and then we will be one of the independent researchers that analyzes the data set. That’s bucket one.

Renee: Bucket two is we develop technology and methodologies for proactive detection of campaigns. We believe that while finding influence operations after the fact teaches us a lot about them, we should be taking that learning and transferring it into ways to find and mitigate operations at the earliest possible stage. To that end, right now, we’re doing a lot of election integrity work, looking specifically at how we do early detection of narratives related to voter suppression or misleading information about ballots, that sort of thing.

Renee: Then the third bucket is taking all that other research and turning it into something that policymakers can use to better understand how the information ecosystem works, and then how it should work. If there are gaps there, where we see repeatedly certain types of manipulation or misleading processes, what are the ways that we can implement change to be more preventative, so that those things don’t happen? Sometimes that’s with policymakers at a state or federal level. Then other times that’s actually engaging directly with policy teams at the tech platforms. One example would be saying something like, “Hey, you guys should really label state media in tweets. That would be a great thing to have done.” That’s just one example: when you see state media repeatedly being involved in spreading particular types of narratives, saying, “Hey, allow them on the platform, but maybe we could do more to ensure that the public is properly informed.”

Jim: Yep. State media, talking about people like RT, who are quasi state media. What would you say is state media? What do you mean by that?

Renee: Some of our early research that informed that policy was actually looking at China. It was looking at CGTN, China Daily, the range of China’s remarkable state media properties, which have very, very many followers, including on Western social media platforms. We look at the relationship between broadcast and social media. On the broadcast front, we do kind of include print in there, because there’s no good way to say “all media but social.” We’re looking at all possible channels, and we treat social as yet one more channel, understanding how broadcast media information on the internet is also oftentimes a part of achieving influence. For example, during the emergence of the coronavirus pandemic, we began to pay pretty close attention to what Chinese state media on Facebook and Twitter were saying, and contrasting that with not only the secret, surreptitious, automated and persona accounts that China was running, but also looking at how the covert side of the operation and this very overt, attributable state propaganda operation work in concert to put out a particular narrative or convey the Chinese point of view on coronavirus. That’s just one example.

Renee: Many different countries have state media. It’s not de facto problematic. It’s more that certain state media will oftentimes be a little bit looser with the truth. When that’s happening repeatedly, just creating a system whereby anybody who encounters this content at least knows that they’re getting information from state media, I think, is a goal that we had towards improving the informedness of the public. It’s pretty tough to know the name of an editor of a Chinese state media account on Twitter. When you see that tweet, or you see that content, it’s not immediately obvious that that person may have an agenda or an editorial line. Including a little bit more of a labeling function to let people know that that’s happening is something that we thought was a policy worth advancing.

Jim: Have the platforms taken you up on that?

Renee: The platforms did make that change, actually. Twitter has a label now that’s actually very, very well done. They label not only the state entity itself, but they also label significant employees, meaning the main editors. Now when you see tweets from a variety of state media, and they’re constantly evaluating what entities belong on that list, you’ll see a little label under the account that lets you know that this is attributable to a state entity. It just provides a little bit of extra context.

Jim: It sounds like a good win. You’ve had some influence. Just curious, do they also tag, say, something like Voice of America as state media?

Renee: They currently do not. That is a very interesting debate. Right now, the question is, how do you define state media? What they did was focus on independence of editorial standards and funding, and there’s very much a spectrum there. BBC and Voice of America are not currently labeled as state media, because they are editorially independent. When you put out a tweet related to Chinese state media, there will be people who come and reply to you and tell you that Voice of America should be labeled as well. That is really an ongoing question: which of these entities should be labeled?

Jim: Interesting. Yeah. It’s all these interesting corner cases, we have to think through, right?

Renee: Yep.

Jim: We’ll talk about some of them, because unfortunately this whole idea of bad faith discourse and vandalism on the internet ends up leading us to a bunch of corner cases that are damn difficult. That’s where the art, and maybe some of the science, of this can make life better for people. Your website says that among the Internet Observatory’s first policy goals is to deliver recommendations on how to jointly protect the 2020 US presidential election, and to deliver those to Congress and the major technology firms. Obviously, this is top of mind to a lot of people. What have you all done in this area?

Renee: Yeah. We have an entity called the Election Integrity Partnership, which has its own website, actually. If you Google for Election Integrity Partnership… embarrassingly, I don’t have the domain off the top of my head. But we have a team of four core research organizations. There’s us at Stanford. There’s the University of Washington, Professor Kate Starbird’s team. There is the Digital Forensic Research Lab, the DFR Lab, out of the Atlantic Council. Then there is a company called Graphika, which, if you spoke with Phil Howard at Oxford, he works very closely with Graphika to do some of his research. Graphika has an excellent team; researcher Camille François is over there. All four of us have both quantitative and qualitative analysis capabilities. We’ve chosen to focus, again, as I said, rather narrowly. We don’t want to be the fact-checking police any time President Trump or Vice President Biden says something about the other that’s not true. There are other people who are working on that.

Renee: What we’re doing is focusing quite narrowly on misleading information related to the ballots, the process of voting, and the rules of voting: voter suppression narratives and misleading narratives about ballots. We’re focusing on how those situations, really the mechanics of voting, are playing out. What we’ve done is build a broader network outside of just the four research organizations, one that connects with CISA at the Department of Homeland Security and some other government stakeholders; with state and local officials, secretaries of state, and folks who are responsible for ensuring election integrity in their locales; and with civil society, which is oftentimes the first to see manipulative information targeting their community. A variety of civil society organizations, including some that are seen as more on the left or more on the right. Again, this is nonpartisan. Then, let’s see, the last stakeholders, of course, are the tech companies. We do work with and communicate with the platforms to ensure that when we see something that merits a second look, we’re able to communicate that to their teams as well.

Jim: Yeah, are they giving you access to their data?

Renee: We use a variety of tools. We have CrowdTangle through Facebook. We have various API research access through Twitter. Again, the things that any academic can apply for, so nothing special. We develop tools using a variety of APIs and ways to ingest data that is accessible to researchers. Then, because there is stuff that we don’t have access to because of user privacy and other constraints, we would take something and surface it to the platforms and say, “Hey, this merits a second look, with the additional visibility that you may have into what is this account? Is this account behaving anomalously in terms of its logins or its device, or is it connected to other accounts in certain ways, as co-admins of a page?”

Renee: Things that we can’t see, but the platforms can see. We have this pipeline where any entity that sees something anomalous can flag it, and a consortium of researchers will look at it, and then it will be elevated to the appropriate people: to the platforms to investigate it, or, if there’s something that needs to be communicated to the public, it would be communicated to potentially the media, potentially local media, and in particular to secretaries of state or others who would be able to put out a PSA, again, to mitigate the impact.

Jim: That sounds like you guys have a pretty good working relationship. I know in the past, both Tristan Harris and Philip Howard have complained about the inability to get good data from the platforms.

Renee: I think we’ve made some progress. Phil’s team did an analysis of the Russia data set provided to the Senate. So did I. When the Senate asked for that analysis into the social media data sets back then, that was actually the first time that information was provided to researchers. We weren’t in communication with the platforms at all during that process. It was very much: the platforms provided the data to the Senate, and the Senate asked independent researchers to analyze it. The relationship has really changed quite dramatically, beginning in, I would say, early 2018. For a couple years now, there’s been more progress. It’s not perfect yet, but it’s definitely light years beyond where we were in 2016.

Jim: Wow, that’s good. What are you all seeing? Do you have any trends or issues that you’re seeing, particularly in this narrow area of people trying to do voter suppression or put out misinformation about voting rules, things of that sort?

Renee: Yeah. It’s pretty fascinating. The adversaries have evolved since 2016 as well, as one should expect. What we see is there’s foreign and domestic misinformation and disinformation. This is not something that only Russia does. One of the challenges has been, how do we recognize that there are certain types of activities that are okay when Americans do them, but inauthentic when executed by foreign actors? There’s always that process of looking for disinformation campaigns executed by manipulative actors.

Renee: Then there’s the dynamic of misinformation that goes viral. Somebody’s got something wrong, said something wrong, made a claim. Sometimes there’s a deliberateness to it, a hoax or disinformation. We use “disinformation” to refer to something that’s deliberately misleading, and “misinformation” to refer to something that is accidentally misleading. In the case of misinformation, the community continues to spread the story because they sincerely believe it. They think that they’re altruistically helping their community, whereas the people who are involved in spreading a disinformation campaign in the early stages know that what they’re saying is inauthentic or false or being manipulatively distributed.

Renee: We’re looking for both types of activities. Again, either can be executed by a domestic or a foreign actor. We are not really distinguishing along those lines when we’re looking at the narratives, but in terms of how the platforms respond to them, there’s a little bit more variability in what kinds of accounts they decide to take down versus what is labeled and continues to stand as a free expression issue. Again, with voting misinformation, the platforms have all changed their policies within the last six months to make them quite stringent, to ensure that even misleading domestic information comes down quite quickly or is fact-checked quite quickly. They’ve articulated a range of topics and areas that they’re going to intervene on, everything from labeling tweets to throttling virality, ways to minimize the spread of this stuff.

Jim: Interesting. If you combine, let’s say, domestic voter suppression with micro-targeting, you’ve got a pretty powerful combination. You target very precisely who you know to be the other side’s voters and hit them with voter suppression stories. Is that what you’re seeing?

Renee: Well, that’s one of the things that we’re looking for. That’s certainly a possibility. That’s where Facebook has been, even just last week, releasing changes to their ad targeting and what kinds of entities are allowed to run ads. Again, as you mentioned earlier, a lot of the challenge is the edge cases. A lot of political advertising is quite valuable, particularly if you are a new candidate running in a small local election and, say, you want to take on an incumbent. That’s the thing where you wouldn’t want to create policies that would prevent somebody from being discovered. Of course, if they’re running a local race, they would want to do something like target by zip code.

Renee: However, at the same time, the same thing that’s a tool in one person’s hands can be used as a weapon in another person’s hands. A lot of the challenge is how you set policy in such a way that recognizes that there are bad actors who can use the same things you’ve tried to provide, so that something you envision as enabling democracy, in the wrong hands, can in fact be quite detrimental.

Jim: Yeah. Let’s take that case. Interestingly, just, frankly, for shits and grins, when Facebook first announced that you could register as a political advertiser, I did. I went through the minor hoops to get approved to run political ads. I haven’t actually run any. But I do run the occasional ads to promote my podcast episodes, and I have not really looked into whether the targeting tools available for the political ads are different than they are for the other ads. Do you have anything to say on that?

Renee: I’m also registered to run political ads. Back in 2015, I started a page related to vaccinations, a pro-vaccination page, just as, like, a mom activist at the time. Because vaccines are considered a hot button issue, in order to keep the page going, if we wanted to continue to run ads in the future, all of us admins had to get licensed: fill out the card that comes in the mail and stuff. I have gone through the process.

Renee: The targeting tools have changed a lot since 2015, God. Oftentimes, an investigative journalist will find a loophole or will reveal a way in which manipulation can take place, and then the tool is adjusted after the fact. Running ads related to vaccines back in 2015, if you typed in “vaccine” and you wanted to ad-target based on interests, only anti-vaccine results would appear, which was very interesting. It was because the tool was drawing on what people were putting into their profiles and what pages they liked, and there were very high-profile anti-vaccine pages.

Renee: You could ad-target somebody who had liked the National Vaccine Information Center, which is an anti-vax organization. But there was no comparable large pro-vaccine organization. This is a problem with social media: there’s an asymmetry of passion. The true believers in conspiratorial groups will be far more active, creating far more content and growing pages that are far larger. This was a dynamic that we saw even in 2015. Then the ad targeting tool would surface that activity, would recognize that this was some distinct interest group, and would afford you the ability to target them.

Renee: But if you wanted to target the opposite, there was no similarly passionate group that you could target. What we wound up doing was actually that zip code level targeting. Just saying, okay, we need people calling representatives to advocate for vaccination policy in the following zip code areas. That’s how we’re going to run our ads. We’re going to abandon interests and just go with zip codes and certain age demographics and stuff. It was always a challenge. When you are a small entity that has a very limited budget, the value of that targeting is that you can execute activism campaigns with relatively low spend. That’s, of course, the ideal form of this: what it allows grassroots organizations to do is, for not very much money, grow a movement. Again, the challenge at the time was there was a lot of concern that if you were to limit, for example, anti-vaccine groups, that would lead to a slippery slope of what group would be limited or banned or prevented from targeting people next.

Renee: The tool has really gone through so many different iterations. What counts as a political issue has gone through a range of iterations. Now, there’s a distinction made for political candidates. I’m not a candidate, so I can’t quite see what that interface looks like. But there’s just such a range of changes that are made on a constant rolling basis as loopholes for misuse become apparent.

Jim: Yeah. It’s a classic. When you read a business contract… we recently bought a real estate property, and it was quite funny to see how over the years these purchase and sale contracts have become five times longer than they were 20 years ago, where every weird corner case has its own paragraph to deal with it. I’m sure the platform ad policies have to be similar. As an adamant pro-vaxxer, I have to ask, were you guys successful in getting your page up and running and getting good followers?

Renee: Interestingly… Yes. The answer is yes. We started the page in part because we wanted to pass a very particular law in California in 2015. It was a little bit of a different process. Rather than growing a sustained movement, we set out to pass a particular bill. That means we really organized ourselves more for the sprint as opposed to the marathon. What’s interesting now, in the age of coronavirus, when vaccinations are again such a heated topic of conversation, what with Operation Warp Speed going on to try to get us a coronavirus vaccine, is the strength of the anti-vaccine movement as it’s grown over the last four or five years since we got that law passed.

Renee: A lot of the early learnings that we had: we watched how extraordinarily connected the anti-vaccine movement organizers were, the extent to which platform algorithms were inadvertently amplifying them, and the real downstream harms, the offline consequences, of allowing the anti-vaccine movement to really explode in size and coordination as it gradually became quite interlinked with other conspiratorial communities. The nascent infrastructure of the networked activist communities that we saw in 2015 continued to grow over the subsequent five years. Whereas the pro-vaccine side has had some successes in increasing visibility, particularly as measles has come back in the US and people are concerned.

Renee: At the same time, it did not really enjoy that same kind of algorithmic boosting and did not really invest to the same extent in growing a sustained counter-movement. Watching how that’s continued to evolve over the last five years, that community, which, again, got me into a lot of this, has continued to be such a core dynamic for understanding how misinformation and disinformation spread on the internet in 2020.

Jim: Yeah, that’s interesting, because if you think about it from, let’s call it a biological, evolutionary perspective, the nets are an ecosystem. Memetics: memeplexes, clusters of memes, evolve to adapt to that ecosystem. Sometimes they’re done intentionally, sometimes unintentionally. They’re essentially more or less accidental theme and variation until something succeeds in the ecosystem. They happen to trigger the repetition. They trigger the recommender on YouTube, for instance, or there used to be the trending topics thing on Facebook, and I think they still have it on Twitter. We’re essentially looking at a classic, both Darwinian and human-engineered, set of memes that are trying to propagate in an ecosystem.

Renee: Yeah. That’s very true. I mean, I love the metaphor; virality is a double entendre in this particular case. But one of the things that’s very interesting is watching how just new features, or new prioritization of features, shapes both the engagement with these pages and also how the pages tailor their content in response. Facebook made some efforts to change how anti-vaccine content was surfaced, just to continue using that example. It was conspiracy theories a little bit more writ large, but anti-vaccine health misinformation in particular, because it was having deleterious effects on public health.

Renee: They began to make these changes. They stopped them, for example, from running ads. They stopped accepting money for pushing out health misinformation, while continuing to allow them to run ads for political advocacy. You can run an ad that says, “I believe that vaccination is vast government overreach.” But you can’t run an ad that says, “Vaccines cause autism.” There’s, again, that kind of carve-out: how do you preserve free expression while not allowing factually incorrect health misinformation to put people’s health at risk?

Renee: We see the ways in which the anti-vaccine pages, which for years have talked about the autism thing, all of a sudden move more into “we’re parental rights organizations,” “we just have libertarian sensibilities about this.” Now they’re running most of their content to stay on the right side of the algorithm. The core belief is suppressed. The angle that is still palatable for the social media company that is providing them infrastructure is emphasized instead. We see little shifts like that.

Renee: The other thing is, okay, now they can’t run ads, but they want to continue to grow an audience. Interestingly, they’ll run Facebook Lives. Facebook’s Watch tab is increasingly prioritized. They are using their live video; they want people paying attention to it. When a page is creating live content, that gets bumped up to the top of the feed. Inadvertently, you’re surfacing this content: if you’re following one of these pages and they go live, the engagement on the live videos continues to surface the content in the feed, even though the anti-vaccine topic itself has been suppressed to a large extent in search and in the recommendation engine. While they won’t recommend the group, the live feed content will still be surfaced. It’s almost like a constantly evolving arms race. You fix one problem, and then there’s an unintended consequence that comes with a different feature. It’s a constantly evolving …

Jim: Yeah. Whack-a-mole.

Renee: … ecosystem. Yeah.

Jim: Any evolutionary system is an arms race. That just goes with the territory. We’re going to come back a little bit later and talk about what I would call wackadoodle conspiracy theories, like anti-vaxxers and QAnon, et cetera, and what some of the dynamics of that are. But let’s go back and talk about the 2020 election. We talked in passing about domestic actors who have good game-theoretical reasons for micro-targeting and vote suppression against the people who they believe will vote for their opponent. What about foreign actors? What are you all seeing with respect to foreign actors and so forth?

Renee: Yeah. Facebook just took down a collection of pages attributed to the Internet Research Agency last week, on Thursday I think, or maybe Tuesday. This was a small website called Peace Data, P-E-A-C-E Data. It appeared to be targeting the left, nominally the Bernie Sanders left. An anti-Biden theme, that kind of thing, but also not pro-Trump. There is definitely, demonstrably at this point, attributed activity from a foreign state actor involving itself in conversations around the election. Of course, they don’t limit themselves to conversations about the candidates. They continue to do what they were doing beginning back in 2014, which is targeting social issues. They insert themselves into the culture wars, and, again, the American culture war is doing just fine on its own. Just go on Twitter. Unfortunately, the vast majority of what is hitting trends is these culture war grievances.

Renee: They do take that content. In this particular case, they had made a website. They were writing articles. But the advances, the things that we had begun to see Russia testing outside of the US last year, were now appearing in the US with this attributed site, and that’s hiring local journalists. Hiring real people, real journalists, paying them a couple hundred bucks apiece; they don’t know who they’re writing for, of course. They were actually reaching out to laid-off journalists and offering them an opportunity to write regular columns, regular short political pieces. Some of these journalists, who have now found out that they were unwittingly writing for a Russian front, have begun to speak out about what the recruitment process was like, basically.

Renee: Again, when you are an investigator and you’re looking at one of these sites, you’re seeing real people with bylines, where if I Google this author’s name, here’s their Twitter account, here’s pictures of their family, here’s their vacation photos on Instagram. These are real people. These are not sock puppet personas that are very thinly backstopped. This is instead a real person. That franchising, that hiring of real people who are largely unwitting, to incorporate them into the operation, is something that is happening. They’re mixing the real and the fake. There were some fake personas: the editor of the publication appears to have been a persona, but the journalists writing for it were real. There’s that dynamic. Then, of course, we see amplification. Clusters of accounts that exist to amplify content, for example on Twitter, where they’ll all post a link to an article that they want someone to see, and they will message it to an influencer in hopes that the influencer will see the article and retweet it to their million followers.

Renee: Right now, I would say the two big themes are amplification, again, of existing American grievances, and then this weird hybrid model of trying to have unwitting real people kind of do the dirty work for you. Those are the two themes that we’re really looking at with regard to foreign activity.

Jim: Now, how does that actually tie to elections? It strikes me that, while pernicious, from my point of view it’s not entirely obvious why it would be illegal or wrong to be stirring up culture wars, for instance. I mean.

Renee: Sure. Yeah. That’s one of the really interesting debates: what impact does this have, and who should be allowed to do it? If we look at the 2016 model, there were three things going on. There were attempts to hack online voting systems. There was the social media operation, the Internet Research Agency activity, and with the social media operation, the infiltration of communities, trying to turn unwitting activists into participants in the operation. Then the third piece was the hack and leak. The GRU, which is Russian military intelligence, a completely separate organization from the Internet Research Agency, hacked the Democratic National Committee and the Clinton campaign and began to release the emails. They were really leaning into releasing these emails to journalists, and then to WikiLeaks.

Renee: A lot of the conversation that we’ve had, those of us who have investigated this… my team has looked at both the Internet Research Agency data set and the GRU data set. What we find is that the GRU outreach to journalists, that hack-and-leak operation, really had a remarkable impact on changing the American electorate’s conversation about what topics mattered going into election day. If you recall, the first tranche of emails was dropped as a distraction from the “pussygate” tape coming out. The “oh, look here, Hillary’s emails,” immediately following the Access Hollywood tape, changed the conversation. People focused less on this revelation about then-candidate Trump’s character and treatment of women, and the conversation instead shifted to what was made to sound like quite salacious content in these emails. Some of it was actually interesting. This was where the Pizzagate conspiracy actually originated: a weird interpretation of emails about getting dinner.

Renee: All this is to say, there are different degrees of impact, depending on what kind of attention you manage to capture, and to what extent you manage to shift the conversation to be focused on the topics that you want the citizens of a country to be talking about. With the social media operation, the value of continuing to perpetuate and exacerbate the culture wars is, as we’ve seen, there are actual in-the-streets skirmishes, unfortunately with some regularity, happening right now. Protest movements, for example, the protest and the counter-protest: the ability to really rile up both sides of a grievance or argument, to entice them to go out into the streets and engage in skirmishes, is a thing that we saw Russia do. We saw the Internet Research Agency goading Americans in 2016 into going to protests literally across the street from each other.

Renee: They made one page that was pro-Islam and one page that was pro-Texas secession. They created two events and had two different groups of people go out to the same street and literally protest across the street from each other, one pro-Islam, the other anti-Islam, and police had to come and monitor the situation. There’s YouTube footage of these two different groups of Americans screaming at each other across the street. That’s back in 2016. Now, when you have a much more heated, much more volatile environment, this is where you see the amplification of suggestions that violence is imminent, really, in some ways, acting almost as a tinderbox, nudging it to happen. Does that make sense, Jim? That was a very long-winded explanation. I’m sorry for that.

Jim: That was great, actually. That was very rich. I do remember the Russians and the Texas secessionists and the pro-Islam page. That was a very dirty trick. Though, of course, I have to wonder, don’t we do the same thing in Iran? We probably do. I hope we do, actually, right?

Renee: That’s always the question: is the US doing it, too? I don’t think we do it quite the same way. The mechanics of what the US government can and can’t do are quite different. Of course, historically, yes, that was happening.

Jim: Yeah, for sure. Well, let’s exit the 2020 election discussion here with this: what is your sense, will foreign manipulation be greater this time or less? Have we learned enough to down-regulate some of this manipulation? Or, because there’s been a learning curve on the actor side, could it be greater? What are your thoughts?

Renee: Yeah. There are very bright lines around enforcement for foreign actors. The platforms have this idea of integrity. The teams that are doing this investigation, this kind of constant monitoring of their platforms, are called the integrity teams. The idea of integrity looks at the actors. Are these accounts what they say they are? You can be a real Texas secessionist. But if you’re a Texas secessionist front persona in Moscow, that’s considered an inauthentic Texas secessionist. There are some funny thought experiments you can go through on what exactly is an authentic Texas secessionist. But there is a belief that a foreign person pretending to be an American is inauthentic and needs to come down.

Renee: Then the other two criteria are really looking at the content, not from a narrative standpoint, but more: were these websites created yesterday? Are they blatantly manipulative? Then the other piece is the behavior, the dissemination patterns. There are some bright lines that say, when this is happening, there are certain types of manipulation that won’t stand. The problem is those bright lines don’t really exist when it’s domestic people. When it’s the authentic Texas secessionists who maybe are coordinating to amplify content, there are some real gray areas around what is acceptable versus unacceptable types of coordination.

Renee: If you tell 30 of your closest friends to all post the same thing at the same time, well, that’s a thing that real activists do. You want to achieve a sufficient share of voice all at the same time. If you send that out to your mailing list of 30,000 people to all tweet at the same time, again, the question is, where are the lines around what coordination is acceptable versus what coordination starts to veer into spam or manipulative territory? That, I think, is our real challenge: what criteria justify a takedown, or what criteria justify a flag, for the domestic stuff.

Renee: What you see instead is any time the platform makes a determination that some domestic activity is inappropriate, or does violate a policy line, there is a second wave of drama, and activism, honestly, associated with whether that platform call was fair or unfair. Whichever political partisan side feels that it won or lost in that call, and this is the idea of working the referee, is going to come out and either vociferously protest that they were censored, or actually potentially try to nudge it even further and say, “Well, the platform didn’t go far enough, they should have taken down all of this content, too.”

Renee: Unfortunately, what you have is a second-order domestic battle taking shape around what the platform should do, whenever the platform does anything. I’m not at a platform; I don’t know what the internal dynamics around that are. But this is where you have the sense that everybody feels that they’re being censored. Everybody feels that the platforms are doing a terrible job moderating. Everybody feels that their side’s voice isn’t being heard. This is where you start to see prominent lawmakers who are provocateurs get in and saber-rattle about legislation or penalization, or even the president signing an executive order to defend the freedom of speech of some group of people.

Jim: Yeah. There’s no doubt that it’s become a very hot button, and as you point out, at least parts of both sides are complaining about the platforms themselves being able to put thumbs on the scales. I actually worked in the Bernie campaign in 2016, and I can say at the end of that campaign, many of the other workers, not necessarily myself, but many others, thought that Bernie had been screwed by the platforms. Then of course on the red side there’s a lot of screaming as well that the platforms are deeply biased. Well, as somebody who helped build, not these current platforms, but earlier-generation platforms, I happen to know that, of course, you could put your thumb on the scale if you wanted to. How do we police the platforms to keep them from becoming biased actors on the political scene?

Renee: Yeah, that’s a great question, I think. There have been a couple of independent audits. It would be nice to see more consistency there, more transparency, rather than occasional two-year-long audits. There was one looking at conservative bias. There was a civil rights audit. Again, very different communities that were concerned about how Facebook was treating their content and their community. The platforms do put out these transparency reports where they’ll tell you how many reports were filed and how many pieces of content they actioned on. But there’s not a whole lot of visibility into the specifics there.

Renee: One area where there is a little more visibility into the specifics is actually DMCA takedowns, as in copyright takedowns, where there’s a database that is maintained that lets people have a little more visibility into the specifics of the complaint. I think it’s hard, because the other dynamic that’s happening here is the privacy dynamic, which is: what should the platforms be making public versus keeping private? The prevailing sentiment, particularly in Europe but also in large parts of the US, is that the platforms have too much information about people. If they were to put out more information about the kinds of takedowns, or the specifics, or the ways that they made a particular call, that might have some potential privacy implications. It’s a whole range of challenges.

Renee: I think, ultimately, there is no oversight body for the industry. One of the things I think about a lot is the way the financial industry has these multi-tiered systems of regulation. There’s the SEC up at the top, then there’s FINRA and some of the self-regulatory bodies that act as internal industry watchdogs. Then at the bottom level, the exchanges themselves can set rules and make determinations about how to maintain market integrity on their particular part of the ecosystem. I feel that the tech industry would benefit from a system a little bit more like that, where the platforms have their policies and can do these rapid responses, changing their policy in response to some manipulation. You want them to have that ability to act quickly. Then you have the industry consortiums, particularly on topics like terrorism or child exploitation; those networks do exist. Now we have that same kind of body for election integrity. Again, platforms operating in such a way where they’re communicating with each other, as opposed to only monitoring what’s happening within their own walled garden and not communicating threats or manipulation out to their peers.

Renee: But then there is still that gap up at the top. There’s no SEC-type entity that’s responsible for looking at the overall digital information ecosystem’s health and constructing regulatory policy that would treat this as an ecosystem, which is what it is.

Jim: Yeah. It’s an interesting and difficult problem, because certainly they make mistakes, or at least from a reasonable perspective, it seems they make mistakes all the time in what they take down. Their appeals processes seem to be nightmares of non-action. One of my good friends who’s been on the show twice, Jordan Hall, who’s a real serious …

Renee: Oh, yeah. Yep.

Jim: No one can doubt his good faith and analytical skills. He wrote a very deep article about QAnon on Medium, and he posted it. They took it down. He goes, “Wah!” He went through the appeals process, and they refused to put it back up.

Renee: Yeah.

Jim: All of us friends of Jordan’s said, “What the fuck, dude, what’s this?” Yet they seem to be incapable of reversing what’s clearly an erroneous decision.

Renee: Yeah. I mean, there’s no small amount of that, unfortunately. The first tier of moderation is some combination of AI and contractors, depending on which platform you’re talking about and what the topic is. The AI will get things wrong. It’s really hard to police content in context. If you say the word “bitch,” you can mean it in quite a nasty way, or you can mean it as a term of endearment, depending on how you and your friends engage.

Jim: Or you could be a dog breeder.

Renee: Right. That too. There you go. There was, I think, a Bush’s Baked Beans case: when the ad requirement came in for political ads in the Facebook ads interface, they all of a sudden started getting algorithmically flagged for running political content without a permit, because “Bush” was in the name, even though this is a bean company. There’s ways in which the AI doesn’t behave in the way that it should. That’s trainable to an extent, but for a while, yeah, I don’t think we’ll see anything close to the level of nuance required. There’s those flags that result in content coming down because the algorithm made a decision, or there’s a content moderator somewhere who maybe doesn’t have the cultural fluency or doesn’t fully understand what’s going on. They don’t take something down that some group of people think should be down, or they take it down and people feel it was a false positive. You wind up in the queue, emailing what feels like a robot. Again, it’s going into another ticketing system, where somebody is going to spend all of two seconds on it.

Renee: To address this, the platforms are looking at millions and millions of these things each day, or each week, depending on which platform you’re talking about. There’s always going to be some amount of error. If the error happens to a high-profile enough person, or the situation really hits the right notes and emotionally resonates with a large audience, then you’ll see the “algorithm got it wrong” or “the moderator got it wrong” story go viral. Then the platform will reverse the decision. Then again, there’ll be kind of a second wave of debate about how could they have gotten it wrong and who’s running the show over there.

Renee: It’s a real challenge to think about how you have a moderation system that works: what’s the appropriate amount, and which side do you err on more, false positives or false negatives? How do you think about what moderation infrastructure you want, and then how do you think about what appeals process you want? It’s a morass at this point. I wish I had something optimistic to say about it. I think Facebook’s got this oversight board, which hasn’t quite gotten going; it’s not operational yet. But we’re all waiting to see what that turns into. I think that’s supposed to be an independent group of people who weigh in on major moderation decisions, meaning at a policy level, as opposed to at an individual, per-content-piece level.

Jim: Yeah. Of course, at the per-content-piece level, people just get totally pissed off, legitimately so, when a good faith article gets whacked.

Renee: Yeah.

Jim: I have a crazy idea, let me run it by you, tell me what you think. Any author should be able to put up a stake of money, any amount they want, from as little as, say, $10 up to, let’s say, a million dollars. It’s an even-money bet with the platform that the call has to be sent to an objective third party arbitrator, the American Society of Arbitrators, et cetera. Whoever wins gets all the money. If you think you’ve clearly been dealt wrong, it’s “$1,000, Facebook, god damn it,” and they’re required to then send it to the independent arbitrator. Whoever gets the call from the arbitrator gets 1,000 bucks from Facebook, or Facebook gets 1,000 bucks from the author. That’s an interesting way. Only the really important things would get pushed that way, but it would kind of be a put-up-or-shut-up mechanism.
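[To make the mechanics of Jim’s proposal concrete, here is a minimal sketch of the payout logic. The stake bounds, the even-money terms, and the arbitrator ruling are all taken from his hypothetical above; nothing here reflects any real platform feature.]

```python
# Toy formalization of Jim's proposed appeal stake: an even-money bet
# between the author and the platform, settled by an independent
# arbitrator. Bounds and terms come from his hypothetical, not from
# any real platform's appeals process.

MIN_STAKE, MAX_STAKE = 10, 1_000_000  # dollars, per Jim's example range

def settle_appeal(author_stake: int, ruling_for_author: bool) -> dict:
    """Platform matches the author's stake; the arbitrator's winner takes the pot."""
    if not MIN_STAKE <= author_stake <= MAX_STAKE:
        raise ValueError("stake outside the allowed bounds")
    pot = 2 * author_stake  # author's stake plus the platform's matching stake
    return {"pot": pot, "paid_to": "author" if ruling_for_author else "platform"}

# Example: an author stakes $1,000 and the arbitrator rules the takedown was wrong.
print(settle_appeal(1000, ruling_for_author=True))  # {'pot': 2000, 'paid_to': 'author'}
```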

Renee: No, I get it, the skin-in-the-game argument.

Jim: Exactly.

Renee: I totally get it. Well, the challenge with Facebook is that $1,000 is something they’re earning in a microsecond. I think that’s one of the challenges. But I do think about how you ensure that bad actors aren’t flooding the moderation appeals line just to distract people from looking at other things. It’s, of course, a tactic that trolls do use.

Jim: Yeah. It’s game theory all the way down, unfortunately. It’s predictable. I remember in the relatively early days of Reddit …

Renee: Oh, yeah, yeah.

Jim: … when brigading got started. Can you believe it, when brigading was brand new? I think, as far as I know, it started on Reddit, because of the fact that on Reddit the up-votes and down-votes are so significant in what gets attention. There were organized armies, and they were very public about it; it wasn’t even against the rules.

Renee: Oh, totally. Yeah. On brigading, we just did this takedown analysis with Facebook; they took down a set of accounts out of Pakistan, and the story came out last Tuesday, in the first week of September. A lot of what they were doing was these were groups of people who were coordinating in Facebook groups to go report accounts that they saw as being enemies of Islam, or enemies of Pakistan. This is an international phenomenon. It’s not a thing that is unique to American trolls or even American culture; it’s a thing that happens everywhere.

Renee: It’s funny hearing you say brigade; it’s a term I use also. I’ve spoken with a couple folks who cover tech, and I use the word and they’re like, “What is that?” It’s actually this very, very old thing where you motivate people to go take action against a hated other community. This is human nature on the internet. Brigading is an old, old, old phenomenon. It’s just how it manifests, depending on which feature set you have, or what algorithm is going to up-rank or down-rank content. On Reddit, it’s the upvote, downvote phenomenon you mentioned.

Renee: With Facebook, there’s debates about whether the sheer number of comments will level up a post, because it’s based on engagement, and leaving a lot of comments, or a lot of likes, or a lot of reacts, or whatever, can potentially trigger the algorithm to show something. You’ll see brigades trying to propel certain things to the top. You’ll see them all use the same hashtag on a post, so that it will also surface high in search results if somebody is searching for a hashtag on Facebook or Instagram.

Renee: This is really, I think, the big shift the internet delivered to us: people are active participants in the curation process. I feel like that is the one key takeaway for me, as I look at conspiracy theories, state-sponsored trolling, and disinformation campaigns. Ultimately, it is all about getting groups of people to feel invested enough to take an action, getting groups of people to feel invested enough to work to shape a conversation. That is what the internet is really, really good at. That’s why it’s increasingly broken down into a series of factions, one faction battling against the next for attention, to steer algorithmic curation or algorithmic recommendation by providing the signal that the algorithm is going to use to then take that content and propel it even further.

Renee: It’s this idea of participatory virality; that’s the fundamentally different thing in propaganda and information operations today that was not there 10 years ago.

Jim: Or it was there 10 years ago, but it wasn’t as widespread. I would say these games were being played on Reddit 10 years ago, even 15 years ago. But now they’re being played at massive scales on Facebook. Yeah. We get the interaction of a couple of perverse situations. First, particularly once the world goes to an advertising-based model for services, both the service and the various partisans are all engaged in attention hijacking: I want your attention above all else. Then there’s these game-theoretic ways to do it. I don’t know if you’ve ever heard of Campbell’s Law and Goodhart’s Law. These are really interesting concepts. Goodhart’s Law was the original, which is that in business, once you start measuring something, it’s going to get gamed, essentially.

Jim: Campbell’s Law was an extension of that to things like social media: once some set of behaviors produces, from the agent’s perspective, beneficial outcomes, for instance, everybody putting the same hashtag on our favorite post so it gets more attention and hence wins the attention economy game, then those algorithms will become subverted by agentic, game-theoretic behavior. We’re caught in this amazing rat race, and it’s hard to see where the bottom of it is. I’ve been helping build the online world since 1980, believe it or not, when I went to work for The Source, which was the very first consumer online service.
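[As a toy illustration of the Goodhart/Campbell dynamic Jim describes, here is a minimal sketch, with invented numbers, of how a feed that ranks purely on raw engagement gets subverted by a coordinated faction. No real platform’s ranking algorithm is implied.]

```python
# Toy illustration of the Goodhart/Campbell dynamic: a feed that ranks
# purely on a raw engagement count cannot tell organic interest from a
# coordinated pile-on. All numbers are invented for illustration.

posts = {
    "organic_post": {"genuine": 500, "coordinated": 0},
    "faction_post": {"genuine": 50, "coordinated": 2000},  # a brigade piles on
}

def engagement_score(post: dict) -> int:
    # The proxy metric the ranker optimizes: it sums all engagement,
    # so gamed signal counts exactly as much as genuine signal.
    return post["genuine"] + post["coordinated"]

feed = sorted(posts, key=lambda name: engagement_score(posts[name]), reverse=True)
print(feed)  # ['faction_post', 'organic_post']: the gamed metric wins the feed
```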

Jim: I actually designed our second-generation email and forum system. I’ve been thinking about this stuff for a long time. I think back on how naive we were, even in 1990, when the EFF started rolling, and the mantra was “tools, not rules.” We thought that we could develop good enough tools to have emergent good behavior. But god damn it, it turned out that when you add Campbell’s Law to game theory, tools themselves, at least so far, haven’t been able to get the job done.

Renee: It’s interesting because a few years back, Reddit had a terrible reputation as being this massive hive of trolling and brigading, and outrageous behavior, and so on and so forth. One of the things that their moderation framework has now is it really puts a lot of power in the hands of mods at a local level, which is interesting, because it’s something that Facebook and Twitter can’t do. You have this interesting framework on Reddit, where there’s varying degrees of tolerance within a community, and people who were choosing to participate at a smaller size, versus Facebook, which has to make things palatable for a much larger audience size, and same with Twitter.

Renee: The way Reddit operates now, there’s the top-level mods, the site-wide people who are responsible for making sure that nothing outrageous or egregious is happening, and then the lower-level, smaller community mods who do more to set culture and norms within a community and intercede at a lower level. There’s even some really basic ones, like subreddits for dogs standing on their hind legs, I think, or maybe it’s cats standing on their hind legs; I’ve seen both. There, the rule is you can only post pictures that are that one thing. If you were to come in and post a picture of something that was totally different, your post would be deleted. Eventually, you’d maybe be kicked out or prevented from posting.

Renee: There’s an interesting dynamic there where the rules are much more specific to local communities. That’s something that you don’t see as much on Facebook, or places that have to be more broadly appealing, or places with a more one-size-fits-all rule set.

Jim: Though that’s changing on Facebook; more and more of the traffic is going to private groups, or to public groups, for that matter. There, you do have curation power. I am a lead mod on a pair of fairly large Facebook groups. We have very powerful tools. We can take anybody off we want. We can ban them for a while. We make the who-gets-in decisions. We have all kinds of interesting little tools. The Facebook group space is actually forming up to be quite similar, in some sense, to the old subreddit space.

Renee: I’m happy to hear they’ve improved the mod tools. I’m not a moderator of any Facebook groups; I’m a member of many, a mod of zero. But I know that’s been a thing that a lot of moderators have been asking for, to what extent can you have that. But, again, it’s an interesting dynamic outside of the view of the public, the secret groups, where sometimes it’s hard to get a sense of what kinds of behaviors are happening in some of the secret groups and how the platform should handle that from an abuse standpoint. There’s also this move, of course, towards nudging people towards even smaller groups, WhatsApp groups or group chats, which are yet another increasingly private space, where people gather outside of the oversight of the algorithm at all if they’re encrypted. A lot of changing dynamics around how people organize. Where do you put your factions? Where does your faction live?

Jim: Yep, interesting. You have to maintain hygiene. For instance, in my two groups, and I’ve actually fought for the maintenance of this standard, while you have to be admitted to the group, it’s world-readable. Because I believe that hiding in a secret group is bad for the hygiene of the group.

Renee: Yeah.

Jim: If you’re not prepared to have the world read what you wrote, you shouldn’t write it, god damn it. Certainly there are some cases where that is not the case: domestic violence, or people who have embarrassing illnesses or something. But I would suggest that one should be somewhat suspicious of secret groups unless there’s a damn good reason for them. That’s just my own personal bias. I’ve actually gotten into two big fights in my groups over whether to go private, or secret I should say, to make the distinction. I said, “Ain’t happening. If you want to vote me out, put somebody else in, fine, but for the time being, that’s how it’s going to stay.” Well, that’s all very interesting.

Jim: We’re getting a little short on time here. Let’s move to another topic, which you’ve written about, and which I was very, very interested in and still am, and that’s deepfakes.

Renee: Yeah.

Jim: Could you tell people what Deepfakes are and what the state of play is there?

Renee: Yeah, sure. Deepfakes: it was originally a term that referred to generated video, algorithmically generated video. This is not video that is produced and then edited in a manipulative way; this is video that is generated from whole cloth by an AI. Let's use the example of a speech by the president. If you were to take footage of Barack Obama, you could of course, using various image-editing and video-editing tools, potentially splice in different audio or something along those lines, but there would be an original video to go back to. There's something you can look at and see, forensically, that this video has been edited or altered.

Renee: In generated video, the AI is producing it. The original output is this video of, purportedly, the president speaking. There is nothing you can check it against. It's just a video produced entirely by the AI. Originally, Deepfakes took off because some of the first applications were porn, actually; some of the early work was taking adult content and superimposing somebody else's face on it, making a version with a different face. Now, of course, in addition to generating video, you can have AI-generated audio, text, and still images as well. It's come to be a catch-all term for this generated content.

Renee: Now, I think we’re at the point where there have been a number of ways in which the technology has become democratized or ordinary people can use primitive generators, primitive versions. There’s a website called thispersondoesnotexist.com. They just constantly are putting out AI generated faces. Essentially, it’s for educational purposes to show people what the technology can do. But then you also see manipulative actors going and taking those faces, and using them as their social media profile pictures, because, again, you can’t reverse image search and see that, “Oh, this was a stock photo that was cropped, or an Instagram picture that was flipped.” Instead, it’s just a face that exists nowhere else. You might be more inclined to think that it’s a real person, because there’s no immediately accessible way of disproving that.
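A brief aside on why that evasion works: reverse-image search typically indexes a perceptual fingerprint of known images, so an edited copy of an existing photo can often be traced back to its source, while a face generated from scratch matches nothing. Here is a minimal sketch of that lookup logic in Python using the imagehash library; the filenames, the single-entry index, and the distance threshold are invented for illustration.

```python
# Sketch of naive reverse-image lookup via perceptual hashing (imagehash).
# A lightly edited copy of an indexed photo tends to land near its source;
# a GAN-generated face has no source image, so nothing in the index is close.
from PIL import Image
import imagehash

# Hypothetical index of known images -> perceptual hashes.
known = {
    "stock_photo_123": imagehash.phash(Image.open("stock_photo_123.jpg")),
}

query = imagehash.phash(Image.open("suspect_profile_pic.jpg"))
for name, h in known.items():
    distance = query - h  # Hamming distance; small means likely the same image
    print(name, distance, "match" if distance <= 8 else "no match")
```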

Renee: Then with text, the most recent iteration is this tool called GPT-3, and that is AI-generated text. You feed it a prompt, and you give it a degree of creativity, a parameter called temperature. It produces text for you in response to the prompt. If you were to prompt it with the start of a news article, for example, it could generate the remainder of the news article; or if you give it a prompt of a couple of tweets, it'll generate you more tweets. It infers, based on the format of the prompt or the instructions you give it, what textual output you want to see. Now there's this whole world of AI-generated text that, again, is unique, not repurposed or plagiarized and reshuffled, but something generated by the AI.
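For readers who want to see what that looks like in practice, here is a minimal sketch of prompting a completion model with a temperature setting through OpenAI's Python library as it existed around the time of this conversation; the engine name, prompt, and parameter values are illustrative assumptions, not anything specified in the conversation.

```python
# Minimal sketch, assuming the 2020-era OpenAI completion API.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Completion.create(
    engine="davinci",                                   # illustrative engine name
    prompt="Breaking news: the city council voted last night to",
    temperature=0.8,    # the "degree of creativity" Renee describes
    max_tokens=60,      # keep the continuation short
)
print(response.choices[0].text)
```

Lower temperatures make the continuation more predictable; higher temperatures make it more inventive and, past a point, more likely to ramble, which connects to Renee's later observation about long-form output going off the rails.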

Jim: Yeah, it's quite interesting. When the Deepfake videos came out, I was very concerned that this could cause some form of information apocalypse. But it's interesting that it didn't, at least not in the West. I can't think of a single really major exploit that was done with video Deepfakes.

Renee: I think there are a couple of reasons for that. First, just the regular edited videos are still quite effective. The one of Nancy Pelosi's speech slowed down so she sounds drunk, right? You didn't need a sophisticated AI-generated Nancy Pelosi video to do that; somebody just slowed down selected parts and re-released it, and, boom, you had a viral video. But the other thing that I think is interesting, where my thinking has gone on the relative risks of AI generation, is that when somebody makes a video like that, it's going to create a short-term sensational moment; everybody is going to be talking about it.

Renee: But when you have that dynamic of short-term, sensational footage, that moment, tons of investigative journalists begin to dig in, tons of researchers begin to dig in, authenticating it or trying to figure out where it came from and who made it. It draws a lot of attention to the content of the video itself, but also to who would have put it out and how it got amplified. It's not something where you're surreptitiously and subtly influencing over a long period of time, the way that you could with generative text.

Renee: With generative text, you could just have tons and tons of generated content posted as comments that would be undetectable. Or you could have a bunch of Twitter accounts tweeting out generated text. Again, this is a thing that state-sponsored actors do; they usually have humans running it. But here's an opportunity to reduce the cost of doing that, and it also reduces the discoverability, because you're no longer plagiarizing. That's a very subtle, slow thing that happens over time. A little bit of a different strategy: influence in more of a slow-burn, long-game approach, as opposed to the sensationalism of a viral, scandalous video.

Renee: I think it'll be interesting to see. I've been curious whether there'll be some sort of October-surprise leaked audio of some sort. We've seen, particularly in American politics, how many times a politician has been undone by some leaked recording. It was Mitt Romney's, gosh … he gave that speech.

Jim: The makers versus the takers. Yeah.

Renee: Yeah. There have been a couple of these … the leaked audio comes out. It’d be interesting to see if there’s fake leaked audio. There’s a lot of ways that you can do this. But again, that’ll be a very scandalous, sensational moment, and a lot of people will begin to go and investigate. It’ll be interesting to see how these things are used when they’re used.

Jim: Interesting. Yeah. With respect to the Deepfake videos, you actually hit on something here without quite naming it, which is that we developed a social immune system: combining people who will dig in and find out where a thing came from with the fact that most of us, at least, now have some reasonable amount of context in which we look at a video. Suppose we saw a video of Hillary and Bill Clinton telling racist jokes, for instance; while there are probably some people who'd believe it, most would say, "That smells like bullshit to me, probably one of those Deepfakes." Both in the feasibility domain and in the fact, as you point out, that there can be rapid forensic investigation, the Deepfake video thing did not happen anywhere nearly as much as the alarms that were being rung about two years ago.

Renee: I think there were also a lot of public service announcements about it, in a sense. It's one of the few technologies where, even as it was developing, researchers, civil society, and academics were developing countermeasures and detection methodologies, and even Facebook, Google, a range of different tech companies, began to put money and resources behind detection competitions and things like this. Any new technology favors the aggressor in the beginning, until the countermeasures or policies or rules are put in place.

Renee: In this particular case, you had that dynamic happening concurrently with the improvements and developments to the technology. The public became aware that these things existed, too, and that creates an interesting dynamic as well. Adobe had this product called Voco, which was going to be an audio generator. When the early beta was announced, around 2017, I think, what you started to see was some of the president's surrogates, Jacob Wohl in particular, publicly speculating that maybe the Access Hollywood tape had, in fact, been faked, a Deepfake audio generated with Adobe Voco.

Renee: The mere existence of the technology led to certain people insinuating that the technology had been used, even in cases where it hadn't. One of the interesting dynamics was that it creates a skepticism, almost, unfortunately, a cynicism, among people, where belief in any video, whether real or not, increasingly became like a tribal Rorschach test. Is this a thing that I'm inclined to believe? Well, how do I feel about the person in it, as opposed to waiting for an investigation or assessment or take on it? That's a weird place for us to be now, I think, too: the idea that even real video, real audio, is impugned by hyper-partisans, because the mere existence of the technology to fake it is known to the public.

Jim: That's interesting. I call it the epistemic question, where reality itself may be assumed to be fake because we know that it's possible to fake it. That can land us in what some of us call information nihilism, where we say we can't believe anything, which I think is the wrong conclusion, but people are nonetheless falling into it. But as you point out, the applications for GPT-3, 4, 5, 6, et cetera are perhaps more insidious and maybe more difficult to detect, though I do understand that there are adversarial networks already being developed to detect, at least in some contexts, GPT-3. I've actually played with GPT-3 some, and it's good, way good, but it runs out of steam pretty quick.
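As an illustration of the kind of detector Jim alludes to, here is a minimal sketch using a publicly released RoBERTa classifier fine-tuned on GPT-2 output. The Hugging Face checkpoint name is an assumption, and a detector trained on one generator's output will not necessarily catch text from a newer model like GPT-3.

```python
# Sketch: score a passage with a classifier trained to flag machine-generated
# text (assumed checkpoint: OpenAI's GPT-2 output detector on Hugging Face).
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

passage = "The city council voted last night to approve the new budget."
result = detector(passage)[0]
print(f"{result['label']} (confidence {result['score']:.2f})")
```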

Renee: Yeah. No. I've messed with it for the last couple of weeks, too, on a variety of different projects. I'm fascinated by it. I mean, my job is always: what are the ways that this will be misused? Yeah.

Jim: Yeah. Yeah. Yeah. That’s your job.

Renee: We have a researcher account, and I looked at it from the standpoint of: is this more effective for long-form versus short-form generation? How much human curation is required? The sense I came away with is, yes, it does tend to go off the rails and ramble as you get toward long form. Then, of course, depending on how much freedom you give it, you get better or worse outputs. The thing I'm interested in is to what extent it reduces the cost to produce a unit of misinformation, so to speak. Is it still better to hire an army of trolls to write independent content, or do you just generate it with the AI and give it to your one curatorial agent, who then populates the Twitter account? Is that the dynamic that starts to take shape? There are a lot of interesting things, I think, that will come out of GPT-3.

Jim: Yeah, that makes sense. As long as you keep it short, as long as you don't try to write an 800-word op-ed or something; but for a tweet response, it'd be good, probably.

Renee: Yeah, I think so.

Jim: Yeah. All right. Let's see what else we want to talk about here in our remaining time. You guys did a very interesting bit of work, the Virality Project, about COVID-19 pandemic disinformation. In particular, you looked quite deeply into that wackadoodle video Plandemic and how it spread around the world. Would you tell our audience a little bit about that?

Renee: The Virality Project is a project that we've had going now since about March. What we've been looking at is the phenomenon of COVID-19 and how different state actors, and the information environments in particular countries, are reacting to coronavirus. There have been so many different conspiratorial angles, related to everything from where the disease originated to what drugs or treatments work. There's been a lot of politicization, because of the impact the disease has had on certain populations, and a lot of unrest and discontent with government responses, which has led to interesting narratives emerging as well.

Renee: The goal of the Virality Project was to say: we're in a unique environment in which nearly every single country is talking about the same thing. How can we look at how these narratives are taking shape in different parts of the world and on different types of channels? One angle is overt to covert: which countries are using state media to shape narratives, which countries are resorting to troll armies and bot farms. We're looking at which countries are using COVID offensively, so to speak, meaning using it as a way to disparage geopolitical rivals, in service of inflating their own image or attacking opposition of some sort. Then we've also looked at how they're handling these narratives internally. Are they using the opportunity to blame an adversary for coronavirus? How are they messaging information about cures to their people?

Renee: We’ve looked at everything from Chinese state media and what they’ve had to say about it, to conspiracy theorists in the US, and the propaganda they’ve produced and assessed. What spreads? What goes viral? What hops from country-to-country versus staying confined within a country? How are different social platforms handling these information outbreaks on a policy level? It’s been a really interesting ongoing research project for us. It’s been fascinating to have this opportunity to realize that the entire world is talking about the same thing and to really watch how those narratives spread internationally and across platforms.

Jim: What did you find out?

Renee: Yeah. We'll be doing a writeup, I think, in the next month and a half. I'm on maternity leave, but as soon as I'm back, in two weeks, that's my main project. What we've been seeing is a lot of use of all available broadcast channels. There's a huge focus on social media coronavirus misinformation and what social platforms should be doing about it, particularly in the US. But what we see is the ways in which state regimes are using all of the information channels at their disposal. You'll see Iranian state media putting out particular narratives to advance the idea that COVID was a bio-weapon created by the US. Then social media accounts may echo that, but it's a very top-down narrative that spreads through what we call blue-check influencers, regime leaders, regime mouthpieces, and state media mouthpieces.

Renee: One thing we see is that when state media from one country puts something out, oftentimes state media from countries they have a close relationship with will pick it up and amplify it. We see RT putting out commentary saying, "Well, the Iranians are saying that the US created coronavirus." It allows them to amplify the narrative without taking it on as their own. They're constantly saying, "Well, these other guys over here are saying it," but they're still using the opportunity to put it out to their audience. A report by another state media organization becomes "newsworthy" and is used to continue to spread the narrative.

Renee: With China, we've seen a lot of … they rely heavily on censorship within their own media ecosystem, and they don't allow their citizens to use Facebook or Twitter. But they themselves, the government, the blue checks, and state media, are using those platforms to put out the Chinese party line. They're actually running ads to push out content related to their handling of the coronavirus and the story they want to tell: that China saved the world from a much worse pandemic by acting very early. They'll put out articles on that, then use Facebook to boost the posts to ensure that content is seen well outside their borders.

Renee: By contrast, the Voice of America wasn't really doing much of that on coronavirus. We have a compare-and-contrast post looking at how VOA was talking about it at the same time that Chinese state media was talking about it. You didn't really see those conspiratorial tactics from Voice of America. You didn't see them amplifying other state media, the "Russians are saying, the Chinese are saying," et cetera, et cetera. They were just covering the story as it emerged, largely quite neutrally.

Renee: In the US, what we saw was much more activity from bottom-up accounts. Groups pushing along QAnon conspiracy theories were really what was taking hold and receiving a lot of attention. Insinuations that our own government had created coronavirus were also taking hold, interestingly. Rather than insinuating that it was a bio-weapon created by others, our conspiracy theorists said it was a bio-weapon created by us: this idea of vast cover-ups in the vaccine program, concealing that coronavirus was a disease spread through vaccines, and a range of these kinds of outlandish conspiracies.

Renee: Basically, it was a huge range of conspiracy theories, spread in some countries by blue-check influencer accounts, and in other countries a much more grassroots phenomenon; ways in which these narratives were used both to bolster a regime's popularity internally, by saying the virus was caused by outsiders, and to communicate to outsiders that the regime had behaved responsibly. A lot of multifaceted analysis, looking at overt to covert, broadcast to social, and top-down to bottom-up.

Jim: Okay. Well, thanks. That's very, very interesting. This is actually a good chance to pivot to what I guess will be our last topic, because we're getting late here on time. As we talked about earlier, there are good game-theoretic, bad-faith competition reasons why state actors, or maybe even corrupt business interests, would act in bad faith and spread bad ideas. But maybe what's even scarier is that there's an awful lot of good-faith crazy shit out there that's spreading like crazy. You've talked about the anti-vaxxers. What the hell is wrong with people? If you look at the numbers, maybe a hundred Americans have died from vaccines since 1950; no doubt, millions have been saved. I mean, it's not even a close question. The 9/11 truthers seem to be over now, but that was a crazy-ass thing on the internet. The crazy thing of the moment is QAnon.

Renee: Yeah.

Jim: I know you've done a little research, and I know you've talked to some psychologists. What is it about really batshit-crazy stuff that gets so heavily up-regulated on our networks from time to time?

Renee: I think what we started to see was, first, you mentioned groups. There's been a significant growth in the prioritization of groups by Facebook and others, where people are nudged into like-minded communities. This is a normal human behavior that's existed since the beginning of the internet, but it's really become much more of a focus of the platforms to push people into those communities. What we started to see in 2015 was the conspiracy correlation matrix taking hold: if you joined an anti-vaccine group, the recommendation engine would promote a Pizzagate group to you, and then, as QAnon began to emerge, would promote a QAnon group to you.

Renee: Even if you had never typed the word Pizzagate, or QAnon in, the algorithm rightly recognized that the greatest predictor of belief in a conspiracy theory is belief in another conspiracy theory, because it’s more indicative of a particular alignment around trust. If you distrust the government, if you believe that they’re concealing that vaccines cause autism, maybe it’s not that much of a stretch to believe that there’s these vast pedophilia rings that Trump is fighting, or that Pizzagate is a thing.
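To make that mechanism concrete, here is a minimal sketch of the item-to-item logic a group recommender can use: no knowledge of topics, only co-membership patterns. The group names and the tiny membership matrix are invented for illustration; real systems are far more elaborate.

```python
# Sketch of the recommendation dynamic Renee describes: if membership in one
# conspiracy group correlates with membership in another, a simple item-item
# recommender will surface the second group to members of the first.
import numpy as np

groups = ["anti_vax", "pizzagate", "qanon", "gardening"]
# Rows = users, columns = groups; 1 = user joined that group (invented data).
memberships = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
])

# Cosine similarity between group membership columns.
norms = np.linalg.norm(memberships, axis=0)
sim = (memberships.T @ memberships) / np.outer(norms, norms)

user = memberships[2]        # this user joined anti_vax and pizzagate
scores = sim @ user          # score each group by similarity to the user's groups
scores[user == 1] = -np.inf  # don't re-recommend groups already joined
print("recommend:", groups[int(np.argmax(scores))])  # -> "qanon"
```

The point of the sketch is Renee's: the system never needs to know what Pizzagate or QAnon is about; overlapping membership alone is enough to route an anti-vaccine joiner toward them.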

Renee: There's this phenomenon by which conspiracy theorists were pushed into other conspiracy theory groups. I think that was where you started to see the groundwork being laid for the interlinking of these communities. QAnon in particular really became an omni-conspiracy theory. As it grew in popularity, there were so many ways to read into the "secret knowledge" of Q drops that, as various investigators participated in unraveling the secret hidden meaning behind these communications, they would bring in their own read. If the group was populated by people who had been referred in through the recommendation engine because of their anti-vaccine proclivities, naturally, some of that read would be incorporated into the body of knowledge that began to constitute the QAnon canon. Expand that out to a whole range: 9/11 Truth, chemtrailers, anti-government, you name it. It all got read into this massive omni-conspiracy; QAnon became the umbrella for it.

Renee: That's how you get the interesting community and mythology development. Then you have the online factional dynamics: the people who are true believers are very, very inclined to be incredibly passionate about this stuff. They go to Twitter and are in there constantly engaging, because they believe they're fighting a war; they're soldiers in this war for truth. They're engaging constantly and pushing this content out, unfortunately at times resorting to tactics like harassment of celebrities who get caught up in the mythology. Then media, of course, plays an amplifier role as well in covering it. The challenge for the mainstream press is always: what do you call attention to, versus where do you employ a selective silence?

Renee: In the early days of QAnon, when there was coverage, it was almost gawking. But then QAnon became increasingly tied up in the Trump-wing-of-the-party dynamics, with a number of candidates who really used QAnon supporters in their primaries as a source of support. It became increasingly part of the American political ecosystem, with House of Representatives candidates winning their primaries on this energy. That's where you started to see more and more coverage of it in the last couple of months. Then the question becomes: how do you cover it in a way that explains the zeitgeist and the dynamics, while not inadvertently pushing people into it? You've seen this mainstreaming of the topic, where increasing numbers of people have heard of it. Again, the question becomes: by informing the public, do you inoculate them, or do you potentially make them more susceptible to participating in the group themselves?

Jim: Yeah. That’s, of course, a moral question. I mean, these people are allegedly adults. They are constitutionally allowed to believe any kind of nonsense they want.

Renee: Yep.

Jim: As a confirmed atheist, I frankly have the same view about organized religion: just fucking compounded nonsense. Yet it hangs in there for long periods of time. Is it anybody's job to say that you should not be a QAnon believer? Interesting question.

Renee: That's where some of the dynamics around where the line is between conspiracy and cult become an interesting question. Particularly with that online factional dynamic, that community participation, where the orientation is that everybody is a member of this thing because of the shared belief system. Are we going to see more of these decentralized cult dynamics, as various people come to participate in online groups aligned around various weird things released onto the internet, claims people make, or whatever else inspires curiosity and then adherence?

Jim: Yep. As we talked about earlier, that provides an evolutionary context in which the crazy theories that stick get up-regulated and get modified to be even stickier. We should expect more of this, I suspect.

Renee: I think that’s true.

Jim: Okay. Final exit question: what can individuals do to make themselves less susceptible to all these various kinds of exploits? I'll say one thing people can do, but most people won't: every year, I take a six-month break from Facebook and Twitter. I'm just about at the end of month two of my six-month break. Just clear your head of that shit. But most people don't seem to want to do that. Of course, when I come back, I find it's same shit, different day; nothing has changed. I mean, did I miss anything at all in six months? My response when I come back is usually "No." Let's assume people aren't willing to be that extreme. What can people do to have a more valuable, positive experience from their use of social media?

Renee: I think one thing is recognition. There's an interesting dynamic: how do you step outside yourself and understand that you've just seen content that's designed to rile you up? What is your first inclination when you see something you think is outrageous? Is it to re-tweet it, to DM it to your 10 closest friends? Do you feed the outrage cycle, or do you recognize it for what it is? That's something I personally have gotten, maybe, a little more jaded about, because I've looked at this stuff all day long for years now.

Renee: But the questions are: do I need to weigh in on this outrage at this moment? Who benefits from me continuing to forward along that outrage article? Are there ways to make people think about their role as active participants in the transmission of this stuff? That, I think, is a worthwhile effort in some way. Maybe we teach it alongside media literacy. It's not just checking your source; it's also asking what the purpose of creating this content was: so that people like you would forward it along. Are you helping somebody by forwarding this along, or are you just feeding a culture or narrative that keeps people perpetually riled up and angry? That self-inventory is what I try to do now. Do I need to weigh in on this? No. Does the world need my hot take? Nope.

Jim: That alone would make a big difference, wouldn't it? If we all asked, "Is the world going to be a better place if I respond to this?" the answer, more often than not, is no. Well, thank you, Renee, for a wonderful, passionate deep dive into what's going on on the net.

Renee: Thank you.

Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.