Transcript of Episode 71 – Philip Howard on Computational Propaganda

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Philip Howard. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Philip Howard. Philip is the Director of the Oxford Internet Institute and a Professor of Internet Studies at Oxford. He investigates the impact of digital media on political life around the world and he’s a frequent contributor and commenter on global media and political affairs. Howard’s research has demonstrated how new information technologies are used in both civil engagement and social control in countries around the world.

Philip: Hi, Jim. Thanks for having me.

Jim: Yeah. This will be good. This is really interesting stuff. We’re going to talk today, mostly, about Howard’s new book, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives. Howard’s got a cherry job, can you imagine having the job of Professor of Internet Studies at Oxford? Holy moly. It’s pretty [inaudible 00:00:53].

Jim: How could it be any better? The perfect job. Talk about timely. In addition to this particular book, he has written several other books on the impact of the internet and technology on society. Including one that I’m going to dig into, because it just looked interesting. I looked at the brief write up on it, called Pax Technica: How the Internet of Things May Set Us Free Or Lock Us Up. And of course, as often in the world, the answer is most likely both. None of this stuff just cuts one way. Let’s hop into it.

Jim: From the preface of the book, this is an interesting place to start, you said this book is about the teams of people who do the work of embedding new information technologies into our political lives, but use those technical systems to misinform us, distract us and discourage sensible deliberation. This ties into something we fairly often talk about on the show, which the group of people I work with call bad faith discourse. It really is a plague. And as you also point out in the book, this is a direct quote, our most valuable resource possible in a democracy is our attention. As a person with significant background in cognitive science, I often point out that frankly, we are our attention. And you combine bad faith discourse with hijacking our attention and we’re really talking about a plague. Something that is really undermining our ability to be who we are.

Philip: And there’s interesting ways the COVID plague has this informational side to it as well. It’s making many of us sick and it’s the source of a lot of health misinformation. There’s a lot of health misinformation around COVID that is just confusing people, it’s getting people to take risks they wouldn’t normally take. It’s part of the big problem.

Jim: Yeah. It’s really been quite astounding to me. I would say what’s been particularly astounding and disturbing about the COVID misinformation and bad faith discourse, I don’t know if it’s bad faith discourse or rampant insanity, is that even intelligent, well educated people, a significant percentage of them seem to be adopting all kinds of seemingly ridiculous ideas. It may well be that whatever the social media phenomenon has been doing to our ability to sense-make has reached some level of criticality. The undermining of trust in general has gotten to the point that people have no ability to make sense at all anymore. And we’re going to dig into that in some detail.

Jim: I think an important qualification, for our discussion about the book, is to be clear to our audience that you’re basically talking pretty much only about organized, paid disinformation. And you don’t really delve into some of the things we’ve talked about on this show before, which is the emergent unofficial sector of political discourse. Some of it, bad faith. The 4chan, 8chan thing, meme wars et cetera. But this is a very important subdomain of bad faith discourse, that which is organized and paid for by governments, political actors, lobbyists et cetera.

Philip: In a sense, both domains feed off each other. If a campaign can get a bunch of volunteers to generate a story that attacks an opponent, negative campaigning, they’ll do subtle things to encourage it. And formally, under the law, they’re not supposed to coordinate. A campaign shouldn’t coordinate its strategy with those volunteer groups, but it’s not uncommon for a major candidate to have friends who do things under the table for them in service of the same agenda or the same goal of getting elected. The volunteers definitely can work in tandem with the paid consultants.

Jim: And also, I would say, not even the volunteers, just the players out on the net. One of the things we’ve explored in the past is how the memetic payloads are often created by independents, out in the chans often. And then they circulate up to Reddit, to The Donald, which has just recently been taken down by Reddit. But has amazingly rapidly reappeared as an independent platform with as much traffic as it had on Reddit, which is something worth thinking about. And then the memes get upregulated on The Donald, these are the pro-Trump memes, and then they get picked up by Breitbart and then they get picked up by Fox News and then Trump repeats them, a few of them. There’s a cycle that is basically not driven by these operatives we’re talking about, but is accelerated by them. That’s an important other part of the architecture I would say, which we’re probably not going to get into in great depth. But we’ll talk about how things like bots can facilitate that.

Jim: Another definitional thing, before we drop into some more details is, again from the preface, what connects these arguments is that first and foremost, politics is a socio-technical system constituted by both ideology and technology. Maybe you could explain a little bit, I think you did a very good job of it in the book, about how it’s naïve to think of technology as a completely separate domain. It really is part and parcel of how we do politics. Talk a little bit about that.

Philip: It’s about two different parts. It’s about the social system that includes these campaign lobbyists or troll farms in Russia. They can be small groups, but they’re always formal groups. They have job ads and performance bonuses and secretaries and telephones and office space. This isn’t about lone wolf operatives, this is about formal organizations. And then the other part of the lie machine is the technical, the algorithms that take content generated by those organizations and deliver it to your inbox, your social media feed, your Twitter feed. Those algorithms help ensure that it’s relevant content you’re likely to respond to. It’s tailored in some way for you. I don’t think you can tell the story of modern politics without some appreciation of the technical side, the technical platforms and the social organization of it all. It’s one big complex system and it’s a global system. It’s not just based within the US, it involves political players around the world.

Jim: I think that’s one of the things that’s probably mostly new, at least as aimed at the West, is the degree to which state actors, or actors closely affiliated with states, are taking a role. Now, to be honest, the West has manipulated elections all around the world since the 50s. In some sense, it’s turnabout, but the relatively low cost and anonymity that the networks provide has actually made us eat some of our own food, which we’ve been dishing out by more brute force methods for a long time.

Philip: I think that’s one of the big surprises, that there are actually multiple governments who are trying to target voters in the US, Canada, Australia and the UK. I think for many years Russia had some interest in moving public opinion in the US. And China didn’t seem to have much interest, but in the last two years China really has started pushing out more propaganda, more messaging, more media messaging in English and on social media platforms that Chinese citizens can’t actually use. It’s pretty clearly what we would call computational propaganda directed at voters in the West.

Jim: Yep. Not just China too, even Iran apparently is in the game. We’re not sure to what degree and probably anti-Trump, but they’re playing too.

Philip: One of the interesting things as I’ve been doing this research is finding how some governments, some regimes are actually not very good at it, not very good at doing misinformation. I think we’ve caught organizations in Venezuela and Iran, India and Pakistan, they all have little IRAs, little Internet Research Agencies that try to copy what the Russian government set up, perhaps around 2012. But they don’t tend to have a big impact, they tend to mostly be involved with Twitter. They tend to get caught and exposed and most of the creativity, frankly, happens amongst the PR firms in the West or with the Russian information operations.

Jim: China, too. I wouldn’t underestimate China. They have a lot of skill and they often do blunder at first, but they also learn pretty rapidly. I suspect that we’ll see China as a big foot in this in the years ahead. Now, let’s talk a little bit about what differentiates these social media platforms from at least qualitatively similar forms of targeting in the past. We’ve always had targeted direct mail. I happen to know Richard Viguerie who was the inventor of modern direct mail back in the late 70s. He had an incredible data operation, bought micro targeting information from Acxiom and some of the other data aggregators. And a big part of the Reagan political machine was essentially fueled by Richard Viguerie and his mastery of direct mail. And then, a little later, there’s the email thing. It was pretty funny, my mother and her republican friends were passing around some of the most absurd shit imaginable by the mid 90s. And of course, we’ve always had micro-targeting, though less fine-grained than what’s available today, on things like TV advertising for politics.

Jim: Famously, at least in the United States, almost all of the political advertising goes into the so-called battleground states. A handful of states in which it’s thought the election will turn and further, they target specific geographic areas on specific TV networks and then even in specific shows. We’ve always had a fair amount, or at least since the 60s, a fair amount of targeting and such. What, in your mind, makes the world of social media platforms qualitatively different than what we had in the past?

Philip: One of the big differences between direct mail campaigns and the modern computational propaganda machine is the narrow targeting. I think it’s fabulous that you know Richard Viguerie and his work for Reagan. That was a big innovation at the time. That was a system that took voter registration files and address records, sometimes they were kept on punch cards and fed through computers that used punch cards, just to calculate what kinds of letters should go to what kinds of prospective donors and prospective voters. Most of that messaging was pretty broad, pretty general. It wasn’t AB tested, the way a modern ad is on Facebook. If we look at some of the big campaigns, if a candidate for office or a lobbyist has enough money, they’ll produce a couple of different ads. They’ll test them to see which ads play well, they’ll test them to see which ads play well for men, which ads play well for women.

Philip: Behavioral researchers tell us that women respond well to a deep voice and men respond well to a high pitched female voice. And there’s all these other things that have to do with, frankly, skin tone and look and appearance and age that will make it possible to customize an ad so that you get a message from someone who looks like you. And that narrowcasting delivery is a pretty big difference from what we used to get over TV. Before political campaigning in the US started playing with direct mail and direct mail databases, most of the political messaging was only ever broadcast around election day or during major wars. And then, it was mostly the big governments that would spend big money on political advertising in significant ways. Now, it’s incredibly cheap, relatively inexpensive to run some ads, to buy ads yourself to do this AB testing that I was just talking about or to run a basic campaign on social media. It doesn’t cost a lot of money and that’s exciting. It’s exciting in the sense that it’s democratizing, we all have access to the technology, we can all express ourselves politically.

Philip: But, it means that there’s a lot more peripheral voices and extremist voices and sensational content because to get those clicks, to grab somebody’s attention online, you need to use potty mouth words and lots of exclamation marks and have crazy, slightly altered, images. That’s what makes a modern campaign grab attention.

Jim: Yep. Of course, it is true that in the Richard Viguerie and the direct mail world, there was a lot of AB testing. In fact, ABCD testing, typically. I actually ran a business back in the 80s that was principally driven by direct mail and we did a 16 part test actually. AB testing itself isn’t new and I know for sure Viguerie was doing it. But rather, it’s the fact that you can go from 16 to 2000. You can test 2000 ads simultaneously with subtly different wording, subtly different coloring, positioning, et cetera. And unlike direct mail, where the cycle time to test and learn was about two months, that was the real problem. You could only test a couple of times during an election cycle. With internet advertising today, it’s literally real-time. You can start to down-regulate your spend on the ads that aren’t performing within minutes and then up-regulate the ones that are testing well. It’s actually quite interesting.
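The real-time test-and-learn loop Jim describes, shifting spend away from underperforming ads within minutes, is essentially a multi-armed bandit problem. A minimal epsilon-greedy sketch in Python, with made-up click-through rates purely for illustration:

```python
import random

def run_bandit(true_ctrs, rounds=10000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: shift impressions toward better-performing ads.

    true_ctrs are hypothetical click-through rates for each ad variant,
    used only to simulate clicks. Returns impression counts and observed
    CTR estimates per variant.
    """
    rng = random.Random(seed)
    n = len(true_ctrs)
    shows = [0] * n
    clicks = [0] * n
    for _ in range(rounds):
        if rng.random() < epsilon or 0 in shows:
            ad = rng.randrange(n)  # explore: show a random variant
        else:
            # exploit: show the variant with the best observed CTR so far
            ad = max(range(n), key=lambda i: clicks[i] / shows[i])
        shows[ad] += 1
        if rng.random() < true_ctrs[ad]:  # simulate whether this view clicks
            clicks[ad] += 1
    return shows, [c / s if s else 0.0 for c, s in zip(clicks, shows)]

# Four hypothetical ad variants; over time the bandit concentrates
# impressions on whichever one is observed to perform best.
shows, est_ctrs = run_bandit([0.020, 0.050, 0.031, 0.019])
```

Unlike a two-month direct mail cycle, each impression immediately updates the estimates, which is what makes the minutes-scale up- and down-regulation Jim mentions possible.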

Jim: And as you talk about the democratization, that’s interesting that you mention that because just for fun, I went through the process on Facebook and qualified to be a political advertiser. I don’t know, about three months ago. I don’t know why I did, actually, I’ll tell you why. Some of my Facebook ads for my podcast were getting disapproved, and I go, what the hell? And I’d contacted them and they said well, this could be interpreted as political. Even though I don’t generally talk about political topics per se, more scientific or social trends. I said, “All right, what do I need to do to qualify to be a political advertiser?”. And they told me and I went through the process, it was fairly straightforward, so now I am a registered Facebook political advertiser.

Philip: What kinds of questions do they ask you?

Jim: I think this is probably correct, at least if you take a free speech perspective, which I do, which is that they basically wanted to ratchet up the quality of my identity to make sure they knew who I actually was. For instance, I had to send a copy of my driver’s license, I had to give them a physical address. They sent a hard copy, I think it was a postcard or something, with a code on it, which I then had to log back in and give that code number. It was essentially a ratcheting up of the quality of my real name identity and being able to link it firmly with an actual physical address.

Jim: It was a low barrier, but it was enough to probably make it more difficult for bad actors to be able to advertise on Facebook. As I understand, this is new, this did not exist in 2016, but is something they put in place the last couple of years to at least attempt to trim back some of the abuse of their political advertising system.

Philip: And now that you’ve got permission, are you placing more overtly political ads?

Jim: No. I have not. Only occasionally do I advertise my podcast. Turns out, it’s not a very effective advertising medium for podcasts. Much better is within podcast apps themselves. But I might, as we get closer to the election, just for fun, I might run some experimental ads and just see what happens. I’d be interested in the phenomenon.

Philip: It’s interesting, a lot of the firms in a lot of countries have decided to stop carrying political ads altogether, because it’s too difficult for a firm to monitor who’s a political actor and who’s not and the reporting requirements. There’s multiple countries where Facebook and Google, they just decided to stop taking political money for political ads. I’m not sure that’s a good solution, because I do think that it’s important for political candidates running for office to be able to advertise what they stand for. It shouldn’t be just about raising money, they should be able to get their ideas out into the public sphere. And I think media has an obligation, a social obligation, to carry some of those ads. I’d rather see Facebook take political ads and then do the due diligence to make sure that they’re high quality, from real political actors, saying things that are constructive and not hate speech or nasty and destructive for public life.

Jim: Yeah, that’s interesting. What is that line? What is the content line? I have to admit as a free speech, fairly much, fundamentalist, I’m always suspicious of attempts to constrain speech and I’m actually even skeptical of the hate concept. Because, I’ve noticed already it’s expanding to become a politicized hammer for one side to hit the other. For instance, you’ll see people referring to good faith commentary about limiting immigration as hate speech. Or, the famous case in the United States where a very religious baker refused to bake a cake for a gay marriage. You could disagree with his perspective on it, but it’s not necessarily hate. Again, I am suspicious of these broad categories like hate. I think you got to get much more detailed about behaviors, but we’ll get into that.

Jim: But, before we do, let’s go into a key concept which again, is fairly broad in your book, which is a political lie. You define it much more broadly than, say the Catholic sin of telling a lie and including being incendiary. Why don’t you give us a little bit about your definition of a political lie?

Philip: For me, a political lie is an untruth, something that’s patently false, that’s put in the service of some ideology. It’s a bald-faced lie that’s got some political agenda behind it. And every week we see new examples of what these things look like. I think the real tough thing about studying misinformation over the last few years has been that the types of misinformation are getting really subtle. Just last week, one of the big incidents for those of us who watch this stuff, involved a string of some 20 or 30 fake commentator profiles who had been successfully filing commentary essays with outlets like Newsmax and the Washington Examiner. These were experts who were talking about the Middle East, but they were fake personas. They had particular perspectives on particular countries in the region. But those news outlets didn’t do anything to check who these people were and whether they were real or not. Turns out, they were not real.

Philip: There’s an example of a very subtle campaign strategy that took a long time to catch, that was coordinated and lasted for months, involved 19 fake personas with 90 opinion pieces in 46 publications over the last year. Some of these operations can get quite large. There’s different kinds of lies at different ends of the political spectrum and unfortunately the variety of political lies is growing.

Jim: Who was behind that? Did somebody find out?

Philip: This was something reported in The Daily Beast, I don’t remember whether they located it. I think there were some well placed articles critical of the government of Qatar and in support of tough sanctions against Iran. These are particularly conservative news websites. I don’t remember from the coverage if they could attribute blame. And this is one of the other tricks for doing this kind of research, it’s very difficult for us as independent researchers to verify who’s responsible. One of the only data sets of verified troll accounts I’ve ever played with are the data sets that Facebook turned over from the 2016 election on known Russian fake Facebook users. Accounts that were clearly set up from Saint Petersburg and designed to pretend to be American voters. There’s about 3500 of them and the firm gave this data set to the US Senate Select Committee on Intelligence. And the US Senate Select Committee on Intelligence gave it to us at Oxford University to analyze. That’s one of the few data sets of verified troll accounts that we’ve played with. It’s very hard as an outsider to know who’s behind these fake accounts.

Jim: Yeah. That’s one of the things that’s a little bit different in this new world. Facebook is actually verifying who runs political ads, the ads they contract for, but as you point out in your book, perhaps it’s the junk news that’s actually more powerful than the ads. However, interestingly, in this example you just gave, a likely suspect would be Saudi Arabia, I suspect. Qatar and Saudi Arabia are famously antagonistic, and if it’s anti-Iran, that would point in that direction, which is interesting.

Jim: But let’s go back to this political lie concept a little bit. In the book you actually give a much broader definition than just an untruth. For instance, I’ll read this direct quote from the book, content that promotes undue skepticism, negative emotions, contrarian views for the sake of teaching the controversy, or text and video messages that bring anxiety or aversion to dialog and new evidence also fall into the broad category of political lies. Could you say a bit more about that much broader definition?

Philip: One of the things that’s useful about that definition is that it also captures visual misinformation, which is something we didn’t have as much of in 2016. Visual misinformation is slightly doctored images or images taken out of context. Visual misinfo can bring home the punchline of a political lie, but itself is difficult to verify and difficult to catch in a systematic way. If misinformation is designed to provoke you, to ask a question that doesn’t need to be asked, it’s having an impact. For example, when COVID really emerged in the US as a phenomenon, several prominent public figures wanted to call COVID the Wuhan virus. And the Chinese government responded by tasking its national media agencies to ask the question about whether maybe the virus had originated in a lab in Colorado. And the virus didn’t originate in a lab in Colorado and it wasn’t born out of a lab anywhere. But the strategy, the rhetorical way of presenting this political lie, has been to write news articles that ask the question, where the headline is, did the virus emerge from a lab?

Philip: And then you find a doctor, I think it was a doctor from Northern Italy, who would ask the question and there’s no evidence. But simply asking the question allows you to explore, take some pictures of labs in Colorado and paint a story of how other viruses maybe originate in labs through a variety of experiments. These big arcs, these big false stories can emerge not when somebody makes an overt lie. The Chinese government never said that the virus originated in a lab in the United States, but their media agency asked these provocative questions and found a few experts who asked other questions and that’s what creates this element of misinformation around the origins of COVID-19.

Jim: Yeah. That’s a very important distinction, I think, that we’re talking about here. Not just about explicit lies, but we’re talking about using manipulative messages, hooks to emotion as you say, provocative images, to insert misleading information into people’s minds. Even without actually explicitly lying. And I think this very important distinction comes down to a lot of what makes our current social media perhaps a mismatch for our cognitive capabilities, which we’ll talk about here in a little bit.

Jim: Let’s go onto the next thing and this is really the main model of your book, which is, you take the concept of the political lie and then you posit that there is a lie machine. There’s a three part system that you’ve identified where lies are essentially industrialized and put to work for political state actors, lobbyists, et cetera. How about you lay out the concept of the lie machine and what the three parts are and maybe talk a little bit about what the implications of that are?

Philip: As we talked about earlier, the lie machine has a social aspect and a technical aspect. The technical side is the social media algorithms or search algorithms that deliver content to you. And the social system is the small groups of people who make this stuff up, or the politicians who pay for it and commission it, or the lobbyists who spread it around and fake news journalists who turn it into longer news stories. But when they produce this content, or distribute this content, there’s three stages.

Philip: There’s a production stage and it’s the very first stage where you work out what kinds of messages you want to push or who your target is, which politician you want to take down a few notches, which politician you want to promote. That production stage often involves the testing of messages, creating a narrative or a long story and making up long backgrounds so that if somebody searches, they might find a little history to the misinformation.

Philip: After the production stage there’s a dissemination stage and that’s simply about, as we’ve discussed, paying for the ads. It’s about getting an organic network of humans to distribute stories across Twitter or Instagram. And that distribution stage usually involves trying to game social media algorithms. Trying to get them to push a story further than it would normally go, or to take advantage of maybe how YouTube catalogs entries to make sure that if somebody searches for keywords around your topic, your story, your fake news article, your fake news video will get to the top of the search results.

Philip: After that dissemination phase, there’s a marketing phase where there’s a separate group of people, a different group of people who promote the story and carry it over many months, exacerbate it. As you said, push it down back into 4chan where it can get new energy and get relaunched and sometimes, it’s difficult to anticipate which fake news stories, junk news stories will actually do well in the public sphere. And sometimes, the vast majority of the stories die, they don’t go very far. But then, six months down the road, somebody picks up a nugget of something that was forgotten and runs with it again. There’s this aftermarket of taking stories that have been produced and disseminated and turning them into a long term part of a public conversation. Production, dissemination and marketing.

Jim: Yep. I think the part that was the biggest illumination to me, I should not have been surprised, was how the marketing component has now really come to the fore. And I suppose it’s frankly not that much different than Richard Viguerie and his direct mail marketing, but now applied to the affordances that have been enabled by the social media platforms. Are these marketing firms, are they parts of big marketing companies, the global ad agencies, or are they typically independents?

Philip: They’re a little bit of both. I’d say that the big ad agencies, the big PR firms, do have units now that specialize in this toolkit. I remember seeing another story from Buzzfeed where the author, the journalist, reported how many of the contemporary misinformation campaigns we know about actually have their origins in a major PR firm, not a foreign government. It’s an increasingly common service that the PR firms will offer. That said, the most aggressive and manipulative firms that do this work tend to be overseas.

Philip: In the book I write about one in Poland and another one in Brazil. They work for politicians, political clients and lobbyists. In fact, for one of them, actually the one in Poland, the primary customers were not politicians. The primary customers were pharmaceutical firms. And the example that was given to us was that the firm would hire 10,000 fake Facebook users to be sick in some way, to have trouble managing their migraines and say as much over Facebook. And then, they would hire 10,000 other fake Facebook users to go in and tout a new medicine for managing migraines. These fake users would have conversations with each other on public boards about the exciting new medicine and the ways of managing migraines these pharmaceutical firms had come up with. It’s not always about politicians; some of the most innovative work comes from regular firms that want to push their products, but then political campaign managers will take the good tools and use them for political campaigning.

Jim: This actually hits on something I had in my notes later, but let’s talk about it now, which is information that’s presented as if it’s coming organically from real people hits people in a different way than what is obviously paid advertising. One of the things that we can say about us humans is we may not have had good defenses against advertising in 1890, when mass advertising first started. But we now have a relatively high level of skepticism. I see a political ad on TV, my first assumption is it’s manipulative horseshit. But if we see what appears to be a good faith conversation between actual people on a public board, we think about that differently. Could you talk about that a little bit?

Philip: I like your expression, good faith conversation, because many of us have faith that the internet… For the first decade maybe, the internet brought us mostly truths, it brought us access to information we didn’t have before. There were a lot of free speech values behind the construction of the internet and a lot of reasons to be excited and hopeful about the impact of the internet, the role of the internet in democracy and deliberation. I think we tend to trust information that comes over digital media more than we trust information that comes over broadcast. Now operationally, that means it’s easier to take advantage of people. If you can slip something into the social media stream and you can get somebody’s family members or friends to forward something without actually reading it, then that information comes to us through our networks of family and friends. And we tend to trust it more. It’s one of the cognitive biases you were hinting at earlier, we tend to trust information that comes from people we already know a little bit more than stuff that comes from people we don’t know, who aren’t like us.

Jim: Yep. Hence, the real deep play for these lie machine people is to get things out into circulation that don’t look like obviously paid placements or even journalist point of view things. You go to Breitbart, you know it’s right wing crankery. But if some people start spouting that same lie themselves, it’s likely to slip in under our discrimination radar to quite a good degree.

Philip: Yeah. In the biz we call it the organic content. It’s the content that comes from volunteers, from real people, maybe it has a typo or spelling mistake in it. It’s not clean copy written by a professional communications person. That’s the stuff that we tend to respond more to. And in an interesting way, I think there’s a different style to Russian state propaganda from Chinese state propaganda on this question of style. For example, most of the Russian state-backed profiles that we’ve studied actually have a long history and they make posts about flowers and soccer scores and soap operas. They have a long, what we call, legend. They’re around for years and then they start talking about politics. They get into your social media networks, you don’t really notice them or we don’t think of them as plants obviously. But then, they suddenly start going on strong for one political candidate. And that seems to be the Russian state style of producing misinformation.

Philip: The Chinese government style, because it’s really just emerged in the last year or two, is a lot more fast and blunt. They’ll simply buy 20,000 fake Twitter users. The user accounts will be all numbers, they won’t have any pictures, they were created yesterday. They will wake up and push a message or echo something that the state media broadcaster is saying. On the one hand you’ve got the Chinese style, which is about volume, huge numbers of fake accounts that most of us would spot as fake. And then there’s the much more subtle, long-legend, long-storyline accounts that the Russians have been grooming for a much longer period.

Jim: That’s interesting. And my guess would be that per unit of posting, the Russian style would be vastly more effective. However, quantity versus quality, one would have to do the research to determine which is the better overall strategy. The Chinese are not stupid, if they continue to do this, it must work. At least for some segment of their audience.

Philip: Yes. That would be a great research question. I don’t know which one’s more effective. I think one of the challenges for trying to get a political lie into circulation is getting the nugget to go from something that your bots and your trolls are trading around to something that a real human, a prominent Hollywood actor or a political figure will tweet. That’s a threshold effect, because there are a lot of dumb ideas that only bots kick to each other. But when you can get a few humans to notice the conversation and forward the content without reading it, or without thinking about it, there’s this threshold effect. It crosses over into human networks of real users and that’s when more people are likely to see it. There must be some volume threshold: if you buy plenty of accounts and set them up quickly, you may cross that threshold pretty quickly. But I don’t know which strategy’s actually more effective.

Jim: It all depends on the nature of network propagation. At the Santa Fe Institute, where I’ve been involved for many years, some of our researchers study propagation on networks. And it’s very subtle. You can simulate it, which is fun: you can make different assumptions about how the network is connected and how people pass things along or not. There’s typically a so-called phase change or bifurcation point, where either it goes viral or it doesn’t. One could imagine either strategy working, depending on the nature of the payload, the nature of the network, how it’s connected and how it filters. It’s a very interesting problem. I suppose today, people are just solving it empirically by trying things, seeing what works and then tuning and tuning and tuning. But if one were able to actually build predictive models, and we’ll talk about that a little bit later, one could in an automated fashion create content specifically to take advantage of the network’s propagation attributes.
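The phase change Jim describes can be made concrete with a toy simulation. To be clear, this is an illustrative sketch, not any specific Santa Fe Institute model: a message spreads over a random network, each exposed person forwards it to each contact with some probability, and reach jumps sharply once that probability crosses roughly one over the average number of contacts.

```python
import random

def cascade_reach(n, avg_degree, p_share, seeds=5, trials=20, seed=42):
    """Average fraction of a random network reached by a simple
    independent-cascade spread, starting from a handful of seed nodes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Random directed graph: each node forwards to ~avg_degree others.
        adj = [[rng.randrange(n) for _ in range(avg_degree)] for _ in range(n)]
        infected = set(rng.sample(range(n), seeds))
        frontier = list(infected)
        while frontier:
            nxt = []
            for node in frontier:
                for nb in adj[node]:
                    # Each contact passes the message on with prob p_share.
                    if nb not in infected and rng.random() < p_share:
                        infected.add(nb)
                        nxt.append(nb)
            frontier = nxt
        total += len(infected) / n
    return total / trials

# Below the bifurcation point (roughly 1/avg_degree) the message fizzles;
# above it, it "goes viral" and reaches most of the network.
print(cascade_reach(2000, 4, 0.10))   # subcritical: tiny reach
print(cascade_reach(2000, 4, 0.60))   # supercritical: most of the network
```

The same parameters Jim lists (how the network is connected, how people pass things along) map onto `avg_degree` and `p_share`, and tuning them empirically is exactly the try-and-see loop he describes.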

Philip: Absolutely. And that’s something that’s, I think, easier on Twitter, because Twitter has an open API, a programming interface that lets us pull raw data. And with Twitter, you can see social networks, you can see the structure of who’s following and who’s followed by everybody else. That lets you see, when a message gets posted, who retweets it, how quickly, what edits are made. And you can do that analysis over Twitter. You can also do it on Reddit. It’s much harder to do on platforms other than Twitter and Reddit. Those platforms just don’t share.
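As a rough illustration of the kind of analysis Philip is describing, here is a sketch over hand-made, hypothetical retweet records; the account names, follower counts and timestamps are invented for the example, not real Twitter data. A burst of near-simultaneous retweets from tiny accounts, followed much later by one prominent account, is one classic pattern researchers look for:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical records of the kind Twitter's open API made it possible to
# collect: each retweet of one original post, with timestamp and followers.
t0 = datetime(2020, 5, 1, 9, 0)
retweets = [
    {"user": "acct_01", "followers": 12,     "at": t0 + timedelta(seconds=40)},
    {"user": "acct_02", "followers": 9,      "at": t0 + timedelta(seconds=55)},
    {"user": "acct_03", "followers": 15,     "at": t0 + timedelta(seconds=70)},
    {"user": "acct_04", "followers": 8,      "at": t0 + timedelta(seconds=80)},
    {"user": "pundit",  "followers": 250000, "at": t0 + timedelta(hours=3)},
]

def cascade_stats(posted_at, rts):
    """Timing and reach of a retweet cascade: near-simultaneous retweets
    from low-follower accounts suggest automated amplification, while the
    potential audience only gets large once a prominent account joins."""
    times = sorted(rt["at"] for rt in rts)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return {
        "median_gap_s": median(gaps),
        "first_hour": sum(1 for rt in rts
                          if rt["at"] - posted_at <= timedelta(hours=1)),
        "potential_reach": sum(rt["followers"] for rt in rts),
    }

print(cascade_stats(t0, retweets))
```

On real data pulled from an open API, the same two measures, how fast the first retweets arrive and who eventually amplifies them, capture the threshold effect Philip mentioned earlier, where content crosses from bot networks into human ones.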

Jim: I did quite a bit of analysis on Reddit, I don’t know, two or three years ago, where I sucked down all the content on The Donald and analyzed it in various ways. It was actually fun. I thought it was a good thing that the data was available. I could actually track memes in their early days of getting upregulated and predict which memes might go viral. I probably had about a 50% hit rate, which was way better than most people. That transparency of data is useful in certain ways, but can be misused in others.

Jim: Let’s switch directions a little bit. In your view, and you seem to be the man, you’re the internet dude at Oxford, how critical is the internet these days in electioneering? I know that Trump has hired his 2016 social media dude as his campaign manager. And here’s another interesting thing, I couldn’t remember his name so I typed into Google, Brexit internet genius and of course it came back with Dominic Cummings. And what I particularly loved was the headline from The Guardian that was titled, Dominic Cummings: Brilliant eccentric or evil genius? Well, guess what? Dominic Cummings is now one of the key advisors in general to the UK government. In your view, how critical is the internet in electioneering in the political technical process today versus everything else?

Philip: Obviously I think it’s critical. I don’t think it makes sense to talk about modern politics much without some appreciation for how the technical system, the technical features of the internet, work. It’s hard to imagine a modern political candidate winning office without a sophisticated digital media strategy. It used to be that the most important technical side of the house was the pollster. In Richard Viguerie’s day, the pollster was the one who ran the communication side of things: the webmaster reported to the pollster and the digital strategist reported to the pollster. That’s switched now, and it’s often the webmaster, the digital campaign strategist, who manages all the other technical resources. I don’t think you can win without a tech-savvy strategy and today, that means having an Instagram campaign and a TikTok strategy and a Tinder strategy, if you need it. If that’s where you think your voters are, that’s where you need to be with a digital strategy.

Jim: Interesting. I just looked it up because I was curious what the experts think about where the dollars are going to go. Forbes’ recent guesstimate was that in the US 2020 elections, probably 47% of the dollars will still go to TV advertising. Actually, it’s more than that.

Philip: I think your instinct is right, that television advertising is still 80 or 90% of a presidential campaign’s buys. And those are million-dollar ad buys, with very high production value campaign ads. That’s the bulk of the expenditure. I would wonder whether the television ad buys could stand alone without a savvy internet campaign. I would also bet that the internet infrastructure that gives a digital strategist a sense of who’s in which TV ad markets is actually vital now. No presidential campaign covers the entire country with exactly the same ad. They rely on the supply of data from their own analytics firms to figure out which ads need to include which messages, and they run them in specific TV ad markets. I totally buy the argument that the bulk of the money goes to broadcast TV, but those [crosstalk 00:41:48] are a lot more strategic now. And the strategy comes from big data that mostly gets collected over the internet.

Jim: That’s a good point. I hadn’t really focused on the fact that the two together operate synergistically. If you can pre-test memes, pre-test images, pre-test even video on the internet, then you can probably do a better job of targeting your TV. Actually, I’ve now read the article more carefully, and approximately 75% TV, 20% digital, 5% radio is a reasonable way of looking at what the experts think about the 2020 campaign. That corresponds pretty closely to your guesstimate.

Jim: The next step up in building your argument goes from lies to lie machines to computational propaganda. And there, you talk about a quite interesting network of networks. One of the things I found particularly interesting and never would have thought of: you gave an example of even Tinder being used as part of a network of networks. Could you talk a little bit about the highest level framework of multi-network feeds of junk news, Tinder, Facebook, Instagram? Provide a systematic view if you could.

Philip: I call it computational propaganda because that refers to the fake news and the political lies along with the computational systems that take data from your credit card or location data from your mobile phone and figure out a little bit more about who you are and what you’re likely to respond to. That’s computationally intensive work. The Tinder example is a fun one. We caught it in the UK election a few years back, where some campaign managers hired some creative programmers to create fake Tinder profiles, who would flirt and then talk about Jeremy Corbyn.

Philip: And the only reason we know about it is because the campaign managers who created the Tinder bots went onto Twitter on the night after the election, thanked their Tinder bot and named the districts where they thought the Tinder bot had given them a few percentage points’ advantage and helped their candidate win. The punchline in this example is that the computational propaganda doesn’t need to reach everyone. It doesn’t reach all voters across the country. It drills down into swing states or districts where the votes can be really close and one percent, two percent, makes the difference in who wins. And that’s what makes it pernicious too. Because the messages that go into those narrow districts or particular states aren’t seen by the rest of the country. They aren’t seen by the neighboring communities.

Jim: Yeah, and of course, that was true of direct mail also. You could micro-target your message. Though it is true that Facebook, for instance, does have an archive of all political [inaudible 00:44:40]. One of the things they do tell you when you sign up to be a political advertiser is that if you run a political ad, it will end up in their political archive. But the reality is, only researchers ever take the time to look at that.

Philip: That’s true. There’s a couple of caveats I’d add there. Facebook only recently made it possible to get access to that archive and it’s not easy to get access to. There are hoops to jump through, it’s not open to all researchers. And then, they only do that for a few countries around the world. I know they do it for the US, it’s up and running now and it’s accessible. We think it includes most political ads, but if somebody hasn’t gone through the process you went through, hasn’t self-declared as a political advertiser, but is putting up political content, that content won’t be in the archive. And then there are other democracies around the world where elections run and Facebook doesn’t provide the service. Large technology firms definitely have obligations to serve the voters in the US, the country where they were born as firms. But there’s a large number of other democracies that deserve the same level of attention.

Jim: That’s a good point. And the other one is that a mere repository of data doesn’t do anybody any good unless somebody does something with it. I would call out to some investigative journalists with a big budget, let’s say the New York Times or the Wall Street Journal. They really ought to have somebody on that beat, going through those ads and seeing if they can surface what kinds of games people are playing. An individual citizen can’t do it, even a small academic research organization will probably have a hard time doing it. But the Wall Street Journal, the New York Times, they could do it. That’s their job. Guys, get on it.

Philip: And most of the big media outlets these days have some kind of data science team. It’s really hard to do good journalism now without doing, actually, what they call computational journalism, sifting through the big data that comes out of government offices and the big technology firms and looking for stories. That’s an important feature of modern journalism. And I’ll bet it will be important for 2020, for covering the 2020 election.

Jim: Yeah. I hope somebody does that before the elections and says, here are the games both sides are playing. They tell this message to this guy and that message to that guy.

Philip: Catch them out.

Jim: Catch them out and talk about it a little bit, if anybody cares. I suspect it’s going to seem like inside baseball, most people aren’t going to care, but at least it would be nice to have some good eyes on it.

Jim: One final thing before we switch, since we’re getting short on time here, to some proposed remedies. An important thing that you talk about several times is that not only is computational propaganda used to push ideas and advocate for candidates, it’s often used to generally disrupt conversations, increase noise and suppress voting or interest in elections at all. Could you talk about that a little bit?

Philip: What we found from several strategies now is that the goal is rarely to get one particular person elected. Especially if Americans are the target, or Canadians, or Australians. Usually, the goal is to confuse people, to get people to not vote for the most serious candidate. The Russian strategy in 2016 was much more about taking Hillary Clinton down than it was about pushing Trump up. It was much more about provoking angry conversations about race and polarizing people on free speech issues. The goal there is a long-term degradation in our ability to make decisions, our ability to choose politicians who believe in evidence. If you want to bring down your enemy government, one of the best things you can do is make sure they have lousy leaders, leaders who don’t use evidence and don’t make sound decisions. And I think that’s what the existential threat to democracy is with these kinds of campaigns.

Jim: Yep. A group of folks that I work with on the internet, we call that the attack on sense making. A society, to succeed, has to be able to make sense of evidence. It has to [inaudible 00:48:39] in a high quality fashion. This vast, both intentional and unintentional, circulation of misinformation, general stupidity, insanity, whatever you want to call it, has without a doubt decreased our social sense making capability. A number of us believe, frankly, that social media, many-to-many networks and the couplings between networks and these cycles that we talked about, to the chans and back, up and down, may actually be overwhelming our cognitive capability as individuals. It’s just beyond the average person’s, and probably even the above average person’s, ability to make sense of it. And if we are going to try to make sense, we probably need to club together into collective sense making. In fact, we have a little trial group on the internet called Rally Point Alpha, on Facebook if anyone wants to join, that’s committed to trying to make sense. People post things and we comment upon them: is this bullshit or is it not, et cetera. And I will say, even as a person who’s pretty well informed, participating in Rally Point Alpha has been a definite upgrade in my own ability to make sense of the world.

Jim: That may be at least one way for people to respond to technologies and data sources that, frankly, produce results beyond the ability of a single individual to understand and deal with.

Philip: That totally makes sense to me. Looking around the world, the countries that seem to have been slightly inoculated against the effects of misinformation are the countries that have a national broadcaster. Now, I don’t mean that these are countries where the government owns the media. I mean countries where public money is put into something like the BBC here in the UK, or the CBC in Canada. Australia has one too. The countries that have a public broadcaster seem to have that club effect. High quality, professional news sorts out the dreck. Occasionally they make mistakes and there’s always a little bit of bias, but it’s not nearly as bad as some of the other outlets that work in modern journalism. Having a public broadcaster figure out what the top three issues are that we really need to be talking about right now turns out to be a vital part of modern democracy. And I think we’ve lost that when there are so many multifarious sources of information, some video, some text, some extremists and some commentary masking as news. I think your instinct is right, there are probably too many sources, too many low quality or mixed message sources.

Jim: Yep. We have to be able to curate the information that we consume and it may be beyond what us, as individuals, can do and working together with a group of people to co-curate our information flows and inform each other, might be an answer. But, let’s go down to your answers. The last chapter in your book, you talk a fair amount about what you think could be done to level the playing field between the cognitive ability of mere humans and computational propaganda.

Philip: The starting point for this is thinking of ways to prevent the best information about public life from being hoarded by a few private firms in Silicon Valley. There was a time when the Library of Congress was the greatest library in the world, and The British Library had data and evidence and public records, and everyone could do research and build a business, come up with a new law, write a book. Right now, the highest quality data about our behavior, our attitudes and our aspirations sits in Silicon Valley and it’s privately held. The basic institutional-level change has to be that some of that data gets shared, gets anonymized, gets distributed, sits in the Library of Congress or sits in The British Library, so that journalists and independent researchers and faith based charities, all sorts of civil society groups, can have access to it, can play with it.

Philip: Now, getting there is going to be tough and I think the first step involves changing who our devices report to. Right now, when we buy a phone or put in a smart fridge, or put some device in our home, the data that flows from that device only goes to the owners of the networks, and we as users can’t look at our phone and ask it to tell us who’s benefiting from our data. I think we need to rebuild, in the sense that we need to be able to look at any device we’re putting in our home and ask it to tell us who the ultimate beneficiary of our behavioral data is. Once we can get devices to do that simple step, showing who’s benefiting from our data, I think we should be able to add organizations to that list of beneficiaries. That way, I think, we express ourselves civically. If I want to support my favorite coffee collective or a particular politician or COVID researchers, I could do that. I would happily donate my data to medical researchers if I thought it would help with COVID research. But I don’t have that ability right now.

Philip: Just getting devices that are accountable and then having the ability to add actors to the list of beneficiaries, that’s the big infrastructure change I would like to see.

Jim: Do you think that would really work? I don’t know. Isn’t that just adding more gasoline to the disinformation wars? Because some people on the left will give that data to Antifa or anarchists or Marxist-Leninists. People on the right will give that data to white supremacists, to white nationalists, to other kinds of crackpots. Why do you think that will end up improving discourse, rather than just adding gasoline to the fire?

Philip: Maybe my answer, by metaphor, would be that it would make it a controlled fire. In the sense that, right now, we have no sight of any of this. I don’t think we can take away Facebook. We’re not going to get rid of Instagram, we can’t give up Twitter, we’re not going to take everybody’s phones back. We can’t get rid of social media. The best we can hope for is to do a little rebuild and try to change the flow of information. If there are white supremacists making use of my behavioral data and the behavioral data from my credit cards, I want to know, and I would choose to end that relationship. Right now, as citizens, most people most of the time wouldn’t closely police where their data is going. But, if given the chance, I would go through the list of beneficiaries and I would strike some out, and I’d want to add some too. The flow of data is actually now our primary mode of expressing ourselves politically. We vote every four or five years, but every day we generate tons of data that goes off, and people make political inferences from it, but we have no control over that process.

Jim: That’s a radical idea. Still, I’d say I’m fairly skeptical. One, whether that won’t just make the problem worse, and two, whether people would be willing to do very much with it. For instance, both Google and Facebook have long provided the ability to see how you’re defined in their advertising databases, according to your areas of interest and all that sort of stuff. And you can even edit it. They know, for instance, that you belong to a hunting special interest group and you can actually edit that out. I don’t know one person in 100 who’s ever looked at those ad profiles. I’ve looked at them several times, just because I’m interested in such things, but I’ve never done anything about it. Do you really think anyone’s going to take the effort? My model, as someone who’s been building online products since 1980, is that what people really want is appliances. They don’t have any interest, in fact quite the opposite, they don’t want to have any of their time or attention spent on fiddle fucking with the details of their appliances. And asking people to dig in like that just strikes me as very unrealistic.

Philip: I think your instinct is right that most people most of the time wouldn’t care a lot about this. They do tend to care about these things in the few days before people vote. In the week leading up to an election, that’s when everybody’s mind comes around to who you’re voting for, what’s going on with my data, who has my records, who has my voter registration files. I think right now, even if we wanted to, we couldn’t go around to all of the devices. Aside from the examples you offered, Google and Facebook, we’d have to do that with every platform. We wouldn’t have full control, we’d have no say in the data they’ve already collected and already distributed. And frankly, if you tried to do what you’re describing, if you tried to cut back on your data flows, you’d get restricted services. Gmail wouldn’t work the way it should. You wouldn’t have the same internet experience. The firms will say, we need the data if you want that internet experience, but I’m not sure that’s so true. I think a little bit more user control would let us restore that high degree of trust that we had 15, 20, 25 years ago.

Jim: Yep. It’s an interesting problem. As you point out, letting the services have details about our user behavior actually does provide benefits for us in terms of the products themselves. For instance, I’ll use this example because I’m a technologist and still a computer programmer: when I type python into a Google search it knows I’m talking about the computer language, not the snake. If I turned off history, it would not know that. I have actually experimented with turning history off on Google, or using no-history services like DuckDuckGo, much worse experience, sorry. I also experimented with turning off the Zuck algorithm on Facebook, which you can still do, even though it’s a pain in the ass and it will reset about every 36 hours. And guess what? The flow of information is less interesting and less appropriate than using the Zuck algorithm that’s all about manipulating our behavior. However, I think there is a place where you can have the better products without the misuse of the information, at least by paid actors, and this is very radical. This is something [inaudible 00:58:59] Tristan Harris about, I think he agrees in principle but thinks it’s too radical. And that is, to ban advertising, period.

Jim: For a long time, the internet was not mostly about advertising. In fact, most of the products I built were paid subscription products. And today, with the low cost of platforms, those costs could be quite low. People probably haven’t quite internalized this yet, but the total revenue is about $2 per month per user. That’s the revenue. And we know they’re highly profitable, we also know a fair amount of their cost is in their ad infrastructure. I suspect that one could run a social media platform for a buck or two a month at 100 million person scale. Almost all the economies of scale are reached by 100 million. Today, we could have an alternative business model of very low monthly payments for these big platforms. I don’t think anybody would object to paying $2 a month, at least in the United States or in the West.
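Jim's arithmetic here can be written out explicitly. The figures below are his rough estimates from the conversation, not audited platform financials:

```python
# Jim's rough figures: ad revenue of about $2 per user per month, with
# economies of scale mostly reached by ~100 million users.
users = 100_000_000
ad_revenue_per_user_per_month = 2.00   # current ad-driven revenue (estimate)
subscription_per_month = 2.00          # proposed flat subscription fee

annual_ad_revenue = users * ad_revenue_per_user_per_month * 12
annual_sub_revenue = users * subscription_per_month * 12

# At the same $2/month, a subscription replaces the ad revenue dollar for
# dollar, before counting any savings from dropping the ad infrastructure.
print(f"ads: ${annual_ad_revenue:,.0f}/yr  "
      f"subscriptions: ${annual_sub_revenue:,.0f}/yr")
```

On these estimates, both models gross $2.4 billion a year at 100-million-user scale, which is the core of Jim's claim that a modest subscription could fund the platform outright.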

Philip: A subscription [inaudible 00:59:57].

Jim: Yes. Think about this: the incentives are suddenly reversed. In an ad-driven model, they have every incentive to keep you on as long as possible. They pull you into arguments, it upregulates the most controversial shit that you’ll send to other people to start more arguments. Their incentive is to keep you on as long as possible. If it’s a subscription model, say two bucks a month, guess what? Their incentives are exactly reversed. They want to provide as much value to you, so you’ll stay a subscriber, in as short a time as possible, so you’ll expend the least amount of their computational resources. Suddenly, your values and their values are in alignment. Rather than today, where you don’t want your attention to be hijacked, but their economic incentive is to hijack it not only at the micro scale, but at the macro scale, by circulating inflammatory rhetoric. I continue to believe that is the silver bullet. Have our social media, but convert it to a modest monthly subscription fee.

Philip: I think that one of the other advantages of the monthly subscription model is that I think you would get more competition. You’d get other businesses saying that they could deliver better services, slightly less expensive and you might get some diversity in the platforms there.

Jim: Yeah, absolutely. And it’s true that advertising is now micro-targeted, but the biggest advertisers still want to buy in bulk, and it’s easier to convince individuals of a value proposition than to get the interest of the Unilevers or the Procter & Gambles to advertise on one’s little start-up platform. I think you hit on it exactly right. This will provide an ecosystem that allows more competition. Well, this has been an interesting conversation, Philip, I have to say. This is an area of huge importance for all citizens everywhere in the world and I’d encourage those who want to learn more about it to read Philip’s very interesting book, Lie Machines. Thank you very much for being on the show.

Philip: Thanks for a great conversation, Jim.

Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at