The following is a rough transcript which has not been revised by The Jim Rutt Show or Zak Stein. Please check with us before using any quotations from this transcript. Thank you.
Jim: Just a quick plug before we get started with today’s show. If you haven’t checked it out yet, take a look at my mobile game Network Wars on the Apple App Store, on Google Play, or you can find links to both at networkwars.com. Give it a play. It’s a really good game, and it’s a good brain teaser. It’ll make you smarter. So, anyway, end of the plug now onto our show.
Jim: Today’s guest is one of our popular recurring guests, Zak Stein. Zak’s a writer, an educator, and a futurist working to bring a greater sense of sanity and justice to education. He’s also one of the key folks at the Consilience Project and, full disclosure, I am an advisor to the Consilience Project, which is focused on research, publication, and the building of a decentralized movement towards enhanced collective intelligence. Welcome, Zak.
Zak: Hey Jim, it’s good to be back.
Jim: Yeah, it’s always fun. I have yet to have a non-interesting conversation with Mr. Stein.
Zak: Dr. Stein. Dr. Stein.
Jim: Oh, Dr. Stein. Yeah, yeah, yeah. Oh, that’s true. I have found that when people insist on doctor, it’s usually inversely proportional to the rigor of the field they came from.
Zak: Okay. Fair enough.
Jim: Dr. Stein. Anyway, today we’re going to talk about the recent article from the Consilience Project titled Technology Is Not Values Neutral: Ending the Reign of Nihilistic Design. Interesting topic. Definitely got me thinking as a person who has been a designer of products and businesses for over 40 years, most of them in the technology space. So I think we can have a really good conversation here. I’m going to start off by reading a couple of snippets out of the article. Then we’ll just sort of jump in and see where she goes.
Jim: Let’s see. Technologies encode practice and values into societies that adopt them. This happens in many ways, often unpredictably and unintentionally as the second and third order effects of technologies. Well, absolutely. Why wouldn’t that be the case?
Zak: Yeah. It’s in one sense it’s obvious that technology’s created to actualize certain values. You make a hammer because you value constructing a house more quickly without damaging your hands, for example. But then once it’s created the idea that the technology then makes certain new value systems possible and actually makes certain value systems more likely than others is less well-considered. And there’s a common belief that technology is values neutral. That in fact, you can use the hammer to kill somebody, or you can use the hammer to build a house. It’s really not the hammer. It’s about the values of the person using it.
Zak: So in a sense, what we’re trying to do here is return to what you just indicated, which is that it was common sense that this is just obviously true, and trying to understand why it is that people believe that technology is values neutral. And the disconnect between value and technology is what we’re referring to as nihilistic design, which is to say, it doesn’t really matter what’s going to occur in the realm of value as a result of this technology. And there are many reasons that that became the dominant way of thinking about technology design, where you’re just agnostic to value. You’re not really worried about it. And it follows from a form of design which was naively optimistic. It wasn’t that they didn’t care about value; it was that they thought technology always creates good value.
Jim: Or there’s, I would say, a higher-level version, the Adam Smithian type paradigm, which is that the invisible hand will sort out which technologies do good and which don’t.
Zak: Right. Yep.
Jim: Of course that makes a giant leap of faith, which I would argue is definitely wrong, which is that profitable equals good.
Zak: Correct. That’s exactly right. But it’s still that naive sense that there’s a way in which the issues concerning value as they regard technology will kind of take care of themselves in a good way.
Jim: The invisible hand will sort it all out. Don’t worry about it, people. And ignore that man behind the curtain. I love the fact that you chose one of my favorite examples as the first example, which is the automobile. We live in the West, especially in the United States and Canada and Australia, in such an automobile-centric world. We forget how utterly formed our world is by the invention of the automobile. Everything from where we live, to how we work, to how we get our food. I would say at least so far, far more impactful even than the internet. The only one of our relatively recent high-tech inventions that might have a stronger impact is electricity. But the car impacts everything.
Jim: In fact, I read recently a hypothesis, seems reasonable to me, that the automobile led to the sexual revolution. Young teenagers, hot to trot, could easily escape the supervision of their parents by the famous method of driving to some country lane and hopping in the backseat. And then I thought about that a little bit. And I said, “Hmm.” I recall drive-in movie theaters when I was a young kid. That was a cheap and fun family outing. We would go to the drive-in and watch some Western. The whole family and sometimes some other kids from other families that would come with us, et cetera. A good time was had by all. By the time I was 16 or 17, that had pretty much stopped, and mostly drive-ins were just for making out or…
Zak: No. It’s interesting that you relate the affordances technologies make possible with the libidinal economy of the human being, which is to say that the technologies interfaced with the most basic structures of our desire and our psychology.
Zak: I’d say the automobile also made possible, so we know all of the kind of negative things, but it also made possible the national parks, for example. You wouldn’t have had the national parks without the roads through them, attracting people to them. And the ability for people to set out from the city, drive for a couple days, get to a national park, and be able to drive around it and camp wherever they wanted. And you see these pictures of the early days of the national parks, the early Ford Model Ts just filling the area. And that’s just the point, is that as soon as the car makes it possible to travel those distances in that way, a whole new bunch of value opens up. Value which was implicitly in the car, which was built for another reason. Many reasons, but one was to solve the horse shit problem in cities. It was originally the horseless carriage, mostly because horses are a pain in the ass to have in complex urban environments. So the car was built to solve a certain kind of problem, in that we value transportation.
Zak: But then it opened this realm of value, both positive and negative valences of value, which was unpredictable. It was very unpredictable, but it was created at that point, because there was a sense of naive optimism that it couldn’t be a bad thing. That it wouldn’t end up being a bad thing. And as you’re saying, it changed the face of the country, literally the face of it, the freeway systems, the nature of the urban landscapes.
Jim: Yeah. It used to be, the suburbs had to be clustered around the rail lines. The Brooklines in Boston, the Takoma Parks in Maryland, the Evanstons around Chicago. Because there were no cars, they had to be on the streetcar lines or the local commuter rails. But the automobile, particularly after World War II, when cars became reliable and weatherproof and all those things, basically led to the classic sprawl of suburbia.
Zak: Yeah. Well, and it’s known also that the automobile requires other technologies. It requires the refinement of gasoline. It requires delivery of gasoline. It requires the making of roads. It requires certain types of engineering feats in urban environments to create highways and other things. So you get a technology which creates an ecosystem of other technologies around it. And in the case of the car, it creates an infrastructure, which becomes a basic presupposition of our social functioning. And that’s the key thing to get: eventually the technologies which are novel become normal, and we begin to take them for granted, and the values that they presuppose and make possible seem to be just a part of what humans are. When in fact, and this is kind of the point of the article, there’s something about the nature of technology and the nature of humanity, or sapiens, that are deeply, deeply connected.
Zak: I came to these problems through my reading of Lewis Mumford, who’s one of the great philosophers of technology and theorists of the urban environment. One of the last great humanists. And I read him as a philosopher of education, because he taught me that all the technological environments are educating us all the time. That’s basically what he’s saying. He’s saying that the city, which is basically an environment of technologies. A high-rise building in a city is a piece of technology. It’s got all these moving pieces, and it all interrelates, and it’s got plumbing and electricity. These days it’s literally run by a computer, most of these big skyscrapers. He was basically saying that these are educational, socializing, or civilizing, or decivilizing contexts. And it was profound.
Zak: And I’m someone who already works with a broad definition of education. You take it out of the schools; it includes radio, it includes TV, it includes all of it. But to think that it also includes the technological ecosystems and infrastructures. They form our humanity, and the more powerful the technologies get and the more rapid the pace of technological innovation, the more disruptive they can be to our sense of what it means to be human. And so there’s this question: at this point in history, and at this point in the development of technology and specifically digital technologies, we need to take a much more deliberate and considered approach to value in relation to technology and start to really think through, although we can’t think them all through, the second- and third-order effects of what it would mean to have this type of technology, aside from the fact that it would make you rich or that it will sell.
Jim: Yeah. And of course, Game A is a local hill climber. If I put in $10 million, on a risk-adjusted basis, will I get more than $10 million back? That’s the venture startup model, essentially.
Zak: Exactly.
Jim: You talk about the other things that are necessary. The ecosystem. One language I always found useful, from Stuart Kauffman, is the adjacent possible, for instance. Until the automobile exists, and keep in mind there was a real chicken-and-egg problem with the automobile that was not solved immediately, which was fueling. If you look at some of those old pictures of people going to the national parks, the famous trips that Thomas Edison and Ford took, they’d have five-gallon cans of gas on the roof, in these roof racks. And the level of adventurousness to set off cross-country in, say, 1912 was really quite substantial.
Jim: By 1922, much less so, because the adjacent-possible ecological niche of the gas station had appeared and was filled out extremely rapidly. The number of gas stations built between 1920 and 1930 is just some ridiculous number, like faster than ISPs grew between 1990 and 2000. Because the niche was suddenly there. Then of course tire companies, everything that comes around.
Jim: And then you guys give another example, which I believe will turn out to be fateful, the smartphone.
Zak: Yeah.
Jim: The smartphone didn’t necessarily have to be an open ecosystem for apps. And it took a little while, actually. When it first came out, the first six months, maybe even year of the earliest smartphones, there wasn’t much on those app stores that was very interesting. But the adjacent possible, combined with human creativity plus hill-climbing behavior and all kinds of other behaviors, resulted in this explosion of uses of the app phone, and it turned into a radically powerful, very open-ended ecosystem.
Zak: Yeah. No, you’re right. And to say one more thing about the car, it also changed things. So it made adolescent romance possible, but it also changed the relationship to our localities, to our families, to the communities that we lived in. And that sense of adventure you spoke to, it made possible a new sense of adventure and freedom. So there’s a bunch of things aside from just land values changing and other technological accoutrements. It actually changed the culture. It changed the way we think about what’s possible for our families and ourselves, and the smartphone of course does that even more.
Jim: Yeah. Getting back to the car. The other one, when I try to abstract what the car actually is, when I start thinking about things like mass transit: the huge advantage of the car is random access in time and space. I want to drive over to visit my girlfriend in Takoma Park? Whenever I feel like it, I just do it, when I was a high school kid. Well, at least if I could talk my parents into letting me use the car. I usually could in the evening, because they were homebodies.
Jim: But if I had to figure out, okay, I’ve got to walk a mile to the bus stop. Then what’s the bus schedule? What’s the variance on the bus schedule? Then there’s another half-mile walk from there. You’re very much constrained in time and space by the geometry of mass transit, while the automobile is random access in time and space. And that’s huge.
Zak: Yeah.
Jim: And those of us who grew up in suburbia and the car culture think of time and space differently than those who grew up in say New York City, never had a driver’s license.
Zak: Yep.
Jim: And always had to think in terms of, all right, it’s the R line, and then I do this, then I do that. It’s just a completely different model of reality.
Zak: It is. It really is. And that’s an important distinction in that the mass transit, which is also a technological system, will instill a certain relation of time and space and a certain form of sociality and a certain relationship to the rest of your culture.
Zak: And then the smartphone, similarly, is not an isolated piece of technology. It requires a vast planetary-scale computational stack. Even though you hold it in your hand and it looks like it’s just this phone, it requires your personal computer, requires wifi, requires broadband networks and server farms, and literally an almost innumerable list of things if you count.
Jim: Well, at least the built out ecosystem does. Keep in mind, the first iPhone didn’t have wifi. The only connectivity was over the telephone network.
Zak: Yeah. Yeah. But you can see that it allows, like you’re saying, the adjacent possible to fill out. So the full possible ecosystem of surrounding technologies grows, including ones you don’t really need. You can hook your smartphone up to your washing machine to tell you when your clothes are done. It’s like, do you really need that?
Jim: Yeah. Talk about a solution in search of a problem.
Zak: Right. But that’s what it does. Once it’s there, this whole network grows. And then, because the smartphone is an information technology, not just a technology for moving matter around, which is what the car does, moving atoms around. This is moving bits and information around. And so it also falls under the kind of McLuhan-style reading of technological development. So it makes it even more profound that the smartphone bridges the gap between the moving around of atoms, because you can call an Uber and you can get food delivered with your smartphone, and the moving around of information, because you can be propagandized on Facebook on your smartphone. And it affects the basic structures of our communication, which is I think the most profound thing that the [inaudible 00:16:03].
Jim: I think it’s not the most profound thing. I think it’s one thing more profound, which is that it actually significantly modifies our cognition.
Zak: Yes.
Jim: I wrote an essay on Medium called Reclaiming Our Cognitive Sovereignty By Giving Up Your Smartphone. And I brought together a bunch of research on how amazingly powerful the smartphone is, and particularly the way people who are heavy users of smartphones, and I’m not. I often forget to even pick up my phone when I leave the house. I’m just not a smartphone person in a deep way. But for people who are, it breaks up their attention, it changes how they value things. My favorite piece of the research, and this was good-quality, replicable laboratory psychology research: if your cell phone is in sight, even if it’s turned off, it reduces your attentional capacity substantially.
Jim: And there’s a long list of the things that this does to you. And none of them are good by the way. And you look down the list of things that it does. There’s not a one of them that it does cognitively that you’d say, “Yeah, I think I want my nine year old kid to do that.” And yet here this damn thing has pushed itself into the world.
Jim: Anyway, we’ve talked about cars, we’ve talked about cell phones, some really big ones. One of the things I really loved was that you guys, your team of authors (one of the things about Consilience is that it’s a team of authors, as I understand it), talked about a technology at a completely other scale, which was the bathroom scale. Don’t mind the pun there, but…
Zak: Yeah, this was an example I encountered in my research years ago when I was just studying measurement systems in general. And any measurement system involves measurement technologies, even a ruler is the measurement technology. And it was fascinating to see, first of all, the creation of, it’s a sophisticated technology, the bathroom scale, relatively speaking, if you compare it to, let’s say medieval measurement. It’s sophisticated and it’s intimate. It sits there next to your toilet, and it gives you a number about yourself that’s extremely personal. And so the ready availability of the bathroom scale, changed the way we relate to our own health and our own self understanding. And this is most pronounced when you deal with cases of anorexia as a psychologist, because the relation between the anorexic and the bathroom scale is very, very, very complex. And there’s a very strong correlation between incidences of anorexia in societies and the ubiquity of the bathroom scale, which is almost universally in houses.
Zak: And then of course the hypertrophy of weight itself, body weight itself, as an index of health. Most people believe that losing weight per se is healthy for you. And that’s not actually true, but it’s the simplest reduction, to one number, that tells you whether you’re healthy or not. And none of that change of value and self-perception of the anorexic, and none of the change of value in the broad discourse about health, would’ve happened if the bathroom scale hadn’t been made as popular as it was. Now, there is the case that there are certain societies that have bathroom scales that don’t have anorexia to the extent that the United States does. So it’s not causal. It’s about the fact that this technology makes a certain self-understanding possible, specifically a quantitative self-understanding, which creates a feedback loop between you and the technology, and you and basically your self-understanding.
Zak: It’s a very interesting case. And of course around the bathroom scale grew up industries of weight loss and diet pills and fashion and a whole bunch of stuff. So it’s just worth noting that that’s an interesting case, but it’s one of many cases of measurement technologies actually being very important in setting whole trajectories for societies.
Zak: So I looked at standardized testing. So the SAT. Any standardized test is a complex technology. Even more so these days; the way that they’re scored and administered involves digital technologies very intensively. And again, there you have the reduction of personal experience and self-understanding to a single number, the hypertrophying of that number in the institutional contexts that surround the created technology, and then a focusing and a doubling down on the importance of that simplification enabled by the measurement technology. That’s kind of a broad pattern you see in many industries, where you have a new measure, and it’s great, and it’s an innovation of technology, and all of these new ways of valuing things become possible as a result of having that number. GDP is another one.
Jim: Yeah. Another one that people in the history of technology often talk about, in the evolution of society, is the invention of the clock in the 12th century.
Zak: Yeah.
Jim: Before that you look at the sun, you have a sundial. Okay, plus or minus 15 minutes. But now with the clock, you could synchronize everybody. Now, initially it was just locally, within the sound of the clock tower, because clocks in the 12th, 13th century were very expensive, very intricate. A clock was a trophy for a town, if it could afford to have one in its tower.
Zak: Yeah.
Jim: And as soon as they could, they did, because it was such a high status thing.
Zak: Oh, it was. And you don’t get capitalism without the clock. And this is one of Lewis Mumford’s main points: the main technology that drives capitalist modes of production is the clock, because it’s about labor time.
Jim: And being able to measure productivity and all these sorts of things. And of course the other interesting and odd thing that most people today would not know is that while we were synchronized locally, we were not synchronized across space until very late in the day. Every town had its own time until, what about 1870? Something like that.
Zak: Until the railroads. It was the railroads. The railroads forced us to consider something like time zones, but it wasn’t until the early 20th century that we actually had them. And this is true of a lot of standardized measurement. It’s actually fairly recent. It’s a whole other diversion, but it is technology. And it’s important to get that the clock is technology. It’s a very complex piece of technology that was ornate, that took guilds, which had to be created to hold the technological mastery to create the clocks. And it fundamentally changed consciousness in a way that we can’t even really imagine what it was like.
Jim: And then things like standardized time zones allow international commerce to be more efficient. Now, you could do it without them. You could have had yourself a fat book. So, okay, the time in Zurich is 12:17. The time in Geneva is 12:14. And we’re all going to [inaudible 00:22:37].
Zak: You’d still have variability in the clocks. The clocks would run at different rates. Synchronization to one, basically, world clock is a fascinating technological innovation. And again, not all the values instantiated by technology are bad. So let me say that clearly. One of the things we’re trying to do here at the Consilience Project in general is talk about what high-tech futures have to look like. Because there aren’t really any futures that are not high-tech futures, which is worth saying. So this isn’t a luddite design manifesto where we’re trying to say, therefore, do not innovate with technology. We’re actually saying technology innovation is so important to the future, and the technology is going to become so powerful, that we have to think in better ways about how it relates to the human being across all its dimensions, and not just its physical dimension.
Zak: Speaking about the psychosocial externalities that are created by technologies is what we’re trying to do, specifically so that we can have more predictable, less negative psychosocial externalities and start to design for positive outcomes in the psychosocial realm. And that’s basically what we’re trying to convene with the paper, that type of reflection. So it’s not a negative technological thing. The clock did make possible a certain level of coordination between humans, which made the world we live in possible. Because the main thing it allowed for was people showing up at the same place at the same time to do coordinated [inaudible 00:24:11].
Jim: It was actually a way to cooperate.
Zak: It was a way to cooperate.
Jim: And that is the human superpower, cooperation. So it basically ups human capacity, but it had psychological side effects, which we’re probably still not even clear about.
Zak: It’s not clear. And it’s worth mentioning, too, we’d always been trying to make clocks. Some of the largest ancient public architectures.
Jim: Yeah. Stonehenge was a calendar.
Zak: [inaudible 00:24:35]. Yeah. These are essentially big, huge clocks. So we were trying to make a technology that could take the phenomenological experience of time and turn it into an objective time. So again, technology is an inescapable part of humanity, and technological evolution and innovation is inescapable. But right now the main driving forces, and you named it, have been a kind of venture-capitalist, exploratory, nihilistic design approach, which has given us kind of an accidental planetary computational stack, to use Benjamin Bratton’s language. With most of the technological ecosystems we have now and their psychosocial externalities, it wasn’t part of the plan. And we used the example of the GPS on your phone. The GPS is amazing. It truly is. It can get you where you want to go, but if you use it consistently, it also takes away your sense of direction. It wasn’t designed to take away your sense of direction. It was designed to get you where you need to go. But it does take away your sense of direction.
Jim: You talk about sense of direction. We had a friend who lives in Southern California who amazingly would use the GPS to go from her husband’s office to home. And one time the GPS wasn’t working, and she ended up like 30 miles up the road. She just had no sense of space anymore. What the hell?
Zak: Yeah. And I hear a lot of stories like that. It’s almost an urban legend. And that means it’s just broadly true. That’s what, like you were saying, it fundamentally changes your cognition. Literally your brain changes, because your brain’s like, oh I don’t have to worry about that. And your brain’s always trying to save energy, basically.
Jim: Brains are very cool. They’re very plastic. People who are blind can reuse the neurons normally used for vision; they use them for touch.
Zak: That’s right.
Jim: It’s how braille works, etc.
Zak: So being liberated from having to do the labor of holding the directions in your mind is great.
Jim: And actually the mental move of going from a map to the territory is a high level of abstraction. It probably requires a bunch of processing power.
Zak: It does. Yeah. That’s why you have orienteering courses, where you have the compass on the map and you’d [inaudible 00:26:58]. That’s a whole skill set.
Jim: I love that stuff. I love that stuff.
Zak: It’s hard to do. It’s hard to do. And it’s one of those things you don’t think about, like when the grid goes down or other things occur. We’ve been de-skilled as much as we’ve been empowered by the technology, given new capacity, and shown new realms of value. We’ve also been de-skilled by the technology, downgraded as Tristan Harris would say. The more complex it becomes, the less complex we become. And so those types of trade-offs need to be part of what we’re thinking about when we’re thinking about technological design.
Jim: All right. Let’s move forward a little bit here in our discussion, because that’s certainly the case. You guys say it may be reasonable to summarize Facebook’s approach as: our technology connects people; what they do after that is up to them. And what I’ve been able to ascertain from the history of Facebook is, other than probably the first-order drive, which was Zuckerberg just wanting to get laid more easily, I do believe that he did have a sincere view that connecting people is good, full stop, for quite a while. And yet I’m not one of those people who say Facebook is totally evil, because there’s been a lot of good that’s come from Facebook. People able to meet up that couldn’t have, the whole Game B community, and the liminal web, and many other interesting sub-communities of thought and philosophy probably wouldn’t have existed without Facebook, as weird as that is to say. On the other hand, it certainly shows signs of driving us crazy. So talk a little bit about Facebook and being naive. Is it naive or nihilistic? I think it’s important also to make that distinction.
Zak: Yeah.
Jim: That Facebook’s approach is our technology connects people, what they do after that is up to them.
Zak: Yeah. So a few things to say. One is that, again, we’re not anti-technology. And in fact, the social networks and digital media communication infrastructures have made incredible things possible. Things that would never have been… We wouldn’t be speaking, Jim, of course.
Jim: Yeah. You and I would never have met. [inaudible 00:29:04] The nexus of people, the connections, how we got to know each other were entirely mediated through social media.
Zak: Yeah. And so most technologies are created to actualize a value. So nihilistic design doesn’t mean the absence of value, even if the value is to make money. Value is always in human action. You can’t escape it. It’s likely the case that the majority of the social network technologies that have been created that are now creating problems were originally created with the intention to connect people. But there was a point when the predominant value became something else. Even though you’re still espousing the value of connecting people, and you still are connecting people and offering that value, other values stepped in as more important. In the case of Facebook, it was, I believe, revenue. Essentially it became more important to surveil people and to feed them micro-targeted advertisements and particular news feeds to make money by selling ads than it was to actually connect people to one another and to the most important information that they might be exposed to.
Zak: So in this case, I would say that Facebook ends up with a nihilistic design because they go by default to the most base level accepted value of the society, which is money, which is the desire to be maximally profitable, despite what the consequences will be for psychosocial externalities, given your strategy for profitability, which is an attention capture strategy of profitability. And it’s not that making profit’s a bad value. It’s just that it is a value that requires no deep reflection to endorse. So it’s a kind of nihilism, because it’s basically the value that is there when all the other values fall away.
Jim: And I would say that, interestingly, I don’t think Facebook actually tried to optimize on pure economics, at least not until it was a public company, but it did focus on two other things which were strongly correlated with financial results. And this is what you hear when you talk to Facebook people, from what I’ve read. I actually had the guy that wrote the book about Facebook on the show, Steven Levy, really very interesting. Apparently, the two paramount values in the pre-public-company phase, which is really what made Facebook Facebook, were, first, growth. Zuckerberg truly did sincerely believe he wanted everybody on earth on Facebook. And so the organization was incentivized and motivated for pure growth. And the growth number they used was the number of people who were active in the last 30 days. And so that was the prime factor.
Jim: And their second one, and this is the one that really led them astray, of course this is more correlated to finance, is engagement, i.e. how much time did they spend each day? And when you put those two things, especially the second one, into your base, you get to the point where you tell the computer to optimize on that. And again, as we famously know, what happened was that they found that extreme statements, whether they’re true or false, produce higher engagement. Controversy, whether it’s about something significant or not, produces engagement. And so optimizing on engagement probably is what led to many of the effects that Facebook has had. I suppose you could say that’s nihilistic, saying we think engagement and growth by themselves are good.
Zak: And again, nihilism is an absence of that. Extreme nihilism is you don’t believe in any value, which is a performative contradiction, because somehow you value-
Jim: No value.
Zak: [inaudible 00:32:50]. But nihilism is just… And you could almost say it’s like agnostic with regards to value. That it’s opportunistic with regards to value. That it’s unreflective with regards to value. That it doesn’t have a considered set of first principles and first values that it will not move from, because they’re so well justified philosophically. And so that sense of growth and engagement and profit are, again, lowest common denominator values in our society, specifically. So, that kind of fell into that basin of attraction. And then you get the other forms, the notion of moving fast and breaking things. And valuing disruptive technology for its own sake. That just that it is disruptive means that it’s valuable. Those are all things which speak to a kind of nihilistic approach to design. And we’re using that term intentionally, inflammatorily, to get people who are in these fields, who believe they have a strong sense of value, and they do, to start to have those conversations in a more robust way.
Zak: And again, making your tech company woke, isn’t what we’re talking about. Making your tech company ecologically aware or greenwashing it, isn’t the thing. This is about the details of the design process and the interface between the product people, and the tech people, and the founders, and social philosophers. So it goes beyond the importing of contemporary common conventional value and kind of planting a rainbow flag, and it moves into a much more complex discussion about what will this technology actually do to people.
Jim: And now this is important. This is a very important pivot here.
Zak: Yeah.
Jim: Because what we’re talking about are second-order, third-order, nth-order effects. I recently did a very interesting podcast with Jessica Flack from the Santa Fe Institute where we explored, I think the title is, the Nth-Order Effects of the Russia-Ukraine War. And we explored some very interesting things, but here’s the big but. One of the things I learned from my 10 years deeply involved with the Santa Fe Institute and studying complexity is that predicting nth-order effects is fucking hard, damn near impossible. Beyond second-order effects, as n increases, the ability to have any meaningful prediction gets smaller and smaller. I’m just going to throw out something intentionally inflammatory: the idea of engagement as a metric for a business, is it reasonable to assume that that results in QAnon? Is it reasonable for a 23-year-old dude at Harvard to have been able to make that multiple set of steps from engagement? Oh, that’s going to lead to QAnon.
Zak: Right. Well, and could the people who invented the car have predicted the interstate freeway system? And so the basic answer is no. You can’t predict nth-order effects. It’s hard to even predict second and third order effects. And sometimes that statement right there is used to just not consider them. There are obvious second and third order effects, which can be predicted. And then there are possible futures which should be enumerated and thought through, but which can’t be given really solid probabilities. So it ends up being a probability game and an imagination game, because a lot of these things are literally beyond our capacity to imagine, because we haven’t seen this technology ever in our lives be instituted. So the goal here isn’t to come up with a foolproof process. It’s actually to begin having a different kind of conversation during design processes.
Zak: So again, it’s not the engagement itself. It’s the actual technological tools that are being used to increase engagement. So it’s like if you are at Facebook building things basically to addict people to the screen, and you’re using psychology, and you’re using conditioned responses, and the timing of things precisely, and the holding back of likes until people are about to leave, and then you put more likes on so that they stay. If you’re doing that kind of stuff as part of how you’re designing the thing, there are easily predictable second and third order consequences. If I make an addictive substance, and I give it out to people, and I’m measuring whether that’s good or bad based on whether people like it and take it from me-
Jim: The coke dealer model of business.
Zak: Then I’m studiously ignoring the obvious second, third order effects that are psychosocial externalities of creating an addicted population. So in those cases it seems almost disingenuous to say, “Well, we can’t know the second and third order effect. So let’s just build it.” When you’re saying, “Well, no. There are in some realms you can pretty easily see the second and third order effects. In other realms, you can’t.” And so what that means also is that part of what is necessary as technology development becomes more significant, which is to say increases its power and exponential growth, is the monitoring of second and third order effects.
Zak: So outside of information technologies, there are institutional bodies which monitor the usually kind of physical and material externalities of industries. Watchdogs which actually try to monitor how much pollution’s being kicked out of this factory. And so we need to think about that in the realm of digital technologies: what would it look like to systematically monitor the psychosocial externalities that are being created by digital technologies?
Zak: Because we’re trying to monitor the ecological externalities created by technologies. And we’ve been doing that. It took a long time to get there actually, but we’re trying to do that. How much pollution, exhaust, and garbage, and all of the ecological externalities. So at the beginning of the design process, you want to think it through. And then once it’s out there in the wild, you want to monitor, you want to see. Are we getting unexpected psychosocial externalities from this technology? And if so, are we committed enough to non-nihilistic design to actually make changes? We’ve talked about TikTok before. It seems clear to me that TikTok is the equivalent of a massive polluting entity, but it’s polluting the noosphere, it’s polluting people’s psyches. So the question is, why don’t we shut it down the way you would shut down a factory that was illegally dumping stuff into a public water supply? It’s because we don’t think about psychosocial externalities in the same way that we think about ecological externalities.
Jim: Yeah. I’ve had an expression I’ve been using for a number of years: meme space pollution.
Zak: Yes.
Jim: And if we say that polluting the meme space is at least analogous to polluting the ecosystem, we have the beginnings of a mental model. But to your point, we’re not there.
Zak: Exactly. And at the end of the day, the meme pollution ends up being almost like brain damage. The externalities end up being deposited, not in the water and the atmosphere, but in people’s neurological profiles. And you’re seeing this with adolescents with TikTok in particular. Identifiable psychiatric disorders that strongly correlate with overuse of TikTok: tics, like actual tics.
Jim: Yeah, there’s a whole syndrome of performative Tourette’s. What the fuck.
Zak: That’s the thing where it’s like, at some point there needs to be like a Rachel Carson, but for the digital. Maybe it’ll be Tristan Harris. We need someone to raise the alarm high enough that we start to actually be concerned about psychosocial externalities during design, and be monitoring psychosocial externalities after the thing is implemented.
Jim: Yeah. I can tell people, as a person who’s been designing online products for 42 years. Because I’d been hearing about this TikTok thing, not that long ago, about six months ago, I put TikTok on my phone and I tried it. Immediately, within five minutes, I said, somebody has invented fentanyl.
Zak: That’s right.
Jim: This is truly brilliant and evil and bad. And I had it on my phone. I used it three times, and the people know me know that I am a kind of a freak for punctuality. never late for an appointment, very seldom at least. And I feel terrible when I am, and I hold other people to that same standard. Well, my third time using fentanyl, I was 10 minutes late for a zoom conversation that was fairly important. Something that I would never do. I said, “I guess I know how a crack whore feels now.”
Zak: Geez.
Jim: You’re just overtaken by something. Then I said, “All right, let me look at it one more time now, but from a meta view.” The fourth time, I go, I said, “This is the most genius, evil thing I’ve ever seen in my life.” And I deleted it from my phone. So especially y’all parents out there, I’m telling you, this is really bad. Get TikTok off your kids’ phones. Letting your kids have TikTok on their phones, I would suggest, is at least as bad, maybe worse than handing them a carton of cigarettes. And I’m not kidding. And I’m not being alarmist here. I’m a fairly easygoing guy and sort of live and let live. But this thing is just an example of gone too far by a bunch. But okay. Rant over. Rant off. That’s just bad, people. Check your kids’ phones, make them delete TikTok. God damn it. But anyway, let’s talk about the practicality of it. By chance, I happened to have been at the actual beginning of the road that led to TikTok and to Facebook.
Jim: I worked as a young man, my first job in the tech world, at a company called The Source. It was the world’s first consumer online service in 1980, believe it or not.
Zak: Whoa.
Jim: We had chat. We had bulletin boards, we had mail, we had home shopping. We had stock prices. By 1981 or maybe early ’82, we even had a rudimentary thing that was kind of like social media. We had all that stuff in 1980, ’81, ’82. And it was text only. 300 or 1200 baud. Very expensive, 10 bucks an hour. All the infrastructure was new and mostly designed for business. Why did it exist at all? Well, because there are always a few nutjob, cutting-edge pioneers who could recognize, as I did (I saw this and said, “I want to work for this company”), that this idea of the networked world is wow, cool, nifty.
Jim: And so we quickly had tens of thousands of customers, and then a few hundred thousand. And that was literally the beginning of it. You can track what we have today from two places: The Source and CompuServe. Both started at about the same time. The Source had incompetent management and owners, and they eventually got eaten by CompuServe. But I was here at the very beginning. And I can tell you that we thought we were doing God’s work. That is, if there were a God, which there isn’t, people. But note that as well.
Zak: That’s another conversation.
Jim: Another conversation for another day. They’ve been talking about that for 2600 years, but anyway, we were absolutely convinced that what we were doing had to be good for citizenship, education, everything. How could it not be, to have access to the world’s information, be able to learn from the smartest people, irrespective of whether they had fancy positions or not, be able to keep in touch with all your friends? Even though with our 80,000 users, the reality of that wasn’t that great. But we sort of saw where all this was going.
Jim: But on the other hand, this is my point about the practicality of it. Here we are at the root. This was a small, tiny, tiny company. It was funded with $2 million, was chronically losing money. And we had no resources. We were stretched extremely thin, and to say, “All right, in addition to inventing something that never existed before, you also want to lay on us, somehow, thinking through and building a monitoring system for second, third, and fourth order effects? We can barely keep our doors open and our lights on. How the hell can we take on this other stuff?”
Zak: Yeah. Yeah. This is the issue. And when you read accounts of how Google, for example, changed to become basically an advertising company instead of an epistemologically respectable company, that was the main thing. It was practicality. It was how do they keep their doors open? How do they keep their lights on? And so a lot of what needs to occur is for axiological design. That’s the phrase we’re using to counter nihilistic design, and axiology is the philosophical study of value, which includes ethics, and philosophy of mind, and other subfields of philosophy relevant to value. So axiological design requires a whole bunch of changes in the technological incentive system, which is to say the incentives that drive technological design themselves would have to be altered, in order to make possible the space where axiological design could flourish. Under most contemporary conditions, it’s very difficult to outcompete Game A while doing axiological design. But I believe that ultimately, technologies designed from that standpoint can outcompete technologies designed from other standpoints, because they’re actually good for humans. So this is kind of the argument with processed foods versus non-processed foods. It’s like, okay, in one sense they can’t compete, because the processed foods are designed to make you addicted to them, basically.
Jim: Oh, that nice crunch from Cheetos. Holy shit. Carrots ain’t never going to touch Cheetos.
Zak: But if you begin to eat non-processed food, and you eat foods that are whole grains and a proper balance of nutrition and things, you feel a lot better. And then eventually you stop eating the crappy food, because it makes you feel bad. So there’s something similar in the space of technology design, where if alternatives were available, I believe that the deliberately axiologically designed technologies could outcompete. But the problem is the initial lift, right?
Jim: At that very beginning. Because it’s interesting. I look back, and I actually made a few design decisions about products that probably influenced the world. For instance, the email system that we inherited from the time-sharing service we were hosted on had built-in monitoring by the sender of when you opened your email. And when I designed the second-generation email system, I decided that was bad. And so I left it out. And subsequently, all email systems after that did not have the ability for the sender to see when someone opened the email. But of course that’s been subsequently reinvented by various tricks in the HTML space. But for a long time it was… Maybe it was me. Maybe it just happened that some random 26-year-old made a design decision from a principle: none of your God damned business whether I opened your email or not. Okay, no, we’re knocking that feature out. And if it was just a belief in my own head, I probably spent 20 minutes thinking about it, and it probably had some impact in the world. How do you resource these small companies that are making these root decisions?
Zak: Yeah.
Jim: How can the world be organized, so that a company that has barely enough resources to start up can also then do this extra work?
Zak: Yeah. So there are, I believe there are. Because you don’t get the current tech stack that we have, what I call the accidental planetary computational stack, without the kind of mish-mash of incentives that come with the current capitalist system of innovation. So venture capital and those kinds of things. So the idea is that there are basic economic innovations that need to be considered. And this is a broad issue with the meta-crisis in general, which is that we cannot solve the meta-crisis with a bunch of people working on separate problems, not coordinating. It can’t be a ragtag effort where we cross our fingers and hope that it all somehow comes together. And this is true more so in the tech space now than ever.
Zak: So what this means is that we need to set up things that look a little bit like a business commons. Where you have large-scale and small-scale and focused expert groups, cooperation across what would typically be considered to be competitive boundaries. So the thing that drives the kind of negative competitions of tech innovation that made TikTok so successful: they were basically looking at the other attention capture technologies and outcompeting them, until they perfected the attention capture technology. That’s a kind of zero-sum multipolar trap dynamic between these competing tech companies, which is a race to the bottom of lowest common denominator value. So if you can create something like a business commons, where technology companies and startups and innovators agree to pool resources to a certain extent, and personnel to a certain extent, to change the nature of the way they’re thinking about their design decisions. And again, I don’t know how to concretely move that forward, but I can’t see a way that the future is inhabitable for humans, if we continue this level of power of exponential tech innovation, without that kind of considered coordination beyond the agency of an individual company.
Zak: You need to think about the whole ecology of technologies that are being created. And if one technology isn’t going to make money, but it has to exist, it should exist. Just because it can’t make money doesn’t mean that it shouldn’t exist. And similarly, just because a technology could make a ton of money doesn’t mean that it should exist. So those kinds of decisions need to be made. I’m not recommending a top-down government oversight panel on technology.
Jim: That’s a note I put: are we talking here about the Government Bureau of Technical Products? That you need a license from the government every time you issue a technical product? We’d still be living in mud huts and eating with spoons. We wouldn’t even have gotten to the fork yet.
Zak: And the space is too complex to actually be regulated in that top-down manner. So you do need that sense of co-responsibility on the part of the people who are actually playing this game. That type of regulation isn’t necessarily not part of the solution. It may be that certain things can be easily regulated, but you can’t regulate stuff that doesn’t exist yet. You don’t know. So there are always going to be moments of creativity that need to be preserved.
Zak: But as technology becomes more powerful, we will face very difficult choices in this realm. And again, it’s not about getting to a place where it’s clear what needs to be done. We’re actually trying to make it less clear what should be done. Right now it seems clear: if it makes money, do it. But as soon as you get into the realm of axiological design, you have to slow down. What should be done? That’s basically what we need to do. Slow down a little bit. Don’t move fast and break things; slow down and build things. Things that will last, that will be inhabitable in perpetuity by humans, which is actually hard to do. It’s easy to make things that humans are addicted to. It’s hard to make things that are good for humans in the long run.
Jim: Yeah. One other thought. I’ve been having this ongoing conversation with Tristan Harris for 18 months, at least. And I think he thinks I’m nuts for being just unrelenting on this one. And that is, yes, you could do it that way, but that is really hard, fraught with legal and moral questions about when should you coordinate, when should you not. What’s antitrust-type friction? What isn’t? What’s social good? What about popping up to a different dimension and just thinking about the handful of very powerful parameters, where if you change the settings, you change everything? And the one that I keep proposing and he keeps poo-pooing is: what would happen if you banned advertising online? I’m convinced that that single change would massively reduce the harm.
Jim: Because I think back to my days at The Source and then to some other online stuff, and this was all pre-advertising. Our alignment with the user was actually much better. Because it was our job to get them the maximum value in the minimum amount of time, because they were paying by the minute, and the costs of providing services were pretty high in those days. And so we both had the incentive to provide the most value in the least time. Once you go to the advertising model, that’s reversed. The deep incentive is to keep you on for as long as possible to achieve as little as possible, so you have a libido to continue to stay on longer and maybe achieve a little bit more. And so that one single parametric design feature poisons everything, in my opinion.
Zak: So that’s interesting, I think. And there was a point in history when that choice was made basically to make the internet advertisement based as opposed to subscription based. And that’s kind of been documented.
Jim: There was no choice. No choice. And I was right there at the time. I can tell you exactly what happened. It used to be that the revenue density from advertising wasn’t enough to amortize the relatively expensive computer networks and computers. Then around 2000 or so, maybe a little earlier or later, between 1998 and 2004, depending on what type of business you’re in, Moore’s Law kept driving the cost of networks and computers down. People got better at figuring out how to make advertising online more efficient. And eventually the lines crossed. And at that point, advertising just took off, and Chris Anderson wrote this book called Free.
Zak: Yeah.
Jim: Where he predicted all this was going to happen. And he said that if your product can be free, i.e. advertising supported, it will be. And he was absolutely right. Because psychologically, people don’t want to give their credit card to somebody else. That alone is it. Nobody made the decision. It’s just that the economic and technological forces evolved to the point where ad support worked. And at that point, it just subsumed all other models, at least within the consumer space.
Zak: Yeah. Interesting. Yeah. There’s that book by Micah White. I think it’s called The End of Protest, and there’s a part in there where he documents certain meetings that took place with the American Advertising Association and key players in the digital space. But it is the case that were that to change… that was the thing that basically made the attention capture business model the business model.
Zak: But it’s worth mentioning that even if there were no ads on TikTok, it would still be fentanyl, because what you have is user created content, which is categorized by the AI, which is then fed to you based on your preferences. Even if you didn’t have ads, you’d still have customized news feeds for attention capture purposes, I believe, because you would want people to value your site and the more they have fun there and the more they stay looking at it, the more they’re going to value it. So a lot of the negative. So QAnon, for example, is not an issue of advertising. It’s an issue of micro-targeting user generated content, and micro-targeting invitations of people into political subgroups on Facebook, where they’re in discussion with others and they see the posts of others based on their prior likes of other posts. And it has nothing to do with the ads on the left or right. But it does have to do with the AI monitoring and AI curation of what you’re seeing on your screen.
Jim: Yeah. And basically because the metric is engagement, it turns out that for people who are susceptible to the QAnon mind virus, it produces unbelievably high engagement. They drop everything else in their life and become Q people. And so it’s one of the very top engagement drivers.
Zak: So you can imagine a situation where Facebook drops its ads, it charges a little bit from each customer for the service, but the service remains large scale privately-owned behavioral modification. That’s what the service is now. That’s what the whole infrastructure is. And so they could still execute large scale privately-owned behavioral modification for whoever wanted it with that technology, with or without advertisements.
Jim: Yeah, yeah. Though I think the alignment is different, because currently the business model is about maximizing the amount of time you spend. If they were charging, let’s say, $2 a month… What is very interesting: Facebook’s revenue is only about $2 a month per user, because the costs have gotten so low, and Twitter’s is a dollar a month. That’s their revenue. That includes profit and the cost of running their ad infrastructure. So we’re not talking big dollars here. But on a flat fee, let’s say they charge you two dollars a month, they actually want you to be online as little as possible and continue to pay the $2, because all the traffic they have costs them money to host. And so they want to provide the most value, at least in theory, for the least engagement, the least time, but not so little that you stop paying your $2.
Zak: Right. That’s the thing, because they also want you to need it.
Jim: Yeah, exactly. But they want-
Zak: They want to be able to raise the price maybe next month, which means that you need to need it.
Jim: That is true. Let’s take the fentanyl model. The smart dealer gives the first one away free, gets them addicted, and then gradually raises the price. All right. Actually, we’re well past our time, but that’s all right. It’s been a fun conversation. Anyway, let’s go out with the five propositions towards axiological design. You want me to read them to you and you react to them?
Zak: That’d be great. Yeah. Read them through all at once.
Jim: Okay. All at once. All right. Number one, technology is created in pursuit of values and results in the creation and transformation of values. Number two, technology requires the creation of more and different technologies. Multiple new technologies evolve together as functionally bound sets, forming evolving ecologies of technologies. Number three, technology shapes our bodies and our movements as a human-created habitat, and thus is deeply habit forming, both for individuals and societies. Number four, technology changes the nature of power dynamics in unpredictable ways, creating an environment that advantages some humans over others, setting up selection pressures that force personal adaptations to and adoption of new technologies. And number five, technology impacts the kinds of ideas we value, the quality of attention we pay, and our conceptions of self and world.
Zak: Yeah. So these emerged basically as a distillation of a body of literature that’s been articulated since the 1920s. Again, Lewis Mumford started here and then you get Langdon Winner, and then you get more contemporary work and ontological design and values-focused design, values-centered design. And the five propositions are intended to be things that if you agree with them, then it allows you to think about the design process in a more adequate fashion. If you disagree with them, we could argue about it. These are things which when you begin to reflect on technology deeply, that become almost obvious and apparent.
Zak: So for example, the idea that you create a technology because you have a certain value, and then it creates values. We already discussed that. But the deeper one, that technology always affects social power dynamics, for example, is very important to get. Being the first adopter of a new technology that becomes ubiquitous gives you power. And you can use that power to be the first adopter of the next technology. And so basically staying abreast of technological trends is something that confers power on companies, specifically. And as we’ve seen with the internet and other technologies, including the car, just the presence of the affordances those technologies make possible changes people’s own sense of personal power, for better and for worse. It can give you a realistic or an unrealistic sense of the power that you actually have.
Zak: And then the notion that technologies form whole ecologies that are habitats is very, very important. So when you’re thinking about a specific technology, don’t just think about the user and that technology. You think about the user and that technology in the context of the total technological surround. And sometimes that really puts things in perspective, to realize that, okay, the car results in this whole complex ecosystem of other related technologies, as does the smartphone. And if I drop a smartphone app into a technological space that has all these affordances, it’s my responsibility to think about the fact that I’ve done that, to see what the ramifications would be, given that the phone also does all these other things, given that the phone is usually used in these contexts. What will it mean to have the app on the phone, as opposed to just thinking about how the user will use the app? Thinking about what it will mean to have that app present in someone’s life, given that you sleep with your smartphone, and you poop with your smartphone.
Jim: And everywhere you go. Well, I do not. Other [inaudible 01:02:59] people out there, do not put your damn smartphone in your bedroom. God damn it. There’s one life hack that you can do: do not put your smartphone in your bedroom. God damn it.
Zak: It’s also your clock. It’s the clock that wakes you up in the morning.
Jim: Spend $10. Buy an alarm clock.
Zak: So that’s another bit: most of the advanced philosophers of technology always think about the technological [inaudible 01:03:27], or the technological ecosystem or infrastructure. And just doing that allows you to see potential second and third order effects that you wouldn’t otherwise see, because you hold the technology in its actual embodied context.
Zak: So those principles, again, these are not definitive. We’re trying to start a conversation, actually, rather than having solved it. And in a sense, they raise more questions than they answer, because if you start to take them seriously, then the design of technology based on them is different. And it’s a future that we haven’t seen yet. We’ve only seen a future where we create technologies in particular contexts in an isolated fashion and then release them into the wild and cross our fingers and hope that it turns out good in the long run.
Jim: Hope for the best.
Zak: Hope for the best.
Jim: We used to say in business, “I can tell you if you hope in one hand and shit in the other, I’ll tell you which one will fill up first.”
Zak: Right. So in that sense, we’re saying that’s not a viable strategy anymore. If we want to have an inhabitable civilization, much more deliberate concern, based on these principled propositions, is the beginning of a new way of thinking about design. So I think that’s what I would say. And again, like we’re saying, we’re using the term axiological design because it has this connotation with the philosophical study of value, but there are other approaches: ontological design, value-centered design, and a host of others, which are cited in the work. So we’re not claiming to have founded a new field, but we are raising the alarm and raising the urgency of taking a new approach to this kind of design science.
Jim: Yeah. And I agree with you that something has to be done. Or, as you say, this is just a first step. There’s an awful lot to be thought through here. The combinatorics of complexity: even at third order effects, there’s a shitload of them, and beyond that they’re really hard to predict. And there’s the issue, especially when a field is brand new, that the people setting the foundational directions don’t even know they’re inventing a big industry.
Zak: Yeah.
Jim: And they don’t have any extra resources. And how do you do all that? There’s obviously lots of questions, but this is really important. And to your point, if we don’t figure out some way, we’re just letting the local hill climber of maximizing money-on-money return drive us to wherever that’s going to be. And there’s a really good chance that is not a good place. Yep.
Zak: That’s exactly right.