The following is a rough transcript which has not been revised by The Jim Rutt Show or Alex Ebert. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Alex Ebert. Alex is an American singer-songwriter and composer. He is best known for being the lead singer and songwriter for the American bands Ima Robot and Edward Sharpe and the Magnetic Zeros. Welcome, Alex.
Alex: Thank you, Jim Rutt. This is an honor. Been listening to you for a long time.
Jim: Oh, cool. Don’t you have anything better to do with your fucking life, dude?
Alex: I ask myself that.
Jim: Anyway, I know Alex from this most amazing email list that we both belong to, all kinds of cranks and smart people and weirdos and stuff, and lots of interesting conversations. And in response to something on the list, Alex sent a link to one of his Substack essays, and I read it and I go, “Damn, I think that’d be fun to talk about on the podcast.” So, I sent him an email and he graciously agreed to come on.
Alex: Yeah. Yeah. Yeah, I’m excited to talk about it. It’s something that relates directly to the art world and production in general.
Jim: Indeed. And it caused lots of cascades of thoughts when I read it. That’s why I pinged you. And just for online coordinates, you can find more thoughts from Alex at his Substack called Bad Guru. As always, that link will be on our episode page at jimruttshow.com.
The name of the essay we’re talking about is Suboptimal Revolution, and you wrote it, or at least published it, on June 23rd, 2023. Why don’t you start at the very top with the idea that suboptimization is actually important?
Alex: Yeah. We can reverse engineer this by considering what optimization does.
Jim: All right.
Alex: “I think optimizations across any given domain homogenize the outputs of that domain, as optimizing a process eradicates the processual inefficiencies and incidental deviations that produced domain variety in the first place.” So, right there, I’m just reading directly out of my essay, since I didn’t know we were going to talk specifically about that essay, but I’m glad that you’re bringing me back to that.
So, what I just described basically is, in some ways, the full collapse of the spatiotemporal parameters of phenomenology, if we were talking about optimization at its apotheosis. When we get to a place where we are fully eradicating the inefficiencies and deviations from energetic efficiency of any given process, we are effectively creating a homogenous output that no longer has outputs which are deviating from the norm.
And so, then when we think about, well, okay, if that’s what optimization leads toward, how do we get a variety? How do we get surprise? How do we get the phenomenon of surprise, the prediction errors that produce that sense of, oh, this feels new and original and interesting? Well, obviously, then there is something suboptimal happening, and that’s where I get to the notion of suboptimal revolution.
Jim: And then, back up just one more level. The context was a bit of a discussion about genius versus, and/or, democracy.
Alex: Yeah, genius versus democracy. Let’s continue on from, when you technologically optimize a given process, you make that process require less effort, less energy, and less know-how, while delivering the same effect. And so, technological optimization, of course, I’ve seen this in music, suddenly now, “musicians” are given the tools of music production, such that they no longer have to master any instrument.
In fact, they no longer have to master the recording process. Everything is now pre-recorded. You get these sample packs that already come preloaded with every single thing, every instrument under the sun already at its maximum dynamic range. And so, you end up with a process that negates the becoming to land you immediately at being. Or put in other ways, that negates the doing to arrive at the product. Negating the spatiotemporal experience of becoming a master or whatever.
Jim: And let me jump in here if you don’t mind. In game B terms, we would talk about that as it’s collapsing the dimensionality of the process.
Alex: Exactly.
Jim: Way back yonder in 1500, you’re a troubadour: you’ve got to write, you’ve got to sing, you’ve got to play, you’ve got to tell jokes. And when you’re seducing the ladies, you’ve got to keep from getting killed by their husbands. You had a whole series of skills that you had to have to be a successful troubadour back in the day. But as each of these things gets automated, it requires less and less.
Your beats come from a drum machine instead of a drummer. There’s going to be less variation, and it’s going to be less interesting. Maybe more precision. Probably more precision. It’s the same thing every time. Amazingly, I don’t know how I happen to know this, but there’s a thing called Auto-Tune, which allows singers to be put on tune when they sing, even if they suck at singing on tune.
And yet, again, some of the most interesting effects come from a singer like Janis Joplin, for instance, who had this very interesting, just slightly off-tune style of singing, which is the essence of Janis Joplin. That’s why we loved her. But if Janis Joplin had been run through Auto-Tune, nah, it wouldn’t have been Janis Joplin.
So, I presume that’s approximately what you mean when you warn against, or I don’t know if warn is the word, but start to say that optimization squeezes the variety out, reduces the dimensionality.
Alex: Absolutely. Absolutely, it does. And also, optimization facilitates the democratization of a given domain because it makes it so much easier. So, now anyone can sing. Anyone can be a musician. So, the relationship between homogeneity, or the war on genius, and democratization is very interesting. And there’s a distinct relationship there, almost causal relationship there, where you can trace optimization to democratization, to homogenization.
Which, of course, homogenization, if we’re going to define genius, for instance, as something which surprises you or feels unique, like a unique contribution to a domain, what we’re really talking about there is something that exceeds your expectations, that lies or falls outside of your predictions of a given product or output or thought. And so, when we’re talking about genius, we’re closely aligning whatever we consider to be genius with prediction errors, with things that we’re not predicting. Surprises, things that fall outside the expectations zone.
And so, we can trace all of that homogeneity. If we’re experiencing products which are the same, or iterating on the same thing, then we’re not going to experience genius. Because we’re not going to experience surprise, or something that falls outside of our expectations.
Jim: Well, let me do a partial pushback here, just a probe question. So, let’s imagine here, we’re now in the age where your instruments, your beats, your tuning of your voice even are all taken care of for you, so that there is democratization of the final creative process to produce the digital artifact that is now the music. However, someone still has to write. Of course, the LLMs have started taking some of this away.
So, maybe one could argue that eliminating the other dimensions allows more degrees of freedom in, let’s say, the songwriting domain, or the lyrical domain, or the ideas that are being expressed. Because the number of people who can actually master an instrument and actually master singing is small. That’s a real talent. Maybe one in 1,000 have some reasonable skill at either one. And when you put the two together, it might be one in 100,000, assuming some positive covariance.
So, that basically prunes the subject matter of the writing and the ideas to those people who are also skilled at an instrument and skilled at singing. And I’m sure there’s a positive covariance with the other aspects, but it prunes out a whole bunch of people who might be better at the writing and the ideas. So, I’m just going to throw that idea back as a partial pushback.
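For reference, here is a back-of-envelope version of that covariance point. The one-in-1,000 rates are Jim’s rough figures, and the correlation value is purely illustrative:

```latex
% Back-of-envelope: skill A (instrument) and skill B (singing), each at rate p,
% correlated with coefficient \rho. All values are illustrative.
\[
P(A \cap B) \;=\; p^{2} + \rho\, p(1-p),
\qquad p = 10^{-3},\ \rho \approx 0.01
\;\Rightarrow\;
P(A \cap B) \;\approx\; 10^{-6} + 10^{-5} \;\approx\; 1.1\times 10^{-5}
\]
% Independence (rho = 0) would give about one in a million; a little positive
% covariance brings it to roughly one in 100,000, as in the conversation.
```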
Alex: Well, let’s say, so you would think that there would be a relationship between the plurality of participation in a domain, or the breadth of participation in a domain, so that when you have so many more individuals participating in a domain, you thereby have so much more flourishing of a pluralistic output of idiosyncratic product. Because you have more individuals participating. But actually, for whatever reason, that’s not the case. And I have some notions on why that’s not the case. But for the most part, the opposite is the case.
The more people you have participating, it’s the decision-by-committee problem. The more people you have participating, the more everything converges on an average. Because it’s still being modified, there’s a feedback between, obviously, creation and consumption. And when consumption is providing feedback for the creation and the creative tools, and then the creative tools are being given to the very consumers who are producing the content, then what you end up with is a feedback loop that converges on an average. And so, you end up with actually a more banal homeostatic outcome.
Jim: Yeah, it makes sense. Though, again, a challenge here. It may not have that much to do with the optimization of the processes, as we talked about prior, but rather the fact that we’re in a late-stage world that we call game A, where everything collapses to money-on-money return. And the players have gotten better and better at metrics and understanding cause and effect.
And then, even more bizarrely, we’re creating the fitness landscape in which this operates. And as it turns out, at a macro level it’s more efficient for money to have a few big peaks than to have lots of medium-sized peaks. So, maybe it’s the competitive financial dynamics of our artistic landscape that cause the homogenization, more than the optimization of the actual process itself. Just a thought.
Alex: Extrinsic motivation. There are actually studies on this: extrinsic motivation, of course, homogenizes outputs. So, when we have intrinsic motivation, and we don’t care about what our audience is going to think, or we don’t care about making money or whatever, then the products of our creativity tend to diverge from homogeneity, and tend to produce something more idiosyncratic, more interesting, because we don’t care. We don’t have the status anxiety, or whatever the extrinsic motivator is, that ends up collapsing our idiosyncratic outputs.
But I think that yes, we can blame capitalism to a certain extent. For instance, the IPO-ing of all of the major film production houses. But then, we have to trace back that those movie houses are looking at consumer trends to predict what movies to make, on the basis of what are basically public algorithms. So we have to, at some point, blame ourselves, blame the people themselves, for essentially having thoughts that converge on homogeneity.
And this goes back to a more fundamental conversation about the very apparatus of cognition, which is to converge itself on automaticity, on a state of automation where we are in one-to-one symbiosis with our objective environment. Such that we have a subjective map, which matches it exactly to minimize the free energy and minimize prediction errors. And so, when we actually then come to blame ourselves, it actually could be empowering as opposed to blaming the capitalist apparatus, which I agree is definitely part of it.
But I would say that it is a predatory aspect of the more fundamental problem, which is the very way that our minds operate, which is to converge on equilibrium.
Jim: Interesting. Let’s look back in history a little bit, to when I talked earlier about the troubadour era, way back when. In those days, say in 1500, there were no worldwide many-to-many communications, and no mass media at all to speak of. There was not the same tendency to homogenize, even if there had been democracy.
Part of it’s the communications technology and the ability for somebody in France to impact what the music sounds like in Italy. There was a little bit through multipoint propagation. But in general, there were many, many little fitness landscapes that the music people in different regions liked and listened to, and I would presume far more variety.
Alex: That’s part of the game B protocol, to re-implement, to re-tribalize the membranes and allow idiosyncratic pockets of difference to arise. Difference, disequilibrium, was the very premise, of course, of interestingness, but also of cosmogony and life itself. And so, yeah, we do need difference to experience phenomenological genius or surprise, or interestingness and so on.
We can also take it out of the domain of arts and just think about back in the 1500s when you wanted to get warm, you made a fire. You had to know how to build a fire. Now, you have central air and heating and you just flick a switch. Pretty soon, technology is going to collapse even that effort, and our central HVAC will probably be connected in some sense to our biometrics. And when it senses, I don’t know, core temperature dropping, the heat will come on. And we won’t have to do anything. So, it’s just moving us further and further toward dimensional collapse.
Jim: Where the number of dimensions that we navigate in our life becomes fewer and fewer. Things are done for us. Back in the day, not that long ago, my mother grew up on a farm in Northern Minnesota with no electricity, no indoor plumbing. They were basically subsistence farmers. They grew their own food, went hunting, made their own clothes. Today’s modern person can’t do any of those things, so the dimensionality of their life has collapsed a lot.

They go to Amazon or they go to Walmart. Vastly different from the amazing set of skills of the people who grew up on these subsistence farms, within living memory. How old would she be now? She’d be 95 if she’d lived. So, in theory, she could have been alive today, and yet she grew up in an epoch where all those skills were part of a relatively normal American existence.
Alex: Yeah, the dimensional collapse is pretty wild. I’ll give a couple examples, too, that I forgot to give that are in the essay, which I think speak to some of this. So, regarding feedback loops, there’s an app called Shazam. Have you ever heard of it?
Jim: I have not.
Alex: So, if you hear a song at a cafe or something, you’re like, “What is this song? I want to know what it is.” And so, you put on Shazam and Shazam detects the audio and then tells you, “Oh, it’s this song.” And then, you can find that song on Spotify or whatever. Now, the pop music industry, the radio industry now uses the Shazam data to determine what songs they should play on the radio. Now, you can see how this Ouroboros is already working.
Jim: Yeah, this is like the LLMs eating LLM output. This is exactly the same problem.
Alex: Exactly. So, you have this problem where the music that we hear on the radio becomes the music that we hear on the radio. So, it’s all just eating itself. And that problem is everywhere. You have these issues. And I think that at a certain point, we have to blame ourselves. We have to realize that a lot of historical human depth was actually a product of existing epistemic ignorance. We had to create stories and narratives. We had to invent interesting things because we had epistemic constraints that are increasingly slackened by epistemic progress.
Another interesting example that I just spotted, fascinating, was like 1972, here are the top 10 grossing movies. The Godfather, Poseidon Adventure, What’s Up, Doc?, Deliverance, Deep Throat, which is, by the way, a porno.
Jim: I saw it when it came out, 1973.
Alex: Well, it was the number five grossing movie. Jeremiah Johnson, Cabaret, which is amazing, The Getaway, which is amazing, Last Tango in Paris, in which Brando asked… You know what happens in that. Lady Sings the Blues. Now, let’s cut to 2022, 50 years later.
Number one, Top Gun: Maverick, sequel. Black Panther: Wakanda Forever, sequel. Doctor Strange in the Multiverse of Madness, sequel. Avatar: The Way of Water, sequel. Jurassic World Dominion, sequel. Minions: The Rise of Gru, sequel. The Batman, sequel times a million. Thor: Love and Thunder, sequel. Spider-Man: No Way Home, sequel. And Sonic the Hedgehog 2.
So, now, when we think about what happened, we think about two things. One is audience testing. Studios audience-test their movies to hell. I score movies, I compose for movies. When I get the first cut of a movie, it’s brilliant. The first cut of every movie that I’ve ever scored is fucking brilliant. Then, they start the process of asking everybody what they think, and it gets increasingly less brilliant. And by the end of it, I’m scoring a mediocre film. And it’s heartbreaking. That’s the democratization of a process.
The other problem is, of course, that all of these movie houses are now publicly owned. So, again, we see that there is a genuine problem that sits and resides with us as consumers. That yes, we can offload to capitalism or the general system, but there is something within us that desires a state of inertia. And of course, it’s a state that we all, in many ways, lionize as a better state. The state of meditation, the state of the zone, the state of thoughtlessness. And it is the preferred energetic state as well, of course, because it’s the state which minimizes free energy and is more energetically efficient.
And so, in a lot of ways, when we talk about the war on genius, the war on genius, I would like to locate not necessarily as external to us, but as inside the apparatus of genius itself, which is to say the brain. The genius is at war with their own brain. And I think that’s in a lot of ways why when we see examples of real genius, we see people who have chaotic lives, who are in energetic disarray. Because it is quite an energetic feat to constantly be problematizing that which you could otherwise just allow to exist within some preexisting schema.
But to say no, “What if this tree wasn’t just a tree? What if it was a boat? What if it was…” All of that is energetically costly.
Jim: Yep. There’s a famous internet meme, shows one person standing in the middle of a crowd. There’s a clear circle where they’re standing, and around them is a crowd of thousands of people. And the guy’s got his finger up, saying, “You’re all wrong.”
That’s, in some sense, what a genius does. Think of Einstein basically saying, “Oh, yeah, Newton and all you guys, you’re completely wrong. Not just a little wrong, you’re completely fucking wrong. The whole generation of physicists, up to 1905.” And there’s this one guy who could see further and really didn’t care that everybody else thought he was wrong. True genius.
Alex: Absolutely. I have a problem with Substack because the feedback loop is almost immediate. You publish something and immediately, you can tell how much gravity, how much grip it has with your audience. And it’s very difficult to not allow that to influence you in some capacity or another. And so, I have little rituals. I do status burning, where I’ll intentionally put out something that I know everybody’s going to hate or something. Because I need to loosen my own psychological grip on my caring for the feedback loop.
These days, if you can be that minority influence in the Solomon Asch sense and stand up against the group, which is all saying, “Well, this thing that is wrong is correct,” and you stand up and say, “No, actually I’m going to be the guy that says it’s not correct, and actually this is correct,” you end up tearing a crack into that conformity and providing a minority influence that can provide a burst of some free thinking.
But then, all thinking converges on thoughtlessness again, and we take it all to be written in stone. And it’s just the way the energetic apparatus of cognition works. But if we are brave enough to stand up and say, “I want to…” Because what you’re effectively doing, in a sociological sense, is saying that what is wrong is right. For instance, creatively, when I score something, in order to do something genius, you have to do what is wrong. Literally what is wrong, what is not expected, what seems off.
If you don’t do what seems off, you cannot produce something of genius. And being in a collaborative setting makes that really difficult. Because the more people that you’re collaborating with, the less likely it is that you’re going to be able to pull off something that is perceived to be wrong.
Jim: I have a name for this with respect to things like Substack and even podcasts, which is audience capture. I won’t name them, but some people I know have, for various idiosyncratic and historical reasons, ended up with an audience that wasn’t particularly aligned with their own deep values. Well, guess what? Five years later, when they’re making their living from this, they have become that. And I suspect it’s unconscious, as well as conscious, that they now have been fully captured.
Upton Sinclair said… What was it? “A man can’t disagree with the thing that pays his salary,” essentially. And those of us who are out there in the public view a little bit have to really work hard to avoid audience capture. Frankly, I’m a cast-iron son of a bitch, and I don’t care what anybody thinks, but I’m rare. And it sounds like you’re also self-aware of the fact that this risk is out there, and that we all have to work hard not to allow ourselves to be captured by our audiences.
Alex: Yeah. Do you think though, Jim, that maybe you feel like you have to be a… What did you just call it? A-
Jim: Cast-iron son of a bitch.
Alex: Cast-iron son of… Do you ever feel like, “Well, geez, I better be a… Today, I don’t feel like being a cast-iron son of a bitch, but I better act like one?”
Jim: Actually, this is a discussion I had yesterday with a good friend of mine, and she was critiquing the positioning of the heterodox. Too many of your heterodox dudes are just against everything, right?
Alex: Yeah.
Jim: And I will say, that is a failure mode. I probably have a small bias in that direction. The quote that I wrote that I am most proud of is, “The crowd is an ass.” I could find no previous references on Google, and I believe that pretty deep in my heart. So, if you just react against whatever the majority says, you’re going to be right more often than not. But it’s far from the optimal algorithm.
The crowd is right fairly often, so I guess I probably have a small bias towards being more of a cast-iron son of a bitch than I should. But my actual cognitive strategy is to be correct, to come to the correct decision, to weigh the evidence as dispassionately as I can, though of course none of us can ever be truly dispassionate, and ask, “What is correct, what is incorrect in this situation, within the context that I’m trying to evaluate?” Rather than collapsing into tribal alliances or genre restrictions.
I read a lot of books, probably 100 a year, and I read them in all kinds of different genres. I listen to a lot of different kinds of music. Not every kind of music, but a lot of different kinds of music. And so, I do think that I do a pretty good job of not collapsing into this flatland thing that you’re talking about. But I also know that’s rare as bumblebee fur, right?
Alex: It is. But you know what? I think there’s a point of shared experience that everyone listening can grok readily, which is that the suboptimal aspects of most media or mediums end up becoming what Brian Eno, I think, referred to as the signature of that particular medium.
So, if we think of the distortion of a guitar amp, which was not initially designed to distort, or if we think of the grain of film, which now digital applications attempt to replicate, or we think of the hiss of tape, or the crackle of vinyl, all of these suboptimal qualities end up being the signature of those things. And we end up fetishizing the very suboptimal aspects of those.
And in a culinary sense, we’re more prone to really appreciate this. For instance, if you know what you’re doing with pasta, you would prefer pasta that was extruded out of a cast-iron die, as opposed to aluminum or some other more efficient process. Because the old dies create perforations on the spaghetti that end up holding the sauce better.
Or you can come up with a million of these examples. For instance, the experience of really seeing paintbrush marks on a painting, as opposed to a placid printout of a painting, which you understand to be a poster and not remotely as valuable. Because it doesn’t have that idiosyncratic time and place to it. It has more of an immaculate origin, as opposed to a tangible origin. And we want cheese from a really artisanal maker, and so on and so forth.
So, there are aspects where we really implicitly understand this. But then, there’s a lot of aspects where we really just can’t see it. And I imagine an interview in the future with artists and they’re like, “So, tell me about the experience prompting this song. What was that like for you?” Is that a sufficient scope of experience to be able to even relay in an interview? Like, what it was like to prompt ChatGPT to write your song? I don’t know if that’s worth an hour of conversation.
Jim: Yeah, probably not. You can probably answer that question in five minutes. “I had an idea. I had ChatGPT write a song for me actually, and it was basically just a satire on millennials.” And I took the lyrics for Don’t Let Your Babies Grow Up to Be Cowboys, put it in ChatGPT and said, “Rewrite this so that its focus is don’t let your millennials grow up to be hipsters.”
Get across the idea that you probably won’t have grandchildren if you do, because those people have forgotten how to fuck. That’s the prompt I gave it, essentially. And I did a B-minus job. If I’d spent another half-an-hour on it, it might’ve been okay, actually.
Alex: Well, you know what? You know what’s funny is that I’ve been playing around. So, by the way, when we think about that Brian Eno quote and that the suboptimal quality of any media becomes its signature, we would think then that the signature of ChatGPT and LLMs generally might be their hallucinations.
And in fact, when we look at the visual image outputs of ChatGPT and whatever, Midjourney and whatever, it is indeed their hallucinations that are the most interesting. If you really have an eye for art, the hallucinations are the most interesting aspect of most of their outputs.
But also, ChatGPT is good at writing bad lyrics, and bad lyrics are very often very compelling. So, the suboptimal quality of my interactions with ChatGPT and Midjourney and things like that has, so far, been its signature, its most compelling aspect.
Jim: In 2023, I wrote a quite complex program to create movie scripts, full-length feature scripts, using LLMs in the loop with the humans. But the humans had to touch it like 40 times. And under this scenario, the humans nudged and curated and changed a little bit. But you are right, some of the most interesting things to pop out of this were like, “All right, the scene structure this thing came up with is fucking weird as shit, but it’s actually way more interesting than a sane person would’ve come up with.”
Alex: Exactly.
Jim: That’s where curation, of course, is important. Because as we know, there are infinite ways to do things badly. So, just doing things randomly does not produce a high yield of goodness, but randomness plus curation can.
Alex: Yeah. Well, humans are equipped with the rare capacity to spot good bad things, good prediction errors, good hallucinations, mainly because we’re able to spot these and categorize prediction errors as either good or bad, as either worthwhile or not. And I think about this with regard to technology: our experience with ChatGPT, or our experience with film and its grain, or the snap, crackle, and pop of vinyl, or the hiss and saturation of tape, all of these suboptimal qualities of given mediums. How can we then design for these sorts of things in the future?
Can we make technologies? Can we actually have a suboptimal revolution technologically that ends up facilitating more of… Can we have a version of Waze or Google Maps, which intentionally puts you on the wrong road, or makes you take the scenic fucking route? Can we rediscover those spatiotemporal parameters of the dynamics of living? How can we dimensionally expand, create dimensional expanse by interacting with the technology?
Jim: Yeah, that’s just a brilliant idea. Now, this is going to require a lot of education, I think, for people to understand this concept. If you said, “Oh, yeah, here’s your Google Maps, but it will give you the wrong directions 5% of the time,” and somebody takes that to the VC, that guy goes, “What the fuck is wrong with you, dude?”
I often talk about cognitive sovereignty, that we want to have control of our own attention, our own thinking, et cetera. But everything about the world today wants to hijack our attention, hijack our cognitive sovereignty. Before a person is going to be ready to adopt a Google Maps with a 5% built-in random error rate, they’re going to have to learn how important their cognitive sovereignty is.
Would you agree with that? And then, if you do, how would you think we might go about illuminating enough people to be an interesting community/market who actually like this?
Alex: Well, I think that sovereignty is a great word. That’s a buzzword. Individual sovereignty is this idea that you have some kind of political sovereignty or whatever, that you can make your own choices. But I think when people really understand that even when they think they’re making their own choices, if they’re making those choices automatically and converging on non-dynamical processes, or processes that do not trigger surprise, if you are not problem-solving, you’re probably not experiencing your agency as a cognitive being, even if you do have agency.
You don’t experience your cognitive sovereignty until you’re actually problem-solving, or you encounter a problem. Even if you think you’re cognitively sovereign, if you’re in a state of automaticity, if you’re automatically interacting with everything, you have no idea, or you cannot phenomenologically experience your sovereignty. Because everything is in a state of absolute automaticity. You only really experience your sovereignty when you’re presented with choice, when you’re presented with problems. And so, I think we can link sovereignty to problem-solving, or sovereignty to the creative process. Then, we can produce a desire or even a market for that 5% deviation Google Maps.
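For what it’s worth, the selection logic behind that “5% deviation” idea fits in a few lines. This is only an illustrative sketch, not any real mapping product or API; the route names, travel times, and the 5% figure are all placeholders:

```python
import random

def choose_route(routes, detour_prob=0.05, rng=random):
    """Pick a (name, minutes) route. Usually the fastest one, but with
    probability `detour_prob` deliberately return a slower alternative,
    i.e. the '5% deviation' maps idea from the conversation."""
    fastest = min(routes, key=lambda r: r[1])
    alternatives = [r for r in routes if r is not fastest]
    if alternatives and rng.random() < detour_prob:
        return rng.choice(alternatives)   # take the scenic route
    return fastest

# Illustrative candidate routes with travel times in minutes.
routes = [("highway", 22), ("river road", 31), ("old mill lane", 38)]
print(choose_route(routes))
```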
Another way to do the Google Maps is to just have a dice icon on it, where you just say, “Roll the dice. Take me somewhere. I don’t know where it is. I don’t want to know where it is. Take me on a fucking journey, and let me figure this out as I’m going.” I think that that’s why people today, we have fetishized it and we keep it in a little place. It’s like, what are these things called, where you go into the haunted house and you have to figure your way out or whatever the-
Jim: Oh, escape rooms. Yeah.
Alex: Right. So, we understand the idea, the allure of using your fucking brain in a sovereign way that doesn’t immediately render you in a state of automaticity, which basically means you’re temporarily a zombie. And so, I think that we can produce that desire, or at least culturally understand the allure of that, but it’s very difficult to resist immediacy. I find it difficult to resist immediacy as well.
So, I think that the reason why, not the main reason why I interact with GPT, but my favorite way to interact with it is to give it something that I’m hoping… And by the way, you can always set your temperature parameters much higher and create more randomness with GPT, and explicitly ask it to send you highly-temperature, parametered responses.
And when you get something that surprises you back, it’s quite fascinating and interesting. And you can interact with technology generally in this sovereign way. Because like you said, then you are there. A sovereign response to an error is one that says, “This error is actually good.” An unsovereign response to an error is one that says, “Well, this is an error, so it is an error,” and isn’t able to see anything or make anything else out of it.
So, I think that we can interact in a way that produces sovereignty. But we would have to have some innovations that actually had this in mind, and were willing to risk the capital to produce it.
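For concreteness, here is roughly what “setting the temperature much higher” looks like against a typical LLM API. A minimal sketch assuming the OpenAI Python SDK; the model name and prompt are placeholders:

```python
# Sketch only: assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write four deliberately 'wrong' lyric lines about an orange.",
    }],
    temperature=1.5,  # higher temperature -> more randomness, more chance of surprise
)
print(response.choices[0].message.content)
```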
Jim: Yeah. Before we go into some examples and talk about suboptimal tech, I want to come to what I thought was a nice little phrase that you created, and then an interesting riff upon that phrase, which was… And this explains the other attractor. This is the attractor away from suboptimality. “I may be inert, but I have inertia.” I love that line.
Alex: Yeah. Not to get too psychoanalytical here, so I promise you, I’ll stay away from the psychoanalytical metaphysics of inertia. But when we think about the allure of immediacy, we have to understand the allure of inertia, and how inertia is tethered to the sensation of immediacy and the sensation of eternalization. A sensation where, let’s say, the parietal lobe is now shut off and you lose your sense of time and space. And it’s a wonderful sensation, like when you’re in the cell or whatever.
And yet, when we talk about sovereignty, we talk about dynamics. Really, this is a question of dynamics, is that what is inert is non-dynamical. Now, it does have inertia. The non-dynamical state has inertia. But it’s a very tricky irony that what we’re always after is a non-dynamical state. In the sense that what we’re always after is an arrived state, or a state that gives us a sense of inertia, but renders us inert. And it’s that terminal velocity of being that we’re always after.
So, whether it’s, “I’m a doctor now,” or, “I’m a rock star, I’m this infinitive, I’m that infinitive,” the object of our desires always seems to be this inert state. And it just so happens that thinking always wants to converge on the inert. So, what is this orange? It’s an orange. What is an orange? An orange is an orange. That’s the most efficient way to deal with an orange. The least efficient way is, every time you see an orange, you go, “What is this?” No, that’s a dynamical way to interact with an orange. But instead, we want the non-dynamical way. We want the pre-schematized way.
And so, “I may be inert, but I have inertia” is the premise of Freud’s notion of drive, in my view, which is that “all living things want to return to an inert state.” But what he didn’t understand at the time, I think, because we just didn’t understand cognition that well back then, is that not only do all living things want to return to an inert state, but also that all dynamic processes want to converge on non-dynamism, because it is more efficient.
Jim: Well, also, you can go even one step deeper and ask, what is the actual physics purpose of complex systems? It’s to dissipate energy more effectively. Go back to Prigogine and some of that great work about non-equilibrium dynamics. It’s an unfortunate truth that the universe is just trying to burn itself down and reach heat death more rapidly. And we’re part of that.
Alex: I think-
Jim: One of my favorite little factoids is that even a tree puts out more energy per kilogram than the sun puts out per kilogram. And what a human brain puts out in energy per kilogram is tens of thousands of times the energy flux of the sun on a per-weight basis.

So, essentially, all this cool complexity that we think we’re part of, because it’s nifty, and consciousness and creativity, is just a way for the universe to accelerate its heat death a little bit. And hence, if we don’t try hard, the universe wants to just be absolutely as efficient as it can be in dissipating its energy.
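Rough numbers behind that per-kilogram comparison (standard order-of-magnitude figures, not from the conversation):

```latex
% Specific power (watts per kilogram), order-of-magnitude estimates:
\[
\frac{L_{\odot}}{M_{\odot}} \approx \frac{3.8\times10^{26}\ \mathrm{W}}{2\times10^{30}\ \mathrm{kg}}
  \approx 2\times10^{-4}\ \mathrm{W/kg}
\qquad
\frac{P_{\mathrm{brain}}}{m_{\mathrm{brain}}} \approx \frac{20\ \mathrm{W}}{1.4\ \mathrm{kg}}
  \approx 14\ \mathrm{W/kg}
\]
% Ratio: roughly 7 x 10^4, i.e. the brain dissipates on the order of
% tens of thousands of times more energy per kilogram than the sun.
```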
Alex: And so, this brings us back to your first question. It was like, oh, well, maybe this problem of the war on genius and the homogenized products and so on is an effect of hypercapitalization. Or we could have just answered that and said, “Well, it’s also maybe just an effect of the entropic compulsion of the universe toward universal heat death.” There’s a real fundamental answer to some of this stuff, and it’s ironic. It’s strange that they all line up.
Jim: Fortunately, fortunately, because humans have emerged high enough up in the stack, and other animals too, but particularly humans, we can fight back against this. In the same way that life is fighting back against the second law of thermodynamics at some level, producing negentropy within membranes that do interesting things, but, oh, by the way, burn energy faster.
We have sovereignty now. Free will, whatever the fuck that is. That’s a different topic for a different day. Whatever it is that we think we have that’s free will, we can, if we choose, produce disorder, produce suboptimization, and to use your term, escape from inertia and such. But we have to do so thoughtfully and with will, it seems like. Because it is true, the big forces in the universe are looking for inertia.
Alex: Yeah. Well, I’d be curious to hear your take on this. Because I always still struggle with this idea: what you just said makes sense, but then it’s also self-contradictory. The idea that we can create disequilibria, which is basically to say, create entropic bursts, which, even though they produce negentropic states by creating these newnesses, these differences, themselves then converge internally on equilibrium. So, it’s like every negentropic innovation, or every negentropic moment, is itself in service of the overall slide into entropy. So, I wonder what your take is on that, generally.
Jim: I have a very firm view on that. It’s all about timing, all about timing. In the long run, the universe is dead, heat death. But within epochs of, let’s say, a million years, we can definitely push back against entropy. Life is not really an equilibrium. Life’s in a meta-stable quasi-equilibrium, say, within an organism. It’s not really static at all. And if the dynamics stop, guess what happens to the organism? It dies.
So, the universe, while trying to crush heterogeneity, isn’t that strong at doing so. In billions or trillions of years, it wins. But within the timeframe that’s of importance to humans, call it a million years, we have plenty of ability to push back against these universal forces.
Alex: Okay. But then, let me rephrase the question. Given the energetic trade-offs of negentropic innovations, could we possibly say that even though we’re able to create negentropic moments, those negentropic moments may, in fact, be accelerating entropy globally?
Jim: In fact, that would be exactly… Think of life. It’s certainly accelerating actual entropy at the global level, while producing negentropy within itself.
Alex: So, we’re actually… This is the interesting thing. The more we try to create difference, the faster we converge on homogeneity.
Jim: Well, at least the faster we accelerate the heat death of the universe, but that was my point about timing. The heat death of the universe is probably hundreds of billions to trillions of years out. And yes, life is accelerating the heat death of the universe. Human brains are accelerating the heat death of the universe more so than bacteria or trees. And that’s okay, because we’re producing interesting things within the temporal epoch-
Alex: Sure. Sure.
Jim: … that matters to humans. So, it doesn’t really bother me. In fact, it’s probably good that we’re able to generate things that are meaningful, surprising, within a human lifetime even. It’s nothing on the scale of the universe’s grand dissipation of energy. And even a million years is essentially nothing. So, if we assume humans have a million years to do cool stuff in the universe, we’re allowed to accelerate the heat death at the far end, because we won’t be around. Fuck it, right?
Alex: Well, that makes sense.
Jim: It’s like, yeah, I’ve thought that one through, actually. Because people say, “Oh, that just accelerates the heat death of the universe.” I go, “I don’t care. It’s a long time from now.” We are not a thing that’s likely to be around for more than a couple million years, and that’s if we behave ourselves. And so, anything beyond that, fuck it. Now, let’s move.
Like I like to do in this show, let’s move from interesting theory to some, perhaps, more tangible examples. We talked about the… I loved the Google, this 5% wrong. Or actually better, “Take me to where I want to go.” This would be interesting. Something that tracked everything I had done in my life and had some very interesting LLM-like transformer technology, but intentionally had a significant surprise effect.
And you basically, on Saturday morning, you wake up, “Take me where I want to go.” And it just takes you someplace that you would not expect-
Alex: I like that.
Jim: … and just you have an adventure. That would be cool. What are some other examples to your mind of suboptimal tech that could actually apply to things like film or music, or painting, or things of that ilk?
Alex: Well, baking chance, I think, into any process is helpful. And also, constraints. So, when we’re talking suboptimal, I think that the go-tos for me, and by the way, I’m just thinking about this extemporaneously, are chance, which is what you just pointed to, and which is the 5% Google Maps error, or the, “Take me where I want to go,” roll-the-dice thing. “I’m Feeling Lucky” on Google, if we remember that. I don’t know if they still do that.
Jim: I don’t know if they do either. I never used it, I can tell you that. That’s an interesting point.
Alex: Yeah. Yeah, that’s something. But I think there’s chance, and then, of course, constraints. Constraints are an interesting and tricky one. How do you produce a compelling app that has constraints built into it? Let’s say it’s a music app, and the only thing you can do with it is modulate a given sine wave. How would you, as a musician, compel yourself to use that as opposed to the everything trick?
One thing that I always use as a constraint is time, and I’m not sure how to bake time technologically as a constraint into apparatus. You could have a countdown, where you start using a music app and it starts counting down 10 minutes. And you have 10 minutes to write the song, and then the app turns off for 24 hours. That would be a good one.
Jim: Let me give you another example. This is one of my favorites. I’ve proposed this for 25 or 30 years, and nobody’s ever taken me up on it. Think about something like “social media” or its ancestors, bulletin boards, or even mailing lists, like the one we both participate on. I have long proposed that the author of an opening topic be able to set the rapidity at which people can reply.
We all know the phenomenon where one person’s got nothing fucking better to do, replies 12 times in an hour, and derails the conversation into their pet little rabbit hole. If I were to start a conversation which I wanted people to be more thoughtful about, and to your point about time, that’s what triggered me to remember this, I could say I only want any given person to be able to reply twice a day. Period.
And so, if they want to unravel their rabbit hole, instead of 12 times in one hour, it’s going to take them six days to get into the rabbit hole. And oh, by the way, that gives the larger community of people a lot more ability to resist being rabbit holed. That’s just another example, very similar to the one you gave about music, where you could use temporality as a way to change the dynamic landscape.
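To make that mechanism concrete, here is a toy sketch of a per-thread reply limit of the kind Jim is describing. Everything here (class name, defaults) is illustrative, not a feature of any existing platform:

```python
import time
from collections import defaultdict, deque

class ReplyThrottle:
    """Toy per-user reply limiter for one thread: at most `max_replies`
    per rolling `window` seconds, as chosen by the thread's author."""

    def __init__(self, max_replies=2, window=24 * 3600):
        self.max_replies = max_replies
        self.window = window
        self.history = defaultdict(deque)  # user -> timestamps of recent replies

    def allow(self, user, now=None):
        now = time.time() if now is None else now
        recent = self.history[user]
        while recent and now - recent[0] > self.window:
            recent.popleft()               # drop replies that have aged out of the window
        if len(recent) >= self.max_replies:
            return False                   # over the author's limit, hold the reply
        recent.append(now)
        return True

throttle = ReplyThrottle(max_replies=2)    # the topic author's setting
print(throttle.allow("alice"), throttle.allow("alice"), throttle.allow("alice"))  # True True False
```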
Alex: Yeah. And I think another one that occurs to me, and these are all… I think this is a third one. So we have constraints, we have temporal constraints, we have chance. And then, I think we have cognitive loads, or even physical loads, to produce what Alexander Bard, who, of course, likes curating the list we belong to, would call Bard Absolutes. But basically, membranes. Essentially a doorman, which would create a natural hierarchy.
And coming back to my issue with democratization: the issue of mastery produces a natural, implicit hierarchy. I’m not a fan of explicit, fabricated hierarchies or inherited hierarchies. But implicit, natural hierarchies which arise from know-how or mastery, or 10,000 hours or whatever, those are hierarchies worth looking into. And if we created, for instance, I don’t exactly know, but say you can’t reply to this email until you’ve read these five pages and then run a lap around your block. And we’re using your phone’s accelerometer and geolocation to confirm that you actually did these fucking things. And they could be inane things. It’d be like, you have to stand on your head and wave your arms.
As long as there are blocks to graduation, essentially. Because otherwise, we don’t graduate to anything, to bring up the language of Lex Stein. And I’m not sure if I’m using it the right way, but my sense is that graduation only occurs at thresholds. And if everything is immediate, you don’t have those thresholds. So, we could create, I don’t know what we would call them, but blockers, essentially, that people would have to resolve in order to proceed.
Jim: Yeah. So, basically, it’s like the equivalent of running with 10-pound weights around your ankle. It makes you stronger.
Alex: Yeah. Or you can only go to this party if you also go to the church. That’s why I used to like playing that game with my daughter, geocaching. But you can only go to the party, or go to this thing, if you go to this one place and grab the little toy inside the container underneath the stone or whatever. Just things that, again, expand the spatiotemporal phenomenological parameters, aka experience, so that not everything is immediate.
Now, maybe that last idea sounds a little too baked up and unnatural, so maybe it’s too hokey and playful. But I think the first three we have so far are good. Yeah. I’ve thought about, we do have certain membranes. I don’t particularly like them, but we have paywalls on Substack and so on, and supposedly those create internal communities. But that’s not a cognitive load. That’s the press of a button. So, looking for more things that increase our sense of surprise.
Jim: Yeah, and effort.
Alex: Effort sovereignty. And really tying effort to sovereignty, I think, is a key here.
Jim: Okay. I think we have had a really interesting conversation. I really want to thank Alex Ebert for a very interesting, different way of thinking about what’s going on in our cultural landscape.
Alex: All right. Well, thank you so much, Jim.
Jim: Okay.