The following is a rough transcript which has not been revised by The Jim Rutt Show or by Melanie Mitchell. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Melanie Mitchell. She’s a professor of computer science at Portland State University and the Davis Professor and Co-Chair of the Science Board at the Santa Fe Institute. Melanie has also held positions at the University of Michigan and Los Alamos National Laboratory. She’s the author or editor of seven books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems, including her latest book, Artificial Intelligence: A Guide for Thinking Humans.
Jim: We talked to Melanie at some depth in EP 33, so if you want a shortcut to that book, check it out. She’s also the author of Complexity: A Guided Tour, which is probably my favorite accessible book for people who want to begin learning about what this complexity science thing is. A really excellent book I can highly recommend, and I’ve recommended it to lots of people who wanted a reasonable introduction to this fascinating field. So welcome, Melanie.
Melanie: Thanks, Jim. Great to be here.
Jim: Yeah, it’s great to have you back. I think this is the third time you’ve been on our show. You were on with Jessica Flack once where we had an interesting discussion about one of your papers, plus the in-depth EP 33. So it’s great to have you back. Always enjoy talking to you. Always learn something.
Melanie: Great. I hope so.
Jim: I certainly learned something in the last couple of days as I’ve gotten ready for this episode. Today, we’re going to talk about a recent paper Melanie published on the arXiv server. And as usual, links to the paper will be on our episode page at jimruttshow.com. It’s titled Why AI is Harder Than We Think.
Jim: So let’s start out with the word AI. From the context of the paper, you mean not just AI in the simple-minded sense of the Nest doorbell or some damn thing, but something like strong AI or AGI or human-level AI. So what do you mean when you say AI is harder than we think?
Melanie: Right. So I was using the term to mean something like what the founders of the field meant when they invented this whole field back in the 1950s to mean being able to do tasks or activities that really define intelligence in humans that are more human level. The kinds of things that you might see in movies, like having an intelligent housekeeper robot or some kind of assistant, like in Star Trek, that’s able to answer any of your questions at all. These have been the fantasies that have driven a lot of the field for a long time.
Jim: And certainly motivated most of the people I know in the field, right? You get them drunk at 11 o’clock at night and it’s, “Oh yeah, the reason I got into AI was for strong AI, not for pushing the numbers up on some ImageNet benchmark [inaudible 00:02:53] or something.” They really got into it for this big AI.
Melanie: Right. To make sense of what intelligence is, and to create things with what you might call true intelligence, whatever that means.
Jim: One of my more humanities-oriented friends, actually, who’s very smart, said he’s convinced the reason so many of these folks are obsessed with AI is that they want somebody new to talk to.
Melanie: Well, they haven’t gotten there yet. Although we can talk about GPT-3 a little later if you want.
Jim: Yeah, we’ll do that. As I was mentioning in the pre-show, I was just playing with GPT-3 this morning. And as usual, it’s so weird. Some things are so cool and some things are so demented.
Jim: Anyway, you open up the paper talking about something that has been in the news quite a bit, and is a fine example of AI being harder than we think. While self-driving cars, at least the way we’re currently attacking them, aren’t quite the same as a computer that can answer any question or even make the bed in a hotel, they’re still pretty powerful AI. The dream of the level five self-driving car. And by the way, we did talk about self-driving cars in considerable depth with Jim Hackett recently, on one of our episodes, the former CEO of Ford Motor Company. He’s a very, very practical guy, a Midwestern engineer, not easily spun up. He said, “Eh, 2030, if we’re lucky, for real level five self-driving cars.”
Jim: And Melanie points out in her paper that it wasn’t that long ago that people were saying 2020 was going to be the year. And then they quickly did some recalculations: “Oh, it’s going to be 2021.” Well, guess what? As far as we can tell, we’re not even close to level five self-driving cars, unless Elon Musk has stopped smoking his wacky tobaccy and is actually going to deliver on his constant claims that he’s just two weeks away from releasing a drive-anywhere self-driving car. Maybe you could chat just a little bit about your perception of this whole self-driving car phenomenon and how it plays into your hypothesis that AI is harder than we think.
Melanie: Well, self-driving cars are a great example, because it’s something people have been working on for many decades, and saying for many decades that they’re just around the corner. It’s not a new phenomenon. In fact, I have a quote in the paper from Drew McDermott, an AI researcher from Yale, who talks about how people are over-hyping self-driving cars, how they might not deliver on the promises that have been made, and how that might create a new AI winter. And it turns out that that quote was from 1984.
Jim: Yeah. I love that. I thought that was quite hilarious. On the other hand, there’s the famous phenomenon, I believe it’s called Amara’s Law. It’s been attributed to Bill Gates, attributed to Einstein, attributed to just about everybody, but it seems to actually have come from a guy named Roy Amara: we tend to overestimate the effect of technology in the short run and underestimate the effect in the long run. Do you think that’s true about AI?
Melanie: That’s a good question. It might be true, but the problem is we don’t know how long the long run is. And that’s one of the issues I tried to address in this paper, this notion that from the very beginning, from the 1950s on, people have been predicting that within 10, 15, 20, 30 years it’ll be here. That started in the ’50s and has continued to this day. I think Steve Pinker was the one who said AI is always 10 to 20 years away and always will be.
Jim: It’s like fusion energy. The saying there is it’s always 20 years away and always will be.
Melanie: Yes. Exactly.
Jim: Two very difficult problems that we don’t know when we’re going to solve. As I recall, and I think I’m right about this, back in 1956, after the first Dartmouth AI conference, the people wrote a little paper about it and were convinced that they could essentially solve AI in one summer with a dozen people.
Melanie: Yeah. That was the proposal for the Dartmouth workshop. It was to have 10 men in two months, as they said. And they didn’t claim they were going to solve it, just they were going to make huge progress on all of these very fundamental problems.
Jim: And then, of course, we don’t know. You reference some of the survey work that’s been done, and the estimates are all over the place, from people saying, “Well, maybe my children will see it,” to the very smart people I hang out with in the AGI world who are convinced it’s five to 10 years away.
Melanie: There always have been. There always have been smart people, including someone as prominent as Claude Shannon, who back in the early ’60s thought that within 10 or 15 years we’d have basically true AI, as people call it. So there are very smart people who-
Jim: Yeah, exactly. Don’t get much smarter than Claude Shannon.
Melanie: No. [crosstalk 00:07:51] don’t.
Jim: I wouldn’t even be qualified to carry his slide rule.
Melanie: Yeah. So one of the people who made these predictions, John McCarthy, one of the founders of the field, basically later said, “Well, the problem is AI is harder than we thought.” That was kind of the inspiration for me to think about writing this paper: why is it so much harder than we think?
Jim: Indeed. Before we get to that, let’s do a very brief cruise through the history of AI winters and summers. And then we’ll jump into why is AI harder than we think?
Melanie: Yeah. So people talk about these AI winters and summers, or springs, whatever, as periods of optimism or pessimism, periods of huge amounts of funding, lots of venture capital, startup companies, and so on. There’s been this kind of cyclical alternation between extreme optimism, with a lot of promising of very advanced technology in the very near future, a lot of hype, a lot of investment. We’re sort of in that kind of period right now. And those periods, in the past at least, have been followed by AI winters, which are periods of disappointment where these promises didn’t pan out. Companies fold, venture capital dries up, all of this stuff happens. This happened right when I was finishing my own PhD in 1990. There was an AI winter on, and I was advised not to use the term artificial intelligence in my job applications because it was in disrepute.
Jim: If I remember, I narrowly dodged the previous spring and avoided that winter. In about 1982, ’83, a bunch of my friends got involved in the expert systems movement, Lisp Machines Inc. and I forget some of the other fancy companies. They got to raise huge amounts of money and build fancy buildings. Most of them were actually out in Boston rather than in the Bay Area. I was in Boston at the time doing a startup, and they kept trying to recruit me into their companies. I’d go talk to them and I’d say, “Boys, I think you’ve got a serious problem here.” In fact, my objection was pretty simple, which is it’s all based on [inaudible 00:10:01]. “And I know a fair amount about corporate technology departments, and most corporate technology departments don’t got people smart enough to program in [inaudible 00:10:08], so I think this whole thing is a bad idea.” And so I smartly avoided getting sucked into that one. But then that one crashed and burned, as you say, by about 1988, ’89, something like that. It is interesting. It’s been a recurring pattern.
Jim: And today, I guess we’re in a spring or summer that started around 2010, when somebody figured out that you could do ReLU on a GPU and do gradient descent surprisingly inexpensively. And suddenly, voila, we could build much bigger and deeper nets than we ever could before. And amazingly, they worked pretty well.
Melanie: Yeah. And also because of the web, we have the data to train them. You could scrape millions of images or documents off the web and use those as training. So that was a big part of being able to make the progress of deep neural networks.
Jim: And indeed, they have done some amazing things, but maybe not as amazing as some people say. I love the fact that people point to Go. They go, “Okay, Go. That’s an amazing solution.” And it is hard for humans. But as you point out, being hard for humans doesn’t necessarily mean the problem is hard. Go has a branching factor of maybe 300, something like that, as opposed to a branching factor of like 20 for chess and four or five for checkers. But there are games with much bigger branching factors. One game I fooled around with, building evolutionary AIs, had a branching factor of 10 to the 60th. Each turn, there were 10 to the 60th possible moves. I guarantee AlphaZero ain’t going to get any traction against that one at all. And yet any human can learn to play it. Play it a dozen times and you’ll play okay.
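For a rough sense of why branching factor matters so much to game-tree search, here is a minimal sketch using the ballpark figures from the conversation above. The search depth is an arbitrary round number chosen only for comparison, not taken from any real engine.

```python
# Rough illustration of how the game tree explodes with branching factor.
# The branching factors are the ballpark figures mentioned in the conversation;
# the depth of 20 plies is an arbitrary round number picked for comparison.
games = {
    "checkers (~5 moves per turn)": 5,
    "chess (~20 moves per turn)": 20,
    "Go (~300 moves per turn)": 300,
    "hypothetical game (~1e60 moves per turn)": 10**60,
}

DEPTH = 20  # plies to look ahead

for name, branching_factor in games.items():
    lines_of_play = branching_factor ** DEPTH  # naive count, ignoring transpositions
    order_of_magnitude = len(str(lines_of_play)) - 1
    print(f"{name}: roughly 10^{order_of_magnitude} lines of play at depth {DEPTH}")
```

Even before the hypothetical 10-to-the-60th game, the numbers make clear why brute-force search alone was never going to be enough, and why systems like AlphaZero lean on learned evaluation to prune the tree.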
Melanie: Yeah. Right. So that’s a big issue: how do we decide how hard a game, or any domain, really is?
Jim: Exactly. So with that, let’s transition to the four specific fallacies you called out, as you described them. I’d argue one of them isn’t exactly a fallacy, but that’s all right. These are the fallacies you believe have fooled us into underestimating how hard this whole real AI, strong AI, AGI, post-human AI, whatever we want to call it, really is.
Melanie: So I was trying to understand why it is that people are so optimistic about AI now, and why they have been so optimistic and wrong in the past. So I came up with these four fallacies. You’re right, fallacy might not be the right term, but something like that. The first one is actually a very old one. These are not new observations, but I tried to talk about them in the modern context.
Melanie: So Hubert Dreyfus is the philosopher who wrote the book What Computers Can’t Do and its sequel, What Computers Still Can’t Do. One of the things he talked about was what he called the first step fallacy, which means that if you’ve made what you call the first step towards solving a complicated problem, you’re not necessarily on the path to solving that problem, because the path might not be continuous. So I stated this as “narrow AI is on a continuum with general AI.” That’s the fallacy. And his brother, Stuart Dreyfus, who was an engineer, said it’s like assuming that the first monkey who climbed a tree was on the path to going to the moon because the monkey had gotten high in the tree. So that was the analogy.
Melanie: But the problem is, as Dreyfus said, everybody has made these claims about first steps in AI. Playing chess is a step towards general intelligence. Playing Go is a step towards general intelligence. People say stuff like that. But they always run into the same problem, which he called the barrier of common sense. And that’s always been the barrier. These machines do not have the common sense, whatever that means, that we humans have. And therefore they can do these narrow tasks, but they don’t achieve general intelligence.
Jim: We talked about this a little bit on the last episode you were on, EP 33, about some attempts to build common sense into AIs, particularly the Cyc project down in Austin, Texas.
Melanie: Right. People have been talking about common sense since John McCarthy wrote a paper about it in the ’50s. Doug Lenat actually said, “We can’t do AI unless we have machines with common sense.” So he tried to build a system where every sort of common sense statement, like “You can’t be in two places at the same time,” was programmed in as a kind of logic, in a logical form. And he tried to get all these statements either programmed in by people, or to have the machine learn them from reading texts and so on. But that also didn’t really pan out in giving computers common sense.
Jim: Yeah. I’ve also talked to Josh Tenenbaum up at MIT, and he thinks that in addition to, and perhaps more importantly than, declarative common sense like Cyc, things like built-in folk physics and folk psychology and processes like that may actually be really important for getting around this problem. Climbing the narrow hill to better and better chess players may have nothing to do with the really hard problems of general intelligence.
Melanie: Yeah. That was Dreyfus’s point. That was the first step fallacy, as he called it. So, that was my first fallacy.
Jim: And your second one was easy things are easy and hard things are hard. Right? Sometimes you’d think that’s the case, but why is it not true?
Melanie: So this is the idea that if something’s easy for humans, it should be easy for computers, and if something’s hard for humans, it should be hard for computers. But it turns out that’s not the case. This has been called various things, including Moravec’s Paradox, after Hans Moravec, a roboticist who pointed out that things like advanced mathematics, symbolic integration, and so on are things that machines can do better than humans. These are things that are very hard for humans, but the simplest things, like walking on a crowded sidewalk without bumping into anyone, are beyond current-day robots. Or carrying on a conversation like the one we’re having right now, which we’re barely even thinking about, is something that’s way beyond current AI. So it’s this fallacy that when we have an AI system solve what we consider to be a hard problem, it’s then closer to being generally intelligent. That’s kind of the fallacy.
Jim: And we see all these amazing things that we as humans couldn’t do very well, like calculate pi to a billion decimal places or something. And yet if you ask a robot to tie its shoes, it still can’t do it.
Melanie: Right. Exactly. And I like the example from Gary Marcus, who I don’t know if you’ve had on your show yet, but you might [crosstalk 00:17:30].
Jim: We have.
Melanie: Okay, great. So yeah, he had a great example where the people from DeepMind had described Go as one of the most challenging of domains, and therefore it was amazing that their machines had basically solved it. But he said, challenging for whom? The game of charades is something that’s much harder for machines, and yet a six-year-old can play it, because it involves theory of mind. I have to figure out what you’re thinking, and all kinds of social interactions and other things that are way beyond computer vision, way beyond what we have now.
Jim: Yeah, and I thought that was a beautiful example. I read that in a paper. I don’t think he actually mentioned it when he was on the show. But that’s a perfect example, because of things like theory of mind, right? How does an AI get theory of mind? I’m not saying that it couldn’t in the future, but today’s kind of deep learning doesn’t seem like a road there. One of the ones I like is from Steve Wozniak, the co-founder of Apple, and it’s popular in the advanced robotics space: take a robot with a powerful AI, probably near AGI, drop it into a random kitchen in America, and tell it to make a cup of coffee.
Melanie: Right, I remember that. I probably couldn’t do that honestly.
Jim: Given enough time, you could probably do it, given half an hour.
Melanie: Maybe, maybe.
Jim: Where did they hide the coffee? Where’s the grinder?
Melanie: How do you turn on this coffee machine?
Jim: Exactly. At our house, you’d find we don’t have a coffee machine. We have a French press. Do you know how to operate a French press? Hmm, I’d have to Google that. A lot of people don’t know.
Melanie: Right. Or even [crosstalk 00:19:27] watching a YouTube video about it and then being able to do it. It’s something a machine couldn’t do today.
Jim: Yeah. It can’t do it today, but maybe in the future. And truthfully, this fallacy has affected a number of professionals, and it affects the general public even more. I still remember how freaked out some people were when Deep Blue beat Garry Kasparov.
Melanie: Yeah, that’s right. That’s right.
Jim: People were literally freaking out. On the scale of things, chess ain’t that hard.
Melanie: Right. And now chess-playing programs are kind of seen as online games, but they’re also training devices for human chess players, like a pitching machine is a training device for a baseball player.
Jim: Yeah. A couple of weeks ago I had a former chess Grandmaster on, and we talked about that, in fact, about how bizarre it is. One of the reasons he withdrew from being a competitive chess player was how annoying having to use AIs as a pitching machine was. Very dehumanizing in some sense. Yeah. That’s kind of interesting.
Jim: All right. Anything more you want to say about easy things are easy and hard things are hard as a fallacy?
Melanie: Well, I guess with all of these fallacies, the problem is that we have so little insight into our own intelligence. The things that are easy for us are so invisible to us that we don’t know how hard they are for machines. Part of the problem is that we don’t understand our own intelligence well enough to make predictions about how complex machine intelligence is going to be.
Jim: Yeah, I think we’ve probably made the mistake of thinking that what’s going on in our conscious mind is all that’s going on. And most of what we do is not in our conscious mind. Driving a car, for instance. If you had to consciously drive a car … well, I guess you do when you’re 15 and a half and have your learner’s permit, and that’s a reason why you have to have an adult in the car with you. By the time you actually get out and drive, you’re not sitting there consciously saying, “Turn the wheel.” You’re just sort of driving. It’s a learned behavior, an affordance programmed somewhere into one of your easily repeatable modules that maybe calls out to consciousness when it really needs to, but most of the time just does its own thing. That’s the stuff that looks easy, but there’s a tremendous amount of processing going on in parallel.
Melanie: Yeah. Same thing for just creating the sentences you just said. You probably weren’t consciously thinking anything, but it all came out in perfect grammar and fluent English and everything.
Jim: That’s another classic example of something where we do not understand how it works. The experts in the domain say, “Well, yeah, it might sort of be this or it might sort of be that.” But yeah, it’s all unconscious. If you try to drive it explicitly, you can’t speak fluently.
Melanie: No.
Jim: Not at all. So yeah, that’s I think a big one.
Jim: Now, the next one is one I’d say isn’t really a fallacy, and that’s the lure of wishful mnemonics. But it’s very real. And I will say I’ve gotten to be old, wise, and cynical enough to use it as a counter-indicator.
Melanie: Yes. Right. Sorry about that, using the word fallacy there. I didn’t quite know how to frame it, but it’s a cousin of a fallacy. The term came from Drew McDermott, a great AI researcher and AI critic from the ’70s and ’80s. He wrote a paper called Artificial Intelligence Meets Natural Stupidity, actually back in the ’70s, which is a very hilarious paper and really quite apt, where he talks about this notion of wishful mnemonics. A wishful mnemonic is a word you might use to describe your program or your data that names what you wish the program was doing. So he used the example of naming a subroutine in your program “Understand,” and then assuming that that’s the part of the program that understands. It sounds like something extreme, but people do it all the time. I gave some examples in the paper, things like calling something in AlphaGo a “goal.”
Melanie: So “goal” is a technical term in reinforcement learning, but in normal English usage it means something else, something much richer that a human has and that’s not the same thing in a computer. So when people say AlphaGo had the goal to win, we imbue that term with everything we mean by goal: that it actually had desires, that it even had the concept of what winning was, the way that we do.
Jim: I want to humiliate my opponent. Or I want to glorify myself so I can pick up members of the opposite sex, the chess groupies. Which my chess Grandmaster friend says, “Oh yeah, such things exist.”
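To make McDermott’s point concrete, here is a hypothetical sketch, invented for illustration rather than taken from any real system: a subroutine named “understand” that does nothing more than keyword matching. The grand name invites a reader to assume something much richer is going on.

```python
# A hypothetical "wishful mnemonic": the function is named understand(),
# but all it actually does is match a handful of keywords.
GREETING_WORDS = {"hello", "hi", "hey"}
FAREWELL_WORDS = {"bye", "goodbye", "farewell"}

def understand(sentence: str) -> str:
    """Despite the name, this routine only checks for a few keywords."""
    words = set(sentence.lower().split())
    if words & GREETING_WORDS:
        return "greeting"
    if words & FAREWELL_WORDS:
        return "farewell"
    return "unknown"

print(understand("Hello there"))              # "greeting" -- no understanding involved
print(understand("I must bid you farewell"))  # "farewell"
```

Calling the return value a “belief” or the lookup tables “knowledge” would compound the same problem: the names describe the aspiration, not the mechanism.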
Melanie: Another example of that is the term read. We talk about machines reading books. IBM talked about its Watson program and said it would read all of the medical texts of the world in just a few hours or something like that. And when we read, we understand what we’re reading. We have these mental models, these internal simulations of the world, that allow us to make sense of what we read. Machines don’t have anything like that. They don’t have anything like the understanding of the language they’re processing that we have.
Melanie: So that term read, as in “read the medical texts,” means something so different for what Watson is doing than for what a human would be doing when reading medical texts. But we use the same term, and we then start believing that these machines are actually reading. A more subtle one is machine learning. We use the word learning to talk about the statistical process that machines use, which is very different from human learning. But since we’re using the word learning, we tend to assume that these machines are actually learning in the way that we do.
Jim: And as you point out in the article, one of the big failure modes in at least today’s deep learning and reinforcement learning is the inability to do transfer learning, which we as humans are pretty good at. Famously, don’t use a screwdriver as a hammer unless you have to.
Melanie: Right. Yeah. So one example is that it’s a challenge to get a robot to open a door, because there are different kinds of doorknobs. But once we’ve learned to open a particular doorknob, typically we can transfer what we learned, because we understand a lot about doors and the functionality of doors and doorknobs, and we can pretty much figure it out. And as you said, making coffee is another transfer learning thing. I make coffee in your kitchen. Can I then go to somebody else’s kitchen and make coffee? That idea of being able to learn something and then use what you learned in new situations, that’s what learning means. But in machine learning, that’s not always what it means. So it’s kind of a wishful mnemonic.
Jim: That’s a very good one. It’s probably a good time to talk about our good friend GPT-3.
Melanie: Yeah.
Jim: Maybe you could introduce what it is and some of your thoughts and how that relates to the concept of reading and writing and all those sorts of things and how it’s its own weird thing.
Melanie: So GPT-3 stands for Generative Pre-trained Transformer 3, because there were a one and a two. It’s a very large neural network with some special properties that might be too much to go into for this short podcast; I’m sure you’ll get someone else on to talk about it. It learns basically from text, and it learns by completing sentences. If I give you a sentence like “I drove the car to the blank,” you can fill that in with different words. Certain words make sense, like store or building or something. But other words, like avocado or magenta, don’t make sense in that context.
Melanie: So the machine learns to figure out what words make sense in certain contexts. But it learns that from billions of documents. It’s hard for us to even imagine how much text it’s learning from and how many weights are in this network. There are billions. So it’s this humongous thing that you and I can’t run on our measly little computers. We have to use the cloud version that the company OpenAI makes available to us. But it’s able to create human-like text. If you type in a prompt, some piece of text, it will continue it, and very often it will continue it in a very human-like way.
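As a drastically simplified sketch of that idea of learning which words tend to follow which contexts, here is a toy bigram counter. It bears no resemblance to GPT-3’s actual transformer architecture or scale, and the training sentences are invented, but it shows the flavor of “predict the next word from what came before.”

```python
from collections import Counter, defaultdict

# Toy next-word prediction from raw counts (a bigram model).
# GPT-3 does something vastly more sophisticated over billions of documents;
# this only illustrates the basic objective of guessing a likely continuation.
training_text = (
    "i drove the car to the store . "
    "i drove the car to the garage . "
    "i drove the truck to the store ."
)

# For each word, count which words follow it and how often.
follower_counts = defaultdict(Counter)
tokens = training_text.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follower_counts[current_word][next_word] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    if word not in follower_counts:
        return "<unknown>"
    return follower_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))    # a word like "car" or "store", never "avocado"
print(most_likely_next("drove"))  # "the"
```

Nothing in this toy involves knowing what a car or a store is; it only tracks which strings tend to co-occur, at an enormously larger scale in GPT-3’s case.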
Jim: And yet it doesn’t actually have any semantics. It doesn’t know what the text you put in as the prompt means, and it doesn’t know what it’s saying.
Melanie: These things are debated actually.
Jim: Yeah. What does it mean to know?
Melanie: And what does it mean to understand? Most computer scientists would say, “No, this thing doesn’t understand in the way that humans understand,” because it doesn’t know anything about the world. It has never interacted with the world; it’s only interacted with text. But other people have claimed that, yes, in a way it does understand, because it’s able to produce responses to your inputs that seem very, very appropriate. So if your listeners have heard of John Searle’s Chinese room thought experiment about AI, which he proposed in the ’80s, this is a version of the Chinese room. I think it’s the closest we’ve come to it. And it is very spooky sometimes how human-like it sounds.
Jim: Yeah, it is very curious. And I actually had a guy named Connor Leahy on yesterday, though the episodes will probably come out a week or so apart, and we went into excruciating depth on GPT-2 and 3. He’s got an open-source project people might check out if they’re interested in this, GPT-Neo, which has used an 850 gigabyte text pile they’ve accumulated from various open-source places and which they claim is better than OpenAI’s. And they have written the code to build something that they believe functions a lot like the GPT-Xs, and it’s also open source. So this is really quite an interesting opportunity for people who want to go deeper, but we’re not going to go deeper today.
Melanie: Yeah.
Jim: But it’s quite interesting. And to get back to your fallacy, it’s kind of misusing the sense of reading.
Melanie: Yeah. Or-
Jim: Maybe not misusing, but it’s different from the way we do it by a whole bunch. Stanislas Dehaene’s book on the neuroscience of reading makes it very clear that we read in a very, very different way, and through a very tiny little pipe, basically the seven items in working memory. We don’t even actually read that well, and yet we can extract these bigger and more general patterns, which we can apply in different domains, which at least so far … Well, I don’t know about GPT-3 yet. We’ll see. But it’s out there. And this fella Connor, at first he was a skeptic, but he now says, “Get to about GPT-5 …” Is that really different than AGI? We’re going to have a part two. We’re going to debate that. That will be interesting.
Melanie: Yeah, absolutely.
Jim: All right. Anything else you want to say about the lure of wishful mnemonics? Or should we move on to number four?
Melanie: Let me just say one thing, which is that it’s not just what we call our programs or what they do. It’s also what we call the benchmark datasets that we use to evaluate these programs. So one example is the so-called General Language Understanding Evaluation, which is abbreviated as GLUE and is one of the main sets for evaluating natural language processing programs, including GPT-3. It has a set of tasks, and your natural language system is evaluated on those tasks. And on every single one of those tasks, these language models like GPT-3 have outperformed humans.
Melanie: And so the headline is that machines are better than humans at general language understanding, because they did better on this dataset. So the name of the dataset is what I would call a wishful mnemonic, because it turns out it doesn’t really test general language understanding, and I think everybody who created that dataset would agree. But if you call your dataset something like that, or reading comprehension, or question answering, you start to believe that that’s what your machine is doing when it does well on that set. So that’s another example of a wishful mnemonic in AI.
Jim: And a good thing for people to keep in mind is that essentially what these neural nets are is just really, really fancy statistical inference from a given dataset. They can generalize sometimes, but in general they’re just the statistics beaten out of the given dataset by a whole lot of computation.
Melanie: Right. And some people will argue that’s all we are too. I disagree, but that argument has been made.
Jim: Indeed. So let’s go on to fallacy four. Intelligence is all in the brain. That one’s definitely a fallacy and pretty ubiquitous in AI.
Melanie: Right. And this is the one I got the most feedback, or hate mail, whatever you want to call it, from this paper. I did get a lot of feedback, and not all of it was positive, and most of the negative was on this one. The idea here is that AI treats intelligence as a metaphorical brain in a vat, with sensors that passively observe the world, outputs that do something, and in between a bunch of neural layers, and that’s all we care about. But a lot of psychologists think of intelligence as not being confined to the brain, but as really being an interaction between the brain and the body. Everything we think about is in reference to what our body can do with that thing, how it can interact with it. Even the most abstract concepts, it’s been shown, have some grounding in more physical ideas; they’re really metaphorical in terms of physical ideas.
Melanie: So this has all been called things like embodied cognition or interactive cognition. Some people talk about even the extension into the world, like all of these devices and tools we use being an extended part of our intelligence, and so on. So the idea here is that it may not be possible to create AI of the kind we’ve been talking about in a disembodied machine, a machine that doesn’t have any way of grounding its concepts in the real world. This has been a debate for millennia, if you will, about the role of the body in intelligence. It goes way back to Aristotle, probably even before. But it’s also very much alive in AI. And because this is such a controversial debate now, that’s why I got so much positive and negative feedback on this particular one.
Jim: What was the negative feedback? I’d be interested. Do they claim that they’re doing it, or do they claim it’s not important?
Melanie: People said things like Stephen Hawking was completely immobile and paralyzed, but he could still do genius-level physics. Or things like if you cut off your arm, you’re still intelligent. But if you cut off your head, you’re no longer intelligent. Things like that. Which I think are rather specious arguments. But they came from very smart, credible people who think that this embodiment stuff is just not relevant to intelligence.
Jim: Interesting. Another one you mentioned in that section, which I’m very interested in, is the role that emotion plays in our cognition. We’re going to have Antonio Damasio on, I think in August. He is one of the leading researchers in the field, and he also has a clinical practice. He got turned on to this when he had a patient who was an intelligent, capable person but had specific damage in their head, I don’t remember if it was from a stroke or something, that basically killed their emotions. And this person could not make a decision on what to have for breakfast. Damasio’s view, at least, is that with almost all of our decisions, we may set them up in a rational or pseudo-rational fashion, but the thing that actually pushes us to do something is generally emotion. And if we attempt to at least use ourselves as a working example, then if we don’t include emotion, we’re making a gigantic mistake.
Melanie: I agree with that. But emotion is one of those things we don’t understand very well in ourselves and in animals. There’s still a big debate about the degree to which nonhuman animals have emotions. It’s a very ill-understood topic, and so people in AI have mostly ignored it, or approached it from the idea that, “Well, we’ll get machines to recognize human emotions,” like your Alexa is now going to monitor your voice, recognize what emotional state you’re in, and react appropriately. Whether that’s actually going to happen or not, I don’t know. So the machine itself wouldn’t have emotions but would interact with you by recognizing your emotions. But the question is, like Damasio says, can you actually have something like intelligence without emotions? And I think most of the evidence says no, you can’t.
Jim: I’m going to push back there a little bit, even though I’m a Damasio fan. I’m convinced by him and some others that emotion is indispensable for human general intelligence. But, and this is a big question, how much like humans does the AI have to be? Maybe it’s not like humans at all, and maybe in that case things like emotions, and how we read, and things like that aren’t all that relevant. This is a discussion the AGI people have all the time. What are your thoughts on that?
Melanie: I mean, it’s a good debate to have. If we want machines that are going to interact with us in our world, like self-driving cars, for example, they have to have some deep understanding of humans and their motivations and their emotions. I don’t know if that’s possible for a machine that doesn’t itself have something like emotion or empathy, whatever those things mean. But it’s debatable. In Star Trek we have Mr. Spock, who’s supposedly purely rational. He’s kind of like what AI is aiming for, and you have to explain to him what love is and all of that stuff. But is someone like that even possible? We don’t know. We don’t know enough about intelligence to answer that question.
Jim: All right. Well, that’s, I think, a good review of your four fallacies. How do you want to wrap that up? What does this mean in terms of where we are today, and what things ought to be looked into that aren’t? Tell us your thoughts about where we are and what the road forward looks like.
Melanie: These were meant to be a way of explaining why people are more optimistic about the future of AI than perhaps is warranted, why their timelines have always been so optimistic, and why those timelines often don’t work out. I’m not saying it’s impossible for us to make machines intelligent. I think it’s very possible. But it’s a really hard problem. It’s harder than we think, as I said. One person I quoted in my book was asked, “How long until general AI is here?” And they said, “Well, it’s 100 Nobel Prizes away,” which is kind of a way of measuring time in terms of the scientific discoveries needed to make something happen. Which is interesting. I don’t know if that’s true. But I do think it’s science rather than engineering that we have to look towards to make progress; we just don’t have the science of intelligence that we need.
Jim: I think that’s an interesting thought, the idea of 100 Nobel Prizes. Unfortunately, there aren’t any Nobel Prizes in the AI-related disciplines.
Melanie: Right. Yep. But there’s something like the equivalent of that.
Jim: Yeah.
Melanie: Yeah.
Jim: That might be. Okay. Well, glad to have you back on the show. I think this is a very helpful and very useful discussion for our listeners because we all just hear about AI all the time, yappity, yappity, yappita. Some people think we’re about to be eaten by AIs tomorrow afternoon. Probably not. But as you said, we just don’t know for sure. So, we all have to keep paying attention to this extraordinarily interesting field. There’ll be plenty of work for a long time to come.
Melanie: Yeah. Well, it’s been fun talking to you. I really enjoyed it.
Production services and audio editing by Jared Janes Consulting. Music by Tom Muller at modernspacemusic.com.