Transcript of EP 221 – George Hotz on Open-Source Driving Assistance

The following is a rough transcript which has not been revised by The Jim Rutt Show or George Hotz. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is George Hotz. George is an interesting fellow. He was one of those smart kids selected for the Johns Hopkins Center for Talented Youth, an extremely selective program, about one in a thousand or something, and he participated in it as a teenager. And I always like to point out, because it’s just so interesting and curious, that another person who participated in the CTY Hopkins program was Lady Gaga. Who would’ve known? Lady Gaga could have been a mathematical physicist if she hadn’t decided to be a pop singer. In fact, Lady Gaga and George are more or less the same age. Did you happen to meet her while you were there at CTY?

George: Not that I can remember. I actually don’t know what her real name is.

Jim: Yeah, I don’t either. I’m sure the Google does. Anyway, moving on from his precocious youth, still relatively precocious youth. At age 17, George made a name for himself in hacker circles as the first person to break the carrier lock on the iPhone. I remember reading that story when it happened. I go, “Good. Fuck those assholes.” I was an Apple II guy, because the Apple II was a very open system, under the Wozniak doctrine essentially. But then they came out with the Mac, and I’ve been anti-Apple ever since. Closed systems. Horrible exploiters. So when somebody breaks Apple’s lock on something, I always say that’s good. So that was George. I remember reading that story when it came out. A few years later, he got in trouble with Sony for hacking the PlayStation 3. What did you do, break the copy protection on the games, something like that?

George: Whoa, whoa, whoa, whoa, whoa, whoa, whoa. Allegedly, maybe.

Jim: Allegedly. So they claimed. Blah-blah-blah. Anyway, it settled out of court, so nothing actually happened, never, whatever, whatever. But anyways, a pretty smart kid. Anyway, he then went to work for Facebook for a relatively brief period of time. And then as I was tracking down his bio in various bits and places, I saw something else that was quite interesting. He got recruited into Google’s Project Zero. Most of you probably haven’t heard of it, but if you’re a hacker, you probably know it. Back in the day, at least, it was among the most elite white hat hacking teams in the world, and their job was to probe broadly distributed technologies, or ones that were in critical pieces of infrastructure, and find so-called zero days. So actually, why don’t you tell us what the hell a zero day is and what you all were doing? No point in me telling it, you know it a hell of a lot better than I do.

George: So a zero day is just an exploit in a piece of software. It’s a zero day because it’s a previously unknown exploit. I haven’t thought about this stuff in a long time. I haven’t done security for almost 10 years now. Project Zero led me to AI. I’m thinking like, “Why am I looking for these vulnerabilities myself? How do I write software that looks for vulnerabilities?” This turns out to be extremely hard. But yeah, that’s been one of the recurring themes in my life. How do I make stuff that can automate any work I’m doing, but specifically in that case finding exploits.

Jim: Yeah, I always say the world was built by lazy people. People going, “I didn’t feel like humping them buckets of water up the fucking hill to irrigate my little field. So let me invent a little paddle wheel thing to send water up the hill.” I’m convinced of that, that much of the progress of the world is driven by laziness, or at least trying to get non-sweat ways of doing things. And he has had various other adventures, and then, as we’re going to talk about today, he founded a company called comma, where he is currently the president. And comma is a company that has an open source, believe it or not, self-driving car system.

And regular listeners to the podcast know self-driving cars are an area of interest of mine. We had a full episode on self-driving tech back in EP 94 with Shahin Farshchi, I think that’s how he pronounces it. He’s a guy who sold his company, one of the self-driving tech companies, to Amazon, and he’s now a VC. And then in EP 124, we had Jim Hackett on, who had just stepped down as CEO of Ford, and we talked about all kinds of things, but we also talked about self-driving cars a little bit. So most of what we’re going to talk about today is self-driving cars.

George, why don’t you start at the beginning. What in the hell motivated you to start your own open source self-driving car software company?

George: So I met with Elon. I was originally going to do a contract to build software for Tesla that could replace the Mobileye chip.

Jim: Which they had spent a zillion dollars on. Who bought that? Intel bought that for a zillion dollars.

George: Intel ended up buying Mobileye. Yeah, it went up for a bit. Now maybe it’s down. But yeah, no, so the contract was to replace the Mobileye chip with software. It was my first encounter with Elon, an absolutely fascinating man. The contract didn’t work out for various reasons, but yeah, I was like, “Okay, I’m not going to do this as a contract. I’ll just do this and then I’ll sell it to the car companies. I’ll build an autopilot clone and sell it to the car companies. No Mobileye.” The first part actually turned out to be easier than the second part. Building an autopilot clone took a couple of months; selling it to the car companies is impossible.

Jim: Explain what Mobileye does by the way, and why that is or isn’t important as a way to solve this problem.

George: They make chips. These chips run proprietary perception algorithms to do things like perceive lane lines and perceive cars, and they go in cars and enable a lot of ADAS (advanced driver assistance system) features. For all the criticisms I’ve ever dished out to them, they do understand that these things need to be more end-to-end. They have this thing called Holistic Path Prediction. It’s a computer vision chip for ADAS systems in cars.

Jim: Okay, sounds good. One of the interesting forks in people’s approaches to self-driving tech is those who believe you can do it with just cameras and those who believe you need lidar or other kinds of more penetrative sensing systems. Why don’t you address that question a little bit and where do you come down on that?

George: There’s one system that can drive cars, and it’s human beings. Self-driving cars, most of the ones you see like Cruise and Waymo, are really fancy remote control cars. They are not autonomous robots operating in the world, as much as these companies might want you to believe that. The only system that is truly capable of Level 5 self-driving is a human, and a human does not have lidar. A human has two cameras.

Jim: That’s of course Elon. He’s a contrarian in that regard. He always denounces lidar vigorously because it’s-

George: Is it still the contrarian viewpoint? It’s correct now. Everyone accepts it really.

Jim: Waymo’s still doing their thing, et cetera. But we’ll get to some numbers in a few minutes. We’ll show that Waymo, while in one way impressive, is not very impressive anymore in terms of miles driven. And that’s probably also a good point to remind the audience, because it’s been a long time since we talked about it, about the six levels of self-driving automation, Level 0 through Level 5.

George: So the truth about the levels is they say more about liability than capability. Level 2 is the highest level where the human is still fully liable for decisions that the car makes. Level 2 is supervision of the car. I’m actually curious, you probably have a totally different take on this than me. Level 2 is when the human is always liable. Level 3 is when the human is liable in certain scenarios. Level 4 is when the human is not liable in cities or certain areas. And Level 5, the human’s never liable. It says almost nothing about capability.
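
A minimal sketch of that liability framing as a Python enum. The mapping paraphrases George’s description above, not the official SAE J3016 text, and the `in_operational_domain` flag is a hypothetical simplification:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE automation levels, read through the liability lens George describes."""
    L0 = 0  # no automation: human does everything, human liable
    L1 = 1  # driver assistance: human fully liable
    L2 = 2  # partial automation, human supervises: human always liable
    L3 = 3  # conditional automation: human liable only in certain scenarios
    L4 = 4  # high automation: human not liable inside defined cities/areas
    L5 = 5  # full automation: human never liable

def human_liable(level: SAELevel, in_operational_domain: bool) -> bool:
    # Paraphrasing the claim: the levels track who is liable, not capability.
    if level <= SAELevel.L2:
        return True
    if level in (SAELevel.L3, SAELevel.L4):
        return not in_operational_domain  # outside the defined scenarios/areas
    return False  # Level 5: the human is never liable
```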

Jim: Interesting. Yeah, I guess you could interpret it that way, because that’s probably why it was written that way, because a bunch of lawyers wrote it, no doubt. Yeah, I do have to admit, I tend to think of it in terms of what I as a driver do. Back in 2018 and ’19, when they all said, “Oh, yeah, full self-driving cars are two years away,” they were essentially predicting full automation, where you could sleep in the backseat while the car drove you to work. And that’s Level 5, full automation, no human involved. And in fact, Google famously, in one of their first prototypes, built it without a steering wheel.

George: I could do this in a Level 0 car. I can put a brick on the gas pedal and go to sleep in the backseat. It might not be a smart idea, but I could do it.

Jim: Yeah, I was going to say. Not wise, not wise, not wise.

George: It depends what your risk tolerance is.

Jim: Yeah, not [inaudible 00:08:38] nuts, right? Oh, by the way, I’m going to throw this out here even though it comes in later, because it actually turns out to be hugely important. One thing about self-driving cars, people say, “It ought to be easy to write self-driving software because humans suck so bad at driving.” When you look at the data, they don’t actually suck so bad at driving. I looked it up, and in most civilized countries you get about one fatality per hundred million driven miles. That’s a lot. That’s a lot more than the Waymos and Cruises and such have logged so far. And 100 million miles is about what I think you guys say you’ve driven. So you would’ve predicted less than one death from Cruise and Waymo and friends, Argo and Uber and those guys when they were still around, and you would predict about one death for you guys if you were at human-level equivalence. And-

George: We have zero.

Jim: And you got zero. So anyway, one per 100 million is pretty good. We should not denigrate humans when we’re talking about needing to be better than humans.

George: I’ve heard it’s even higher than that. The number I hear is more like 500 million.

Jim: I looked it up pretty carefully. It seemed to be a hundred million.

George: Interesting. It depends a lot on what type of miles and what car you’re in. I believe your number. It is somewhere in that order of magnitude. But yes, humans are absurdly good drivers. And the simple way that I express this to people is I say, “How many times have you driven to work?” “I don’t know, a couple thousand.” “How many times have you crashed?” “Zero, maybe one.” “And you remember that time?” That’s a pretty reliable system.

Jim: Yeah. Or even how many close calls have you had in your life? I’ve had a few. And I did have one bad wreck through human stupidity. But overall, yeah, humans are better than the AI guys were saying in 2018 when we had all this, “Oh yeah, this is easy. We can certainly exceed human capacity.”

George: I never said anything like this.

Jim: I know you didn’t, but other people did. And especially again, the Google car with no steering wheel in it. Total hubris in 2018 or 2019.

George: I’ve heard self-driving is demo complete. You can build any arbitrary demo and still be arbitrarily far away from solving the problem.

Jim: All right, so talk to me a little bit about how you guys got started and then what did you do first and tell the story.

George: So the first basic idea is I’m going to get a camera and I’m going to have the camera predict the angle the steering wheel should be at. I’m just going to do straight-up supervised learning. f(x) = y, where x is the image and y is the steering angle. Should work, right? This turns out not to work, and it’s upsetting why this turns out not to work. You can get a great training set and test set, do all your classic machine learning, do it beautifully, IID, everything’s great, and you get a really low loss on your test set. And then you put it out on the road and it doesn’t drive at all. It can’t even go straight on the highway. It’ll drift out of lane. And the reason for this is because even at test time, your model is not acting in the world. The video that’s being shown to it is all video from the human policy, not from the machine policy.

So I go out and gather a whole lot of data, me driving, and then I want to learn a model to drive like me. All of the data that’s collected was driven with the human policy, meaning my policy. The machine policy, even though I’m approximating the human policy and getting as close as possible, there’s always going to be some epsilon of error. Normally, the epsilon errors are no problem if your samples are truly IID, but your samples are not IID, they’re temporal. Independent and identically distributed means that every sample is independent of every other sample, that your actions at time T will not affect the data at time T+1. And this is true even if you have a complete holdout test set that was driven with the human policy. But as soon as you put the machine in the loop, this is no longer true, because the action it took at time T affects the input data at time T+1, and it’s that dependence that makes the problem of self-driving cars so difficult.
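
A toy numeric sketch of that failure mode (hypothetical numbers and dynamics, not comma’s code): a cloned policy looks nearly perfect on held-out human-policy frames, but once it controls the car, its own small errors push it into states the training data never covered, where it has no corrective pressure.

```python
import numpy as np

rng = np.random.default_rng(0)
SUPPORT = 0.1  # the human kept the car within +/-0.1 m of center, so the
               # training data only covers that band of states

def human_policy(offset):
    return -0.5 * offset  # the expert always steers back toward center

def cloned_policy(offset):
    if abs(offset) < SUPPORT:
        # Inside the training distribution: matches the human up to epsilon.
        return human_policy(offset) + rng.normal(0, 0.01)
    # Outside it: a state the clone never saw, so it emits roughly the mean
    # training action (~0), i.e. no corrective pressure at all.
    return 0.0

# Open-loop test on held-out human-policy states: the error looks tiny.
states = rng.normal(0, 0.03, 2000)
mse = np.mean([(cloned_policy(s) - human_policy(s)) ** 2 for s in states])

# Closed-loop rollout with small road disturbances: the action at time T
# shapes the state at T+1, epsilons compound, and the car wanders out of
# the band it was trained on.
offset, worst = 0.0, 0.0
for _ in range(2000):
    offset += cloned_policy(offset) + rng.normal(0, 0.03)
    worst = max(worst, abs(offset))

print(f"held-out MSE: {mse:.5f}")                  # looks solved
print(f"worst closed-loop offset: {worst:.2f} m")  # likely far beyond the 0.1 m band
```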

Jim: And the other objection we often hear about, Gary Marcus often talks about this, an AI guru and headstrong contrarian who says self-driving cars will never work, is that there’s just a zillion corner cases, and no practical learning set could ever capture all the corner cases. What do you say to Gary?

George: That’s absurd. There are a lot of criticisms of self-driving cars, but it’s definitely not that one. You talk about some corner case. How often does that corner case happen? Oh, it happens once in every 10,000 miles? Okay, I’ve got a hundred-million-mile dataset. I’ve got 10,000 examples of that corner case. Also, how does that explain the human? The average human has seen so much less data than what we actually train our system on today. So it’s not corner cases that cause the problem.

Jim: That’s Gary’s point actually, because he would argue humans are general intelligences, they are-

George: Meaningless. Meaningless term. Completely meaningless.

Jim: They’re GIs. These things are narrow AIs, and so they can only do what they’re trained to do. They don’t generalize the way humans do. We don’t even quite know how humans generalize. But the weird case of the person who was run over by Uber, I think it was, who was taking a bicycle with bags hanging on it across the street at night, between two cars. And okay, we trained on this, this, this, and this, but this particular combination of things, which a human trivially parses: “Oh yeah, that’s a person with a bunch of bags hanging from their bike coming out between two cars.” It doesn’t really have a world model at the level that humans do, and the ability to integrate lots of clues and come up with an integrated solution more or less instantaneously.

George: That’s definitely true. You can go into the specifics of the Uber accident, and it looks much more like a bug in classical software than any failing of AI. I believe it classified the object as unknown and didn’t know what to do. It’s been a while since I’ve read about it. But it looks nothing like a failing of deep learning. It’s straight up the failing of your if-statement mumbo jumbo. And Uber was statistically way below where they should have been when they had an accident. I think that accident happened at 4 million miles. So that’s not a failure of deep learning. And I also don’t believe that there’s any such thing as general intelligence. When you talk about a world model, there’s definitely a real meaning to that. And it’s true that… Again, it depends exactly what you mean by world model, but to have an integrated world model like the one humans have, capable of predicting the way scenarios can play out in complex ways, that is the absolute cutting edge of machine learning today and is not deployed in any self-driving cars.

Jim: So let’s get back to where you started from. You hook up a camera, it predicts the steering angle, and it doesn’t work. What do you do then?

George: It almost works. It almost works. It’s off by epsilon and these epsilons accumulate over time. So all I need to do is add a small amount of corrective pressure. Okay, fine, I train a quick algorithm to detect the two lane lines, take the center of the lane lines and compute a corrective pressure based on how far I am off from the center. So now it’s 90% the machine learning algorithm, but 10% this corrective pressure and this fixes it.
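
A minimal sketch of that blend; the 90/10 split is from George’s description, while the gain and the function shapes are illustrative placeholders, not comma’s shipped code:

```python
def corrective_pressure(lane_left: float, lane_right: float,
                        car_position: float, gain: float = 0.3) -> float:
    """Steer back toward the detected lane center, proportional to the offset."""
    lane_center = (lane_left + lane_right) / 2.0
    return gain * (lane_center - car_position)

def steering_command(model_angle: float, lane_left: float,
                     lane_right: float, car_position: float) -> float:
    # Mostly trust the learned policy, but blend in a small centering term so
    # accumulated epsilons get pulled back instead of compounding over time.
    return 0.9 * model_angle + 0.1 * corrective_pressure(
        lane_left, lane_right, car_position)
```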

Jim: So long as you have nice visible line markers.

George: So I talk about lane lines as the original sin of comma. I was terribly upset that we had to include them, because I really wanted to make an end-to-end solution for driving. There is no definition of what a lane line is. There is no physics-based definition of a lane line. And when we started hand labeling the pictures, you quickly realize there are pictures where 50% of humans believe something’s a lane line and 50% of humans believe it’s not. And wherever your calibration is on that, you will always find those pictures, because there is no physics-based definition of a lane line. So we had to figure out how to take them out. It took many years, but we did end up removing lane lines.

Jim: Interesting. Yeah. Here I live deep in the country, and the main road that comes down from the state road, it’s got a center line only about halfway, and then it stops for no good reason. And there’s no center line the last two miles till we get to our turnoff. When I drive that, I go, “[inaudible 00:16:25] probably give Tesla’s autopilot fair fits.”

George: Try it out now. It’ll do great.

Jim: Interesting. And so do you basically understand where the edge of the road is instead?

George: Nope, definitely not. We don’t do anything like that.

Jim: Okay. Talk a little bit about how you stay in the right part of the road. Because [inaudible 00:16:40] a good case, because it’s a road that’s illegally narrow under modern standards. In one place it’s got a 30-foot cliff and a fall into a river. It’s got turkey-house feed trucks and logging trucks on it. So it’s a pretty ugly… You’d better be over in the right part of your road, especially when you go around some of those blind corners.

George: Yeah, the more complex stuff is still… We do have a Level 2 system. But our new model asks the question, given this road, where would a human drive the car? And that’s the whole question. So you ask where it’s going to go on that road. It’s going to have seen training data that looks kind of like that road, and when the human was in control, where did they put the car? And once we know where the human would put the car, we can actually put the car there. But it’s really hard to deal with that problem I was talking about. We refer to it as behavioral cloning. That may not quite be the industry name for it, but the problem happens because the error accumulates over time.

So one way to fix this is to train in simulation. If you train in simulation, the training data that I’m showing the model is no longer data that’s driven with the human policy, as it would be in a supervised learning scenario; it’s data driven with its own policy. The simulator uses the policy that the model has learned to roll out a scenario, and at the end of the scenario asks, “Okay, how much did I diverge from where the human went?” And then it knows, when it’s over here and the human was there, that it should have been there. And then we can backprop through that and it can learn how to correct itself. It can learn corrective pressure over time. It will learn to converge.

We have this test called the hugging test, where we use a straight-up classical Unreal Engine simulator, and we initialize the car in different places in the highway lane, and we see how long it takes, after we let the model go, for it to come back to the center. That measures the corrective pressure in the model. But if you don’t train in simulation, if you train just as a supervised learning, behaviorally-cloned problem, you’re going to have no corrective pressure.
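
A sketch of what a hugging-test harness could look like; the simulator interface, offsets, and thresholds here are hypothetical stand-ins for comma’s Unreal Engine setup:

```python
def hugging_test(policy, step_dynamics, offsets=(-1.0, -0.5, 0.5, 1.0),
                 tol=0.05, max_steps=400):
    """Initialize the car at several lateral offsets and count the steps the
    policy needs to settle back within `tol` meters of lane center."""
    results = {}
    for start in offsets:
        offset = start
        steps = max_steps  # assume failure until the car re-centers
        for t in range(max_steps):
            if abs(offset) < tol:
                steps = t
                break
            action = policy(offset)                # closed loop: the model drives
            offset = step_dynamics(offset, action)
        results[start] = steps
    return results

# Toy stand-ins so the harness runs end to end:
toy_policy = lambda offset: -0.2 * offset              # learned corrective pressure
toy_dynamics = lambda offset, action: offset + action  # 1-D lateral kinematics

print(hugging_test(toy_policy, toy_dynamics))
# A purely open-loop, behaviorally-cloned policy would show no corrective
# pressure here and never re-center.
```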

Jim: Yeah, I did go out and talk to the guys at Waymo years ago, and they talked me through how much they depended upon simulators. They were at thousands to one, simulator miles to real miles. How about you? Two questions. One, where did you get your human data from? Just some dude? You hook it up to your own car and start, I suppose, taking the data. And then, how did you think about the evolution of actual driving data with simulation data and how those two things inform each other?

George: First, when it comes to driving data, most people aren’t aware of this, but we have the second-largest driving dataset in the world after Tesla. We have 10,000 weekly active users, all uploading data to us. This is a massively diverse set. We have tens of millions of miles of it. And it’s, again, not just quantity but diversity. Waymo has all the same streets in Scottsdale, Arizona, or whatever three cities they’re in now. We have everywhere in the world. We ship these devices anywhere. So we have a huge, diverse, complex dataset.

And then our simulator is a bit different from Waymo’s simulator. We didn’t hand code it and it doesn’t use a game engine. We call it the small offset simulator. It’s reprojective. So you can take a human video and then apply small perturbations geometrically. If you know the depth of every pixel, you can re-project into a 3D world, and it can make it seem like instead of driving here, you drove over here. So our simulator is not fully flexible. One problem with a fully flexible game engine simulator is, what policy do you use for the other cars? How do the other cars drive? Sounds like you need to solve self-driving cars in order to solve that problem. We solve that problem by just using what the cars really did in reality. Now, there are some caveats with this, but at least for solving the behavioral cloning convergence problem, this works great.
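
A minimal sketch of the reprojection idea under simplifying assumptions (a pinhole camera, known per-pixel depth, and a pure lateral shift; this is an illustration, not comma’s pipeline):

```python
import numpy as np

def reproject_lateral_shift(image, depth, fx, cx, shift_m):
    """Re-render a frame as if the camera had been `shift_m` meters to the side.

    Back-project each pixel to 3D using its depth, shift the scene opposite
    to the camera motion, and project back. Near pixels move more than far
    ones, which is what keeps the perturbed view geometrically consistent."""
    h, w = depth.shape
    u = np.tile(np.arange(w, dtype=np.float64), (h, 1))
    x = (u - cx) * depth / fx + shift_m          # back-project, then shift
    u_new = np.clip(np.rint(fx * x / depth + cx).astype(int), 0, w - 1)
    out = np.zeros_like(image)
    rows = np.tile(np.arange(h)[:, None], (1, w))
    out[rows, u_new] = image                     # forward warp (holes stay black)
    return out
```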

Jim: I suppose you could perturb the other cars too. You could add noise to the trajectories of the other cars.

George: You could, and that starts to get very fancy. Because now I have to know where the other car is, I have to know how to move it, I have to fill in what the pixels should have been. Modern ML can do it, but again, very complex. So we call this whole simulator the second paradigm, and we’re moving to the third paradigm now, which is even more generic. I can go into that later, but that stuff doesn’t work yet. This is pretty much what we’re using today.

Jim: All right. Let’s now get down to the more tangible, for folks that are wondering, “How do we do this at home?” As I understand it, you’ve got 275 cars that you support to one degree or another, though I did look them up, and none of the three we have do the full thing. My 2017 Jeep Grand Cherokee has adaptive cruise control available. My wife’s 2019 Outback, nothing. And my 2016 Tacoma, nothing. So I’m out of luck. But there’s a long list of cars that you guys can work with. Tell us, how does someone hook up your stuff on one of these 275 cars?

George: First off, people think that it is some kind of, “Oh, I’m going to have to put a motor on the steering wheel.” It’s nothing like that. Most new cars, in fact almost all new cars shipping today, have a camera mounted right behind the rearview mirror, and there’s one plug that connects to it. All you have to do to install the comma is unplug that plug; we have a Y splitter, plug in there, plug in there, plug it into our device. That’s it. Takes about 15 minutes. It’s completely electrical. And it’s also… It’s not hacking. People think this is hacking the car. It’s not. It’s just looking at the messages the camera is sending and saying, “You don’t actually want to do that. Here’s a better message,” then sending the better message along to the steering system, the braking system, et cetera.

Jim: So is that camera that’s already in the car, is that sending messages to do things like emergency braking and things like that?

George: So we selectively block and don’t block some of them. The emergency braking, by default, we don’t disable. If you mess with it, you can disable it. But if you don’t mess with it, if you just do a stock install of comma, we don’t disable the emergency braking; we pass all those messages through. The messages we will change are the lane keep assist messages. Many cars don’t have a lane centering option. They have something that looks a lot more like, if you get near a lane line, it’ll put torque on the wheel. That’s just stupid. We’ll just put torque on the wheel to keep you in the center of the lane. We’ll put torque on the wheel to not just keep you in the center of the lane but to drive on your unmarked, no-center-line road and put the car where a human would put it, to put the same torque on the wheel that a human would put in the same situation.

Jim: And this is all from two cameras that, as you say, emulate the human’s two eyes, that go right behind your rearview mirror, essentially?

George: Yep. It’s a little box. You can buy it for $1,250.

Jim: You guys sell that. It’s your comma 3, right?

George: This is the 3X. It’s the same thing as the 3.

Jim: So now let’s take us through what your system will actually… Give an example of a popular, relatively inexpensive car this thing would work with.

George: Toyota Corolla.

Jim: Toyota Corolla, perfect. I also saw another one of my favorites that I recommend to people all the time, the RAV4. That’s another nice little one.

George: RAV4, yeah, yeah. The Toyotas are great.

Jim: Great little functional car. If you just need a car to do random shit, it’s a good one. So you plug it in, how long does it take for someone to set it up and then what will it do for them?

George: It takes about 15 minutes. 30 if you’re careful. 5, if you rush through everything. There’s plenty of videos of people online installing these things. It is way less hard than people think. People are intimidated by this. If you can set up a piece of IKEA furniture, it’s easier than that. So what it does, right now think about whatever your driver assistance system is in your car. Think about how long you can go on the highway without touching anything.

Jim: Yeah, my car don’t have any driver’s assistance so I don’t have to worry about it.

George: Even most of the modern ones, it’s 10 seconds, maybe a minute.

Jim: My wife’s Outback’s got… Which is annoying. It slows down when it’s in cruise control and someone’s in front of you. I like the old style, where it starts closing in on the guy, makes him speed up or get the hell out of the way, or reminds you it’s time to pass, because sometimes you space out [inaudible 00:24:24]. You want to be doing 70, and you’re only doing 62 because the damn adaptive cruise control slowed it down.

George: If your Subaru has that, I’m sure it works.

Jim: Okay.

George: That’s enough about cruise control. comma helps a lot more with the lateral stuff than the longitudinal stuff. We do the longitudinal as well, but the real difference, and this is a super hard thing to convey, except for the fact that it’s so simple when you see it: you can go on the highway, press the cruise control button and sit back. And it will, not just sometimes but usually, drive for an hour without you having to do anything.

Jim: That’s on the highway or-

George: Interstate highways, yeah.

Jim: Interstate highway. So this is interstate highway only.

George: So it works around town as well; you’re just not going to get an hour. We have an experimental mode which will stop at stop signs and stop at lights. It’s a little worse than Tesla FSD, but this stuff does work. And there are people… Again, you can look. We released a drive at the end of last year where we went from downtown San Diego to a Taco Bell in the suburbs without a disengagement. Stopping at red lights, stop signs, 90-degree turns, highway interchanges. It can do all these things. These things just turn out to be a lot less useful in day-to-day usage. The main thing… The feature that you not just want, but will be very upset if you ever don’t have, is hours on the highway without touching it.

Jim: That makes sense as a figure of merit. Another data question I have: I know in the old days, and I still don’t know if they still do, Waymo invested a load in super-high-resolution mapping. Do you guys use maps, and if so, whose?

George: No, worthless. We have an experimental mode called navigate on openpilot, which will use Mapbox maps to navigate, but it’s the same exact maps that a human uses. My general philosophy about all AI stuff is, you don’t need special stuff for the computers; just look at what humans use. So yeah, humans use a nav system, but humans use a normal standard-definition map, and this turns out to work totally fine for self-driving cars too. Humans don’t do things with centimeters of precision on a global scale. This is absurd. If you are trying to localize yourself within centimeters on a map, that’s just such a non-robust system. You’ve built this super fragile thing. “Oh, I got to get to [inaudible 00:26:38]. I can’t even use float32. I have to use float64 for my ECEF coordinates.” This is not how humans drive cars.
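
That float32 point is easy to verify numerically. An ECEF (Earth-centered, Earth-fixed) coordinate has a magnitude around Earth’s radius, roughly 6.4 million meters, and at that magnitude single precision cannot even represent a centimeter:

```python
import numpy as np

x = np.float32(6_378_137.0)  # a point on the equator, in meters (ECEF scale)

# Spacing between adjacent representable values at this magnitude:
print(np.spacing(x))                        # 0.5 -> float32 resolves only 50 cm
print(np.spacing(np.float64(6_378_137.0)))  # ~9.3e-10 -> float64: sub-nanometer

# Adding one centimeter in float32 is simply lost to rounding:
print(np.float32(6_378_137.0) + np.float32(0.01) == x)  # True
```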

Jim: Now, do you know if that’s still how Waymo is doing things, using these high-precision maps they spent a load of money creating?

George: Yeah. So Waymo’s Level 4, not Level 5, meaning they operate in defined regions that they can carefully map. It’s a very different approach to driving from how a human drives. Our approach is much more like how a human drives. So to criticize the approach is like… That’s not even what I criticize about Waymo. People say I criticize the lidar; that’s not true. I criticize the unit economics of Waymo. I think the things they are building are… It’s a $500,000 robotaxi. “Oh, they’ll come down over time.” Yeah, maybe, but you know what will definitely be cheap over time? A cell phone. How do you make a cell phone drive a car? Why can’t it drive a car?

Jim: Presumably you could tape two cell phones together and perhaps drive a car.

George: At the extreme of this, at the extreme of what Waymo’s building, they’ve built a train. They’ve built a train with virtual rails. And yeah, you can build trains with virtual rails, and you can get into all the economic reasons why building trains doesn’t make that much sense.

Jim: Thinking back on the history of self-driving cars, getting back to the famous promises, Level 5 in two years and all that stuff: you had various startups, and you had Waymo, which decided to do its first trials in Mesa, Arizona, a place famous for its nice square streets, level ground, and perfect weather. We spent a fair bit of time in Pittsburgh for family reasons, and there were Ubers mostly, and I think some Argos too, running around Pittsburgh. That’s a much harder place, because the roads are triangular and they’re shitty and they’re old and constantly being dug up for sewer work. The weather sucks. All these bridges across these ravines and all this sort of stuff. Maybe they were trying to climb too high a hill, the guys that were doing their prototyping in Pittsburgh versus Mesa.

George: The whole thing never made any sense to me. What these things are are not self-driving cars, they’re trackless monorails. Again, when you start to view it through that lens, it becomes much more of an economics question. Why don’t we just replace all the streets with a normal monorail?

Jim: Of course, as you know, there were proposals back in maybe the 2010 timeframe of putting smart telemetry in all the roads, which would’ve cost like $5 trillion or something.

George: I don’t know. This is just what’s like baffling to me. It seems that these people are very out of touch with the real world. The government won’t even fix a broken stop sign. Do you think they’re going to, “Oh, we’re going to install this smart telemetry”? It’s a scam.

Jim: The other thing you alluded to in passing, and I’d like to dig into this in a little more detail, is you suggested that Waymo, Cruise, et cetera are still using remote-control driving more than they like to let on. What do you know about that?

George: Cruise admits it. Cruise admits this way more than Waymo. These cars have multiple operators for each car. They took the driver out of the car, gave them a title where they get paid twice as much, and made two of them. Again, it’s all based on this premise that eventually the AI is going to come and eventually it’s going to become economical, because… Okay, right now every five minutes a human has to intervene. And they’re not RC; there’s not a human with a gas pedal and a wheel, it’s a point-and-click interface probably. But the decisions are fundamentally still being made by a human in a call center somewhere. Another way you can know this is you can just Google it: all the Cruises stop when the cell phone network goes down. They just stop. Again, I see a system like that and I’m like, “You’re building something that’s so fragile, it’s so centralized, it’s so antithetical to everything I want to see about technology.”

Your comma doesn’t need an internet connection; it just runs a little model on the device. The AI that impresses me… Build an AI that can do what an ant can do. We’re not even close. An ant can self-replicate, an ant can survive in new environments, and you have the pinnacle of these self-driving car things that are the most fragile, dependent on the heights of civilization, and the minute anything goes down, oh well, it doesn’t work anymore.

Jim: Yeah, sorry about that, guys. I suppose at one level you could say there’s an economic curve. Let’s say Waymo or Cruise is getting better and better: it was once every five minutes that the remote operator had to intervene, then it’s 20 minutes, then an hour, then three hours. The economics start to work perhaps when you get out to once an hour or something like that, even though it is still a crazy system at some level. To your point, it’s still very dependent on infrastructure that you’d not like to be dependent on. Do you think that’s more or less their play, to ignore all the infrastructure problems and gradually improve until they can tolerate the fact that they have to intervene once an hour, and that’s economical?

George: One of the most hilarious things I see in the projections of all these self-driving car companies is they keep the cost of transportation fixed, and they assume that they are going to be the sole winner and they’re going to be able to eat all those margins. There is no way this is going to be true. For two reasons. How many years ahead is Waymo of us? Let’s say Waymo starts to get this down to the point where it’s economical. If we get it to work, even economical for them might mean, “Oh, the car only costs $50,000.” But if I’m doing it with a $500 cell phone, you’re not competing with the Uber driver of the past, you’re competing with me. I will win. Over a long time horizon, I will win. Waymo may have a very, very short window to try to recapture any value. They’re assuming a static world, which is just completely not true.

I think also even if Waymo-style approaches win, it’s not going to be Waymo alone who solves the problem. It’s going to be like 10 companies who solve it pretty much all at the same time. And then you have a market that doesn’t even look like Uber, it looks like the scooter market. It looks like Lime and Bird and all these companies where you just basically pump these things out and it’s a total race to the bottom and everybody loses.

Jim: And everybody goes broke, right? Yeah, when I was watching those things proliferate a couple of years ago, I said, “Ah, this reminds me of the famous 1982 debacle when 104 companies all at the same time introduced 5 1/4-inch Winchester hard drives, and four of them survived, and then two, and then one, and that’s the way it went.” Now, the one we haven’t talked about, and this is much more analogous to what you’re doing, is Tesla. Why don’t you compare and contrast your approach with Tesla’s approach.

George: To quickly compare Tesla and Waymo: Tesla has positive unit economics and Waymo has hilariously negative unit economics. So regardless of whether Tesla succeeds at self-driving or not, they are selling cars today and making a profit today. So when you compare and contrast us to Tesla, we’re doing the same thing. We are selling boxes today and making a profit today. Not quite at the same scale as Tesla, but that’s very important to me. I don’t believe in hockey stick growth. I don’t believe in magical inflection points. I believe that slowly, over time, you build value, and you can do this in such a way that you’re profitable mostly along the way. Obviously, at some points you dip under, but this idea that it’s all going to pay itself back in 20 years makes no sense.

Jim: As we would say, pie in the sky when we die.

George: Tesla and comma both have businesses where we sell things to consumers at a profit.

Now, our autonomy approaches are… You can see the differences when you read Reddit posts that compare autopilot and openpilot. Tesla views driving much more as a [inaudible 00:34:27] problem, and much more from a modernist perspective. They’re talking now about end-to-end, but even their end-to-end stuff still looks a lot like rigid maneuvers and thinking about what cars are. Look, they display the cars around you in a virtual 3D display. They localize every car. We don’t do anything like that. We just ask, “Where does a human drive the car? When does a human hit the brakes?” So yeah, we have a much more holistic approach: just tell me the action, don’t tell me the state. I don’t care what you know about the state.

Jim: You’re using the human as the model. What would a human know in this situation? And let’s emulate as closely as we can what the human would do.

George: And humans don’t have little cars with bounding boxes in their head, especially the ones on the other side of the dividing line of the highway, but your Tesla does.

Jim: Ah, this might be a way to get at the difference. Guesstimate the ratio of CPU power that Tesla applies to the real time problem compared to what your comma 3X box applies to the problem. Guesstimate. Good faith estimate.

George: It’s about 100X.

Jim: 100X. Oh, only 100X. Well, that’s interesting, but still 100X is big, two orders of magnitude.

George: Yeah.

Jim: I know they got some honking big computers and some specialized silicon and all kinds of stuff.

George: We’re spending about three watts, they’re maybe spending about 60. They’re doing [inaudible 00:35:43], so they have another 5X there. So it works out to about 100X on both training and testing. So we’ll train on 40 GPUs, they’ll train on 4,000.

Jim: And they are a bit ahead of you functionally, right? What would you say is the gap? Where are they ahead of you and where are you ahead of them if you are ahead of them?

George: It’s tricky. Tesla certainly has more capability than us. If you’re asking the question, like if you’re trying to drive from point A to point B without a disengagement, there are many things that comma will just never do and Tesla may do with some percent chance. But I think they’re behind us considerably in usability. And again, Reddit reflects this. You put a Tesla on the highway and every once in a while it makes some really sketchy mistake. It’ll phantom brake. Not just on the highway; if you’re going into an intersection, it’ll mis-track the lane, it’ll put you one lane over instead of this lane as the continuation. And then it applies a lot of torque to the wheel. It’s such a jarring experience as a user. Our torque limits, the amount of torque that we can apply to the wheel, are way lower than the amount of torque Tesla can apply to the wheel. We have a saying that smooth driving is safe driving. So as far as practical day-to-day usability goes, I do now say we’re ahead of Tesla. As far as high-end capabilities, yeah, Tesla is multiple years ahead of us.

Jim: I also did see some of these YouTube side-by-sides, and things where you guys say you’re making driving chill and Tesla apparently isn’t. I’ve never messed with a self-driving car, so I don’t personally have any hands-on experience, but apparently it is not chill. So where are they ahead? Give me an example of a feature where they’re clearly further ahead of you.

George: I would say that comma’s slogan is “make driving chill” and Tesla’s slogan is “look at this crazy feature.” So we have rudimentary stuff now shipped where you can put in a destination and it will navigate there, but it looks a lot more like the earliest versions of FSD than the versions of FSD that are out now. The versions of FSD that are out now may not be comfortable and may not be particularly good driving, but they are very capable. A Tesla can make a right turn at a light: it can get in the right turn lane, turn the blinker on, wait appropriately, make the turn. It’s rigid, but it can do that. Whereas the comma… When they get overwhelmed, they behave very differently. When the Tesla gets overwhelmed, it freaks out; it’ll jerk the wheel, it’ll slam on the brakes. When the comma gets overwhelmed, it’ll just get a little bit more shaky and unsure.

We’re putting a lot more of the policy in the neural net than Tesla is, even with Tesla’s new end-to-end thing. They’re still using the same planner; they just moved the planner off of the car and onto the backend, and it’s a very rigid MPC cost-based planner. MPC stands for model predictive control. So you can put in a list of costs and then optimize a trajectory given those costs. But you can have things that look very snappy with that kind of approach. You might have two local minima of the cost function, and it can snap to either one, even if one’s higher than the other. Whereas ours looks a lot more like the failure modes of neural networks, which look a lot more human-like than the failure modes of a powerful optimizer.
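
A toy sketch of that snapping behavior (a hypothetical one-dimensional cost, not Tesla’s planner): a cost with two valleys, where a local optimizer converges to whichever minimum it starts nearest, even when it is the worse one.

```python
def cost(lateral):
    # Two candidate lane positions at -1.8 m and +1.8 m; the right valley is
    # cheaper, but both are local minima of the combined cost.
    return min((lateral + 1.8) ** 2 + 0.5, (lateral - 1.8) ** 2)

def plan(start, lr=0.1, steps=200):
    """Naive local descent, standing in for an MPC trajectory optimizer."""
    x = start
    for _ in range(steps):
        grad = (cost(x + 1e-4) - cost(x - 1e-4)) / 2e-4  # numeric gradient
        x -= lr * grad
    return x

print(round(plan(-0.1), 2))  # ~-1.8: snaps to the worse valley (started left)
print(round(plan(+0.1), 2))  # ~+1.8: a tiny input change flips the whole plan
```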

Jim: On the other hand, one of the similarities between you and Tesla is that they do most of their processing locally. You do all your processing locally, pretty much, right?

George: Tesla does it all locally. The backend trains the model.

Jim: Oh, I see, okay.

George: So both us and Tesla have data centers that train the model, but then once the model is uploaded to your car, everything about it is local.

Jim: Got it. Okay.

George: Tesla doesn’t have the unit economics for that. You think there’s a guy helping you out with autopilot? No way. It’s all on the car, it’s all software.

Jim: I was like maybe the big, big computer might’ve helped you out once in a while or something.

George: It becomes very hard to make something like that robust, especially for something like us and Tesla, which can operate anywhere. If you’re Waymo, you can bribe the city of Scottsdale to install a new cell phone tower.

Jim: That is actually a big distinction, that you guys actually still are on track for “it works anywhere,” which was the original story back in the mid-2010s.

George: I don’t want to solve self-driving. Self-driving is a stepping stone. I want to solve life. I want to build artificial life, silicon stack life. A car is just a… It’s another form of life.

Jim: I’ve always loved self-driving cars because it is narrow AI. The thing that will drive your car will not make you a sandwich also, but it’s a really big narrow piece of AI, and we’re bound to learn a bunch of cool things from solving that problem that we can then apply to what comes next. Some of the data I dug up: as of December 2023, Waymo had only driven theirs in no-driver mode, I was surprised, only 7 million miles. A relatively small amount. Guesstimates of engaged Tesla autopilot, 3.3 billion miles. You guys, something north of 100 million.

George: Yeah, we’re at 10X bigger than Waymo and Tesla is 30X bigger than us.

Jim: And I think when I reached out to you, I said, “You guys probably ought to be more well known than you are.” There ain’t a lot about you out there on the internet.

George: It doesn’t help.

Jim: What doesn’t help?

George: People knowing about us.

Jim: How are they going to buy your stuff if they don’t know about you? We used to say… Well, I won’t say what I was going to say, which was very politically incorrect, but.

George: We’re trying to be a profitable company, but we don’t do marketing. We’re thinking about it now. But fundamentally, our mission is to solve self-driving cars, and contrary to what people believe, we are not at all limited by data. We only train on about 5% of the data we have, and the only reason for this is diminishing returns once you train on more, and we can iterate faster if we train on less of the data. So we’re not data limited. We’re not money limited. No one’s money limited today. No one who has good ideas is usually limited by money. I could raise all the money I want if I had a way to deploy it, but I don’t. We’re limited by solving the problem.

Jim: So what does that mean? You say you have 10,000 active users, there’s only 10,000 people in the world that would like your level of capability?

George: I’m sure there’s many more people who would like it and they’ll find out about it over time, but it doesn’t help me if they find out about it today versus they find out about it in two years. It’s about the end point. It’s not about making money tomorrow, I don’t care.

Jim: Gotcha. All right, let’s now dig into a little more of the nitty-gritty, because I’m sure the listeners are just waiting to get into it. What about the legal issues? The federal government, state governments, who’s liable? Are you an open source guy, hands off, if this thing blows up, tough luck, it’s on you? What about the legal liability and regulatory environment that you’re operating in?

George: So there was a lot of fake news about comma and NHTSA. We’ve had many back-and-forths with NHTSA since then. For the most part, they’re relatively reasonable. I feel that the way that cars are regulated in America is quite reasonable. People are always like, “Is this comma thing certified?” Who do you think certifies it? The way automotive works in America is manufacturers self-certify, and we self-certify that we’re in compliance with the same set of standards that Bosch and Continental follow when they make ADAS systems for cars. We have a safety system which follows ISO 26262. There are two more standards now. We limit our torque. The EU led a lot of this regulation, but we really, for the most part, like regulation if it’s good regulation.

If it regulates things like torque on the wheel, max braking force, max acceleration, we’re interested in those numbers and we make sure that our system complies with that. It’s a Level 2 system, meaning you are in control of the vehicle at all times. The only thing that comma can guarantee you, the only thing we promise you is that the car will never become uncontrollable. You can always reach out, hit the brake pedal and the brakes will work. You can always massively overpower any torque we’re putting on the steering wheel. Again, we put very little torque on the steering wheel. You can override it with two fingers. If the car crashes, yeah, I mean it’s on you, but pay attention at all times.

Jim: I will say, I saw some YouTubes of people driving hands off.

George: Hands off is very different from eyes off. Look, we don’t say hands off, but people drive hands off with no driver assistance systems in their car. Whether you choose to take your hands off the wheel is completely up to you. You’ll get a feel for the maximum amount of torque it’s ever going to put on the wheel. We absolutely say you must keep your eyes on the road at all times. We actually have a camera which monitors you and makes sure you do that. Again, it’s very non-intrusive as long as you’re paying attention. But if you think you’re going to get this thing and use your cell phone or take a nap, you just can’t. Again, can you? Well, sure. You can also get a normal car, put a brick on the gas pedal and sleep in the backseat.

Jim: And of course we know cases of Tesla where people have made out with their girlfriends or something, got their heads cut off when it ran into the back of a truck. And part of what I read about is that the Cadillac system apparently is extremely Germanic in keeping its eye on you and making sure that you have your eyes on the road and all this sort of shit. How far along that line are you, say, compared to the high-end GM system?

George: We have the best driver monitoring in the world. We have the best driver monitoring in the world. And then also our driver monitoring, again, we’re in a pretty good space with the open source too, which is if you make a system that has too many false positives or alerts people when they feel it’s unreasonable, what you’ll get is alert fatigue and they’ll just stop paying attention to the system. It’s really important to us that people respect the system. And we’re not going to force that respect through an iron fist, we’re going to force that respect through, “Wait a second. Wait, I actually did look off the road for too long there. I’m actually happy that thing beeped. We’ll wake people up from sleeping. People are very happy to be woken up from sleeping.” A good driver monitoring system should not be viewed as an adversary. It should really be viewed as something that helps you out. And again, it’s all completely local on the device. Unless you specifically opt in, we’re not uploading any pictures of you.

Jim: Are you capturing telemetry on everybody or is that also opt-in driving telemetry?

George: Telemetry is opt out. You can run a fork, or you can disable the uploader, or you can just not connect it to wifi.

Jim: Gotcha.

George: So it is optional in that sense, but it is opt out [inaudible 00:45:48].

Jim: Truthfully, it is a common good to upload your telemetry so that the system will get better. You would think if you were a non-free-rider moral person, unless you’re going to your illicit girlfriend’s house or something, you would probably want to upload your telemetry.

All right. This has been very interesting. What else can you tell us about your vision for the road ahead?

George: What you said before about self-driving cars being narrow AI. I don’t really exactly understand the distinction between narrow AI and general AI, but I will say that I think it’s all on a spectrum. Self-driving cars have some things about them that make them an easier problem than general purpose robotics. A robot that can make you a sandwich and clean your house is a lot more complicated than a car, for two reasons. One, it’s very easy to gather data of good driving, and it’s very easy to gather it from the perspective of the car. It’s much harder to gather data of good sandwich making from the perspective of the human. Maybe I could do it with cameras today and some fancy recovery algorithms, but really, if I wanted the true thing, I’d have to put people in like a motion capture suit. So it’s hard to get datasets for these other things.

And then, the driving problem is low dimensional. A car is basically a two-dimensional system: you have a steering and you have an acceleration. Whereas a hand, look how many dimensions it has. It’s crazy complex. Even if you’re just talking about my hand as an end effector and then grip is one dimension, that’s still seven: six DoF to put it in space and then one to do the grip, and the hand is of course way more complicated than that. So our goal is to solve self-driving cars, but not as an endpoint. Our goal is to solve self-driving cars as a jumping-off point to general purpose robotics. And the end dream of comma is to sell you the comma body, the $25,000 robot companion that comes home and cooks for you, cleans for you, and does whatever else you might want. We don’t judge.
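
A small sketch of that dimensionality gap, comparing action-space shapes only (the dataclasses are illustrative, not comma’s types):

```python
from dataclasses import dataclass

@dataclass
class CarAction:
    # Driving is roughly two-dimensional: lateral plus longitudinal control.
    steering_angle: float  # rad
    acceleration: float    # m/s^2 (negative = braking)

@dataclass
class GripperAction:
    # Even the crudest hand model is seven-dimensional: six DoF to place the
    # end effector in space (3 translation + 3 rotation) plus one for grip.
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float
    grip: float  # 0 = open, 1 = closed

print(len(CarAction.__dataclass_fields__))      # 2
print(len(GripperAction.__dataclass_fields__))  # 7
```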

Jim: Interesting. Okay, yeah, that makes sense. Do you have any other competitors other than Tesla, let’s say? Is there anybody else trying to do what you’re doing?

George: Wayve AI has a lot of similar stuff. We like them. They have a fancy simulator now out there. We’re pushing on very similar simulator technologies now. These things were all enabled by transformers. Transformers work on all sorts of data, not just language. Our new simulator is basically a video transformer.

Jim: All right, then more tangibly on the road ahead: you’re currently at Level 2, the equivalent of what you might get in a high-end Subaru today. Do you see yourself climbing to the Level 3, Level 4 things? The top-of-the-line GM one, is that Level 3 plus, or is that 4 these days?

George: This is why I don’t think the levels are particularly good. We have no interest in ever going past Level 2. We have no interest in taking liability. We have no interest in being an insurance company. Other people can definitely use our software and provide that service on top of it. Our goal is to build software that is a better driver than a human. Maybe a 10X better driver than a human. But as far as comma AI ever shipping something that’s not Level 2, I have no interest. Somebody could take our open source software, do the statistics themselves and be like, “Wow. Wait, if we just had this thing controlling the car instead of humans, we’d have 10X less accidents” and then they can provide that liability Level 5 layer on top of it.

I’m also a believer that there’s Level 2, maybe there’s a little bit of Level 3, and then there’s Level 5. I don’t think Level 4 is viable. I don’t think a car that works in one precision-mapped city… Not that it’s not buildable, it’s just never a good business model. The Level 5 cars will come too quickly after the Level 4 cars for you to ever recapture the amount of value that you burned creating that thing.

Jim: Gotcha, that’s interesting. For the audience, Level 2 means, at least officially, people have to have their hands on the wheel at all times.

George: Whether you put your hands on the wheel is completely up to you. We don’t issue any official guidance on this. All we say is that we limit the maximum amount of torque the system is capable of applying to the wheel.

Jim: What do you believe that implies about liability for your company? Let’s suppose you have a bad bug that causes your software to run over somebody.

George: The software didn’t run over somebody, the human driving the car ran over somebody.

Jim: Unless they didn’t have their hands on the wheel?

George: That’s their choice. What did IBM say? You can never let a computer make a decision because a computer can’t be held accountable. That’s my philosophy on this kind of stuff. It’s like the human is in control of the car at all times. They can decide for themselves how much they’d like to… All cars have driver assistance to some level. Power steering is a form of driver assistance. Cruise control is a form of driver assistance. We just continue to move up this gradient of who’s liable if you jam the wheel to the right and drive the car into somebody. Can you say the power steering system is liable?

Jim: Has it ever been litigated? Have there been any claims against you guys for, “Oh, a bug in the software caused this problem”?

George: Not us, no. We were involved in one lawsuit with a patent troll that we quickly crushed.

Jim: I used to love crushing patent [inaudible 00:51:03] when I was the CTO of Thomson Reuters. When anybody called up with a patent thing, they’d send them to me and I’d say, “You know who we are. We own West Publishing. We have a building in Minneapolis with 7,000 fucking lawyers in it that will litigate your ass until you’re dust. If it costs $5 million, I don’t care. I ain’t giving you a fucking penny. There’s a lot of weaker sisters out there than me, go fuck with them.”

George: This is exactly what I said.

Jim: We never paid a penny ever to a patent troll when I was there.

George: But you have to. This is the only way to do it. I’m legit willing to do it. I was willing to spend whatever to make sure he didn’t get money.

Jim: There need to be more people like that. In our industry, a whole bunch of companies succumbed to stupid-ass patent trolls and paid out hundreds of thousands of dollars, and you’re just feeding the troll when you do that.

George: comma will eventually get sued, and I will take the exact same approach. We are not going to settle, we are going to… You are in control of the car at all times. And I’m not tongue-in-cheek about this. I’m not trying to sell devices. Some of the Tesla marketing stuff is way beyond anything we do. I wouldn’t call the system full self-driving. We don’t do that.

Jim: Yeah, that’s kind of nuts. That was a stupid [inaudible 00:52:09]. If I was their lawyers, I would not have let them do that, but-

George: Look, Elon loves these kind of things. Elon loves pushing the boundaries and pushing the limits. I’m much more of the quiet scientist guy who I want to solve this problem very carefully. I don’t want problems from other people. I don’t want to oversell anything. Buy it or don’t buy it, that’s up to you.

Jim: I presume there’s some form of contract people have to sign that lays all this out.

George: There’s a terms of service, and the terms of service is pretty clear about this. It’s not just… You indemnify comma for liability when you’re using the system.

Jim: You got good lawyers?

George: Yeah.

Jim: If you don’t, I’ve got a great one that does this kind of shit. He drew up some contracts for me and when we did get sued, we were 65 and 0, because he wrote the contracts and he was our head of litigation.

George: It will happen. We will get sued. We do have great lawyers, and these things have also been well litigated throughout automotive history: you can’t hold the car manufacturer liable if a person does something stupid in a car.

Jim: That is true. On the other hand, if there is a mechanical failure that should not have happened, let’s say a bearing breaks and the left front wheel falls off and it causes a head-on collision, that could result in actionable litigation.

George: Absolutely. We do distinguish the concept of functional safety. If, for example, you put comma in your car and then the brakes stopped working, that’s a very different story. That’s a very different story from, you made the decision to not have your hands on the wheel and not pay attention and something bad happened.

Jim: So you draw a line: you would be liable for mechanical side effects caused by your system, but you are saying that liability for any actions taken, any judgment calls made, resides with the human?

George: I’m not saying I will take liability, but this is my basic understanding of product liability.

Jim: Gotcha.

George: Which is, yes, of course, if the product malfunctions in a way that you can no longer use the steering wheel or you can no longer use the brake, that’s very different. But we’ve never had anything like that happen. There are so many redundancies in place to make sure of that. Also, again, we don’t hack the car. We use the messages that are put in there by the manufacturer, that are intended to be used for ADAS, and we use them within the spec. It’s a reverse-engineered spec, but I think for some of these we know more than the manufacturer does.

Jim: That’s quite possible.

George: We have telemetry. They don’t.

Jim: That’s true though. You have the full loop, right?

George: We got everything. I got the full CAN bus coming back on these cars, and we find quirks in these cars. We found some Volkswagens where the power steering wasn’t initializing in certain scenarios, and the bug was in the Volkswagen software, not in comma.

Jim: All right, [inaudible 00:54:40]. Any final things you want to say about openpilot and comma before we go on to a brief chat about your other thing, tiny-whatever-the-fuck-it-is?

George: I think that’s mostly it. We had a big breakthrough last year that I think is going to start to pay off as we move to this third paradigm. I’m so grateful for all the people who are doing research on things like VQ-VAE transformers, all the people building hardware and infrastructure to enable us to extract signal from all of this data, make progress on solving these very hard problems, and follow in the footsteps of God, building brains.

Jim: Yeah, it is amazing. I’m working on a project where we use LLMs and related technologies to write movie screenplays, and I acknowledge all the time, “The heavy lifting is being done by others.” We are basically smart appliers of tools being built by other folks, and it is amazing how much cool, good work is being done by so many people in this world right now that’s allowing all this to come about.

All right, now let’s talk about your tinygrad, just a few minutes here. That’s another project that you’re working on. Tell us what that is, why you’re doing it and what implications do you think it has?

George: I’m the CEO of tinygrad. That is where I’m full-time right now. What tinygrad is is a machine learning framework. It competes with TensorFlow, PyTorch, and JAX, and it allows you to train models on various hardware. The big difference between tinygrad and its competitors is that tinygrad is 100X simpler. The code base is right now 5,200 lines.

Jim: Wow. 5,200 lines? That’s amazing.

George: 5,200 lines. It can run Stable Diffusion, it can run LLaMA, it can train [inaudible 00:56:24], it can train ResNet. It’s fully featured… And it’s pretty fast too. And I think that a lot of the problem in something like PyTorch is combinatorial explosion. You have an operator, a dtype, a device, and PyTorch will write a kernel for each combination of those things, and those become multiplies. We support devices in a generic way, dtypes in a generic way, operations in a generic way, such that those become adds. You can add a new dtype, and that dtype [inaudible 00:56:54] data type, and that data type will automatically work for every operation tinygrad supports and on every device tinygrad supports. There’s also a whole bunch of other things where I’ve just refactored it and thought about things so much, because we care about the simplicity of the library. Because eventually this stuff’s going to be translated into hardware. The long-term goal of tinygrad is to build machine learning ASICs. But we start with the software, not with the hardware, because I don’t want to end up like Dojo.
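
A schematic illustration of the multiply-versus-add point in generic Python (not tinygrad’s actual internals): if every (op, dtype, device) combination needs its own hand-written kernel, the kernel count is the product of the three axes; if each axis is described once and composed generically, the work grows as their sum.

```python
OPS = ["add", "mul", "relu", "sum"]
DTYPES = ["float32", "float16", "int8"]
DEVICES = ["cpu", "cuda", "metal"]

# One hand-written kernel per combination -> the axes multiply:
handwritten_kernels = len(OPS) * len(DTYPES) * len(DEVICES)  # 36

# Describe each op, dtype, and device once, compose generically -> they add:
generic_definitions = len(OPS) + len(DTYPES) + len(DEVICES)  # 10

def compile_kernel(op: str, dtype: str, device: str) -> str:
    """Toy codegen: any combination is derivable from the generic pieces."""
    assert op in OPS and dtype in DTYPES and device in DEVICES
    return f"__{device}_kernel_{op}_{dtype}"

print(handwritten_kernels, generic_definitions)
print(compile_kernel("relu", "int8", "metal"))  # no 37th handwritten kernel needed
```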

Jim: Has anybody taken up using it at this point? Because that’s obviously where the rubber meets the road on machine learning frameworks.

George: So yeah. It’s actually used in openpilot; it’s used in openpilot to run the model on the device. There’s a whole bunch of people using it for what look like similar use cases to that. It’s very good at the embedded, weird… It’s very easy to port a new system, to port a new kind of accelerator, to tinygrad. You can deploy it on microcontrollers. There’s a whole bunch of people who’ve deployed it in those settings. The big settings, like training huge models on NVIDIA, not yet, but we’re working on it.

Jim: Okay, very cool. Folks who are interested in those categories, probably 5% of my listeners, go check it out at tinygrad.com. Is that your website?

George: Org.

Jim: Org?

George: tinygrad.org, yeah.

Jim: And then the self-driving car stuff is at comma.ai. That one I know. Check these things out. I really want to thank George Hotz for a heck of an interesting conversation here today.

George: Thank you for having me.

Jim: It’s been great.