
George Hotz: Hacking the Simulation & Learning to Drive with Neural Nets | Lex Fridman Podcast #132

3 hours 8 minutes 45 seconds


S1

Speaker 1

00:00

The following is a conversation with George Hotz, AKA GeoHot, his second time on the podcast. He's the founder of Comma AI, an autonomous and semi-autonomous vehicle technology company that seeks to be to Tesla Autopilot what Android is to iOS. They sell the Comma two device for $1,000 that, when installed in many of their supported cars, can keep the vehicle centered in the lane even when there are no lane markings. It includes driver sensing that ensures that the driver's eyes are on the road.

S1

Speaker 1

00:35

As you may know, I'm a big fan of driver sensing. I do believe Tesla Autopilot and others should definitely include it in their sensor suite. Also, I'm a fan of Android and a big fan of George for many reasons, including his nonlinear out-of-the-box brilliance and the fact that he's a superstar programmer of a very different style than myself. Styles make fights and styles make conversations.

S1

Speaker 1

01:01

So I really enjoyed this chat, and I'm sure we'll talk many more times on this podcast. Quick mention of each sponsor, followed by some thoughts related to the episode. First is Four Sigmatic, the maker of delicious mushroom coffee. Second is Decoding Digital, a podcast on tech and entrepreneurship that I listen to and enjoy.

S1

Speaker 1

01:22

And finally, ExpressVPN, the VPN I've used for many years to protect my privacy on the internet. Please check out the sponsors in the description to get a discount and to support this podcast. As a side note, let me say that my work at MIT on autonomous and semi-autonomous vehicles led me to study the human side of autonomy enough to understand that it's a beautifully complicated and interesting problem space, much richer than what can be studied in the lab. In that sense, the data that Comma AI, Tesla Autopilot, and perhaps others like Cadillac Super Cruise are collecting gives us a chance to understand how we can design safe semi-autonomous vehicles for real human beings in real-world conditions.

S1

Speaker 1

02:07

I think this requires bold innovation and a serious exploration of the first principles of the driving task itself. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman. And now, here's my conversation with George Hotz.

S2

Speaker 2

02:31

So last time we started talking about the simulation. This time let me ask you, do you think there's intelligent life out there in the universe?

S3

Speaker 3

02:38

I've always maintained my answer to the Fermi paradox. I think there has been intelligent life elsewhere in the universe.

S2

Speaker 2

02:45

So intelligent civilizations existed, but they've blown themselves up. So your general intuition is that intelligent civilizations die off quickly; like, there's that parameter in the Drake equation. Your sense is they don't last very long. Yeah.
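As an editorial aside: the Drake equation parameter being referenced is L, the average lifetime of a communicating civilization. A minimal sketch (all numerical values here are illustrative assumptions, not measurements) shows why a short L predicts an empty-looking sky:

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    # N = R* x fp x ne x fl x fi x fc x L: the expected number of
    # civilizations in the galaxy whose signals we could detect right now.
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative values only; with everything else fixed, N is directly
# proportional to lifetime, the "they don't last very long" parameter.
base = dict(r_star=1.0, f_p=0.5, n_e=2.0, f_l=0.5, f_i=0.1, f_c=0.1)
short = drake(**base, lifetime=100)        # civilizations that blow up fast
long_ = drake(**base, lifetime=1_000_000)  # civilizations that persist
```

Since N scales linearly in L, a galaxy that looks empty is exactly what a small L predicts.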

S2

Speaker 2

03:00

How are we doing on that? Like, have we lasted pretty good? Are we due?

S3

Speaker 3

03:04

Oh, yeah. I mean, not quite yet. Well, there's that Yudkowsky line.

S3

Speaker 3

03:11

The IQ required to destroy the world falls by one point every year.

S2

Speaker 2

03:15

Okay, so technology democratizes the destruction of the world.

S3

Speaker 3

03:21

When can a meme destroy the world?

S2

Speaker 2

03:25

It kind of is already, right?

S3

Speaker 3

03:27

Somewhat. I don't think we've seen anywhere near the worst of it yet. Well, it's gonna get weird.

S2

Speaker 2

03:33

Well, maybe a meme can save the world. You thought about that? The meme Lord Elon Musk fighting on the side of good versus the meme Lord of the darkness, which is not saying anything bad about Donald Trump, but he is the lord of the meme on the dark side.

S2

Speaker 2

03:51

He's a Darth Vader of memes.

S3

Speaker 3

03:53

I think in every fairy tale, they always end it with, and they lived happily ever after. And I'm like, please tell me more about this happily ever after. I've heard 50% of marriages end in divorce.

S3

Speaker 3

04:05

Why doesn't your marriage end up there? You can't just say happily ever after. So the thing about destruction is it's over after the destruction. We have to do everything right in order to avoid it.

S3

Speaker 3

04:18

And one thing wrong... I mean, actually, this is what I really like about cryptography. With cryptography, it seems like we live in a world where the defense wins, whereas with nuclear weapons, the opposite is true. It is much easier to build a warhead that splits into 100 little warheads than to build something that can, you know, take out 100 little warheads. The offense has the advantage there.

S3

Speaker 3

04:41

So maybe our future is in crypto, but...

S2

Speaker 2

04:44

So in cryptography, right, the Goliath is the defense, and then all the different hackers are the Davids.

S2

Speaker 2

04:54

And that equation is flipped for nuclear war. Because there's so many... like, one nuclear weapon destroys everything, essentially.

S3

Speaker 3

05:02

Yeah. And it is much easier to attack with a nuclear weapon than to defend: the technology required to intercept and destroy a rocket is much more complicated than the technology required to just, you know, put a rocket on an orbital trajectory and send it to somebody.

S2

Speaker 2

05:17

So, okay, your intuition that there were intelligent civilizations out there, but it's very possible that they're no longer there. It's kind of a sad picture.

S3

Speaker 3

05:27

They enter some steady state. They all wirehead themselves.

S2

Speaker 2

05:31

What's wirehead?

S3

Speaker 3

05:33

Stimulate their pleasure centers. And just live forever in this kind of stasis. Well, I think the reason I believe this is because where are they?

S3

Speaker 3

05:46

If there's some reason they stopped expanding. Because otherwise they would have taken over the universe. The universe isn't that big. Or at least, you know, let's just talk about the galaxy, right, 70,000 light years across.

S3

Speaker 3

05:58

I took that number from Star Trek Voyager, I don't know how true it is. But yeah, that's not big, right? 70,000 light years is nothing.

S2

Speaker 2

06:07

For some possible technology that you can imagine that can leverage like wormholes or something like that.

S3

Speaker 3

06:12

Oh, you don't even need wormholes. Just a von Neumann probe is enough. A Von Neumann probe and a million years of sub-light travel and you'd have taken over the whole universe.

S3

Speaker 3

06:20

That clearly didn't happen. So something stopped it.

S2

Speaker 2

06:23

So you mean if you, right, for like a few million years, if you sent out probes that travel close, what's sub-light? You mean close to the speed of light?

S3

Speaker 3

06:32

Let's say 0.1c.

S2

Speaker 2

06:33

And it just spreads. Interesting. Actually, that's an interesting calculation.
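For reference, the calculation being gestured at here is one line, using the 70,000-light-year figure quoted above and the 0.1c probe speed:

```python
GALAXY_DIAMETER_LY = 70_000  # the Star Trek Voyager figure quoted above
PROBE_SPEED_C = 0.1          # a tenth of light speed, no wormholes needed

# Light covers one light-year per year, so transit time in years is
# distance divided by the speed as a fraction of c.
crossing_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C  # roughly 700,000 years
```

Even ignoring self-replication, a single probe crosses the galaxy well inside the "few million years" mentioned, which is the force of the "where are they?" argument.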

S2

Speaker 2

06:38

So what makes you think that we'd be able to communicate with them? Why do you think we would be able to comprehend intelligent lives that are out there? Like, even if they were among us, kind of thing, or even just flying around?

S3

Speaker 3

06:57

Well, I mean, that's possible. It's possible that there is some sort of prime directive. That'd be a really cool universe to live in, and there's some reason they're not making themselves visible to us. But it makes sense that they would use the same, well, at least the same entropy.

S2

Speaker 2

07:16

Well, you're implying the same laws of physics. I don't know what you mean by entropy in this case.

S3

Speaker 3

07:21

Oh, yeah. I mean, if entropy is the scarce resource in the universe.

S2

Speaker 2

07:24

So what do you think about, like, Stephen Wolfram and everything is a computation? What if they are traveling through this world of computation? So if you think of the universe as just information processing, then what you're referring to with entropy, and these pockets of interesting, complex computation swimming around... how do we know they're not already here? How do we know that all the different amazing things that are full of mystery on Earth are not just little footprints of intelligence from light years away?

S2

Speaker 2

08:01

Maybe. I mean, I tend to think that as civilizations expand, they use more and more energy, and you can never overcome the problem of waste heat.

S3

Speaker 3

08:10

So where is their waste heat?

S2

Speaker 2

08:11

So we'd be able to, with our crude methods, see, like, there's a whole lot of energy here. But it could be something we're not... I mean, we don't understand dark energy, right? Dark matter. It could be just stuff we don't understand at all. Or they could have a fundamentally different physics, you know, like that we just don't even

S3

Speaker 3

08:31

comprehend. Well, I think, okay, I mean, it depends how far out you wanna go. I don't think physics is very different on the other side of the galaxy. I would suspect that they have, I mean, if they're in our universe, they have the same physics.

S2

Speaker 2

08:45

Well, yeah, that's the assumption we have, but there could be, like, super trippy things. Like, our cognition only gets to a slice, and all the possible instruments that we can design only get to a particular slice of the universe. And there's something much weirder.

S3

Speaker 3

09:03

Maybe we can try a thought experiment. Would people from the past be able to detect the remnants of our, would we be able to detect our modern civilization? I think the answer is obviously yes.

S2

Speaker 2

09:18

You mean past from 100 years ago?

S3

Speaker 3

09:20

Well, let's even go back further. Let's go to a million years ago. The humans who were lying around in the desert probably didn't even have, maybe they just barely had fire.

S3

Speaker 3

09:31

They would understand if a 747 flew overhead.

S2

Speaker 2

09:35

Oh, in this vicinity, but not if a 747 flew on Mars. Because they wouldn't be able to see far. Because we're not actually communicating that well with the rest of the universe.

S2

Speaker 2

09:48

We're doing OK, just sending out random 50s tracks of music.

S3

Speaker 3

09:54

True. And yeah, I mean, they'd have to, you know, we've only been broadcasting radio waves for 150 years. And well, there's your light cone.

S2

Speaker 2

10:05

Yeah, OK. What do you make of all the... I recently came across this, having talked to David Fravor. I don't know if you caught the videos that the Pentagon released and the New York Times reporting of the UFO sightings.

S2

Speaker 2

10:23

So I kind of looked into it, quote unquote, and there's actually been, like, hundreds of thousands of UFO sightings, right? And a lot of it you can explain away in different kinds of ways. So one is, it could be interesting physical phenomena. Two, it could be people wanting to believe, and therefore they conjure up a lot of different things that just, you know... when you see different kinds of lights, some basic physics phenomena, and then you conjure up ideas of possible, out there, mysterious worlds.

S2

Speaker 2

10:56

But, you know, it's also possible, like you have the case of David Fravor, who is a Navy pilot, who's as legit as it gets in terms of humans who are able to perceive things in the environment and make conclusions about whether those things are a threat or not. And he and several other pilots saw a thing, I don't know if you followed this, but they saw a thing that they've since then called the Tic Tac, that moved in all kinds of weird ways. They don't know what it is. It could be technology developed by the United States, and they're just not aware of it on the surface level from the Navy, right?

S2

Speaker 2

11:39

It could be different kind of lighting technology or drone technology, all that kind of stuff. It could be the Russians and the Chinese, all that kind of stuff. And of course, their mind, our mind, can also venture into the possibility that it's from another world. Have you looked into this at all?

S2

Speaker 2

11:58

What do you think about it?

S3

Speaker 3

11:59

I think all the news is a psyop. I think that the most plausible...

S2

Speaker 2

12:04

Nothing is real.

S3

Speaker 3

12:06

Yeah, I listened to the, I think it was Bob Lazar on Joe Rogan. And like, I believe everything this guy is saying. And then I think that it's probably just some like MK Ultra kind of thing, you know?

S2

Speaker 2

12:20

What do you mean? Like they...

S3

Speaker 3

12:23

You know, they made some weird thing and they called it an alien spaceship. Maybe it was just to, like, stimulate young physicists' minds. We'll tell them it's alien technology and we'll see what they come up with.

S2

Speaker 2

12:33

Do you find any conspiracy theories compelling? Like have you pulled at the string of the rich, complex world of conspiracy theories that's out there?

S3

Speaker 3

12:43

I think... I've heard a conspiracy theory that conspiracy theories were invented by the CIA in the 60s to discredit true things. So you can go to ridiculous conspiracy theories like flat Earth and Pizzagate, and these things are almost there to hide, like, conspiracy theories that... you know, remember when the Chinese locked up the doctors who discovered the coronavirus? Like, I tell people this and I'm like, no, no, that's not a conspiracy theory.

S3

Speaker 3

13:14

That actually happened. Do you remember the time that the money used to be backed by gold and now it's backed by nothing? This is not a conspiracy theory. This actually happened.

S2

Speaker 2

13:23

Well, that's one of my worries today with the idea of fake news: when nothing is real, then, like, you dilute the possibility of anything being true by conjuring up all kinds of conspiracy theories. And then you don't know what to believe. And then, like, the idea of truth, of objectivity, is lost completely.

S2

Speaker 2

13:47

Everybody has their own truth.

S3

Speaker 3

13:50

So you used to control information by censoring it. Then the internet happened and governments are like, oh shit, we can't censor things anymore. I know what we'll do.

S3

Speaker 3

14:00

You know, it's the old story of the leprechaun: he tells you where his gold is buried, you tie one flag there and make the leprechaun swear to not remove the flag, and you come back to the field later with a shovel and there are flags everywhere.

S2

Speaker 2

14:14

That's one way to maintain privacy, right? It's like, in order to protect the contents of this conversation, for example, we could just generate millions of deepfake conversations where you and I talk and say random things. So this is just one of them, and nobody knows which one was the real one.
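The decoy scheme described here resembles Rivest's "chaffing and winnowing": publish the real transcript among fakes, tagging only the real one with a MAC that a key holder can verify. A toy sketch (function names and example strings are invented for illustration):

```python
import hashlib
import hmac
import os
import random

def tag(message: str, key: bytes) -> bytes:
    # HMAC over the message; only a key holder can recompute this.
    return hmac.new(key, message.encode(), hashlib.sha256).digest()

def publish(real: str, decoys: list[str], key: bytes) -> list[tuple[str, bytes]]:
    # The real transcript gets a valid tag, decoys get random bytes,
    # and everything is shuffled: to an outsider, all entries look alike.
    entries = [(real, tag(real, key))] + [(d, os.urandom(32)) for d in decoys]
    random.shuffle(entries)
    return entries

def winnow(entries: list[tuple[str, bytes]], key: bytes) -> list[str]:
    # The key holder keeps only the entries whose tag verifies.
    return [m for m, t in entries if hmac.compare_digest(t, tag(m, key))]
```

Nothing is encrypted; the real message hides in plain sight among the chaff, which matches the "nobody knows which one was the real one" framing.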

S2

Speaker 2

14:33

This could be fake right now. Classic steganography technique. Okay, another absurd question about intelligent life. Because you're an incredible programmer, outside of everything else we'll talk about, just as a programmer.

S2

Speaker 2

14:49

Do you think intelligent beings out there, the civilizations that were out there had computers and programming? Did they, do we naturally have to develop something where we engineer machines and are able to encode both knowledge into those machines and instructions that process that knowledge, process that information to make decisions and actions and so on. And would those programming languages, if you think they exist, be at all similar to anything we've developed?

S3

Speaker 3

15:24

So I don't see that much of a difference between quote unquote natural languages and programming languages. I think there's so many similarities. So when asked the question, what do alien languages look like?

S3

Speaker 3

15:42

I imagine they're not all that dissimilar from ours. And I think translating in and out of them wouldn't be that crazy.

S2

Speaker 2

15:52

It's difficult to compile, like, DNA to Python and then to C. There is a little bit of a gap between the kind of languages we use for Turing machines and the kind of languages nature seems to use. Maybe that's just because we haven't understood the kind of language that nature uses well yet.

S3

Speaker 3

16:16

DNA is a CAD model. It's not quite a programming language. It has no sort of serial execution.

S3

Speaker 3

16:25

It's not quite a... Yeah, it's a CAD model. So I think In that sense, we actually completely understand it. The problem is, you know, well, simulating on these CAD models.

S3

Speaker 3

16:37

I played with it a bit this year. It is super computationally intensive. If you want to go down to, like, the molecular level, where you need to go to see a lot of these phenomena, like protein folding... So yeah, it's not that we don't understand it, it just requires a whole lot of compute to kind of compile it.

S2

Speaker 2

16:54

For our human minds, it's inefficient both for the data representation and for the programming.

S3

Speaker 3

17:00

Yeah, it runs well on raw nature. It runs well on raw nature. And when we try to build emulators or simulators for that, well, they're mad slow.

S3

Speaker 3

17:08

And I've tried it.

S2

Speaker 2

17:10

It runs in... yeah, you've commented elsewhere, I don't remember where, that one of the problems is that simulating nature is tough. And if you want to sort of deploy a prototype, I forgot how you put it, but it made me laugh: animals or humans would need to be involved in order to try to run some prototype code. Like, if we're talking about COVID and viruses and so on, if you were trying to engineer some kind of defense mechanism, like a vaccine against COVID and all that kind of stuff, doing any kind of experimentation, like you can with autonomous vehicles, would be very technically and ethically costly.

S3

Speaker 3

17:59

I'm not sure about that. I think you can do tons of crazy biology in test tubes. I think my bigger complaint is more, oh, the tools are so bad.

S2

Speaker 2

18:11

Like, literally? You mean like libraries and...

S3

Speaker 3

18:14

I'm not pipetting shit, like you're handing me a, I gotta, no. No, no, there has to be some.

S2

Speaker 2

18:22

Like automating stuff and like the, yeah but human biology is messy. Like it seems to be.

S3

Speaker 3

18:28

Well, but like, look at those Theranos videos, they were a joke. It's like a little XY gantry, a high school science project with a pipette. I'm like, really? You can't build, like, nice microfluidics, and I can program the computation-to-bio interface?

S3

Speaker 3

18:45

I mean, this is gonna happen. But like right now, if you are asking me to pipette 50 milliliters of solution, I'm out. This is so crude.

S2

Speaker 2

18:55

Yeah. OK, let's get all the crazy out of the way. So a bunch of people asked me, since we talked about the simulation last time, we talked about hacking the simulation. Do you have any updates, any insights about how we might be able to go about hacking simulation if we indeed do live in a simulation?

S3

Speaker 3

19:17

I think a lot of people misinterpreted the point of that South by Southwest talk. The point of the talk was not literally to hack the simulation. I think this idea is literally just theoretical physics.

S3

Speaker 3

19:34

I think that's the whole goal, right? You want your grand unified theory, but then, okay, build a grand unified theory, search for exploits, right? I think we're nowhere near actually there yet. My hope with that was just more to like, are you people kidding me with the things you spend time thinking about?

S3

Speaker 3

19:54

Do you understand how small you are? You are bytes in God's computer, really? The things that people get worked up about, you know?

S2

Speaker 2

20:06

So basically, it was more a message of, we should humble ourselves. That we're, like... what are we humans in this bytecode?

S3

Speaker 3

20:19

Yeah, and not just humble ourselves, but like I'm not trying to make people guilty or anything like that. I'm trying to say like literally, look at what you are spending time on,

S2

Speaker 2

20:29

right? What are you referring to? Are you referring to the Kardashians? What are we talking about?

S2

Speaker 2

20:34

Twitter?

S3

Speaker 3

20:34

I'm referring to, no, the Kardashians, everyone knows that's kind of fun. I'm referring more to like the economy. You know, this idea that we gotta up our stock price.

S3

Speaker 3

20:50

Or what is the goal function of humanity?

S2

Speaker 2

20:55

You don't like the game of capitalism? Like you don't like the games we've constructed for ourselves as humans?

S3

Speaker 3

21:00

I'm a big fan of capitalism. I don't think that's really the game we're playing right now. I think we're playing a different game where the rules are rigged.

S2

Speaker 2

21:08

Okay, which games are interesting to you that we humans have constructed, and which aren't? Which are productive and which are not?

S3

Speaker 3

21:18

Actually, maybe that's the real point of the talk. It's like, stop playing these fake human games. There's a real game here.

S3

Speaker 3

21:26

We can play the real game. The real game is, you know, Nature wrote the rules. This is a real game. There still is a game to play.

S2

Speaker 2

21:34

But if you look at, sorry to interrupt, I don't know if you've seen the Instagram account Nature is Metal. The game that nature seems to be playing is a lot more cruel than we humans want to put up with, or at least we see it as cruel. It's like the bigger thing eats the smaller thing, and does it to impress another big thing so it can mate with that thing. And that's it.

S2

Speaker 2

22:01

That seems to be the entirety of it. Well, there's no art, there's no music, there's no Comma AI, there's no Comma one, no Comma two, no George Hotz with his brilliant talks at South by Southwest.

S3

Speaker 3

22:17

I disagree, though. I disagree that this is what nature is. I think nature just provided basically an open-world MMORPG.

S3

Speaker 3

22:28

Here it's open-world. I mean, if that's the game you want to play, you can play that game.

S2

Speaker 2

22:32

But isn't that beautiful? I don't know if you played Diablo. They used to have, I think, a cow level, where everybody would go. They figured out that, like, the best way to gain experience points is to just slaughter cows over and over and over.

S2

Speaker 2

22:53

And so they figured out this little sub-game within the bigger game, that this is the most efficient way to get experience points. Everybody somehow agreed that getting experience points in an RPG context, where you always want to be getting more stuff, more skills, more levels, keep advancing, seems to be good. So you might as well sacrifice actual enjoyment of playing the game, exploring a world, and spend hundreds of hours of your time in the cow level. I mean, with the number of hours I spent in the cow level, I'm not, like, the most impressive person, because people have probably spent thousands of hours there, but it's ridiculous. So that's a little absurd game that brought me joy in some weird dopamine-drug kind of way.

S2

Speaker 2

23:38

So you don't like those games. You don't think that's us humans feeling the nature. And that was the point of the talk.

S3

Speaker 3

23:49

Yeah. So

S2

Speaker 2

23:50

how do we hack it then?

S3

Speaker 3

23:51

Well, I want to live forever and.

S2

Speaker 2

23:53

Wait, what?

S3

Speaker 3

23:54

Well, I want to live forever. And this is a... what's the goal?

S3

Speaker 3

23:56

Well, that's a game against nature.

S2

Speaker 2

23:59

Yeah. Immortality is a good objective function to you?

S3

Speaker 3

24:03

I mean, start there and then you can do whatever else you want because you've got a long time.

S2

Speaker 2

24:07

What if immortality makes the game just totally not fun? I mean, like, why do you assume immortality is somehow...

S3

Speaker 3

24:15

It's not...

S2

Speaker 2

24:16

...a good objective function?

S3

Speaker 3

24:18

It's not immortality that I want. A true immortality where I could not die, I would prefer what we have right now. But I want to choose my own death, of course.

S3

Speaker 3

24:27

I don't want nature to decide when I die. I'm going to win. I'm going to beat you.

S2

Speaker 2

24:33

And then at some point, if you choose to commit suicide, how long do you think you'd live?

S3

Speaker 3

24:40

Until I get bored.

S2

Speaker 2

24:43

See, I don't think people... like, brilliant people like you, that really ponder living a long time, are really considering how meaningless life becomes.

S3

Speaker 3

24:58

Well, I want to know everything and then I'm ready to die.

S2

Speaker 2

25:03

But why do you want... isn't it possible that you want to know everything because it's finite? The reason you want to know, quote unquote, everything is because you don't have enough time to know everything. And once you have unlimited time, then you realize, why do anything?

S2

Speaker 2

25:22

Like, why learn anything?

S3

Speaker 3

25:24

I want to know everything, and then I'm ready to die. So you have, yeah, okay. It's not a, it's a terminal value.

S3

Speaker 3

25:31

It's not in service of anything else.

S2

Speaker 2

25:34

I'm conscious of the possibility, this is not a certainty, but the possibility that that engine of curiosity that you're speaking to is actually a symptom of the finiteness of life. Without that finiteness, your curiosity would vanish like a morning fog.

S3

Speaker 3

25:56

All right, cool.

S1

Speaker 1

25:57

Bukowski talked about love like that.

S3

Speaker 3

25:59

Then let me solve immortality, and let me change the thing in my brain that reminds me that I'm immortal to tell me that life is finite. Maybe I'll have it tell me that life ends next week. I'm okay with some self-manipulation like that. I'm okay with deceiving myself.

S2

Speaker 2

26:14

Oh, changing the code.

S3

Speaker 3

26:16

Yeah, if that's the problem, if the problem is that I will no longer have that curiosity, I'd like to have backup copies of myself. Revert, yeah. Copies which I check in with occasionally to make sure they're okay with the trajectory, and they can override it.

S3

Speaker 3

26:30

Maybe something nice, like those WaveNets, logarithmic: go back through the copies.
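One way to read the "logarithmic" remark, offered here as an interpretation rather than his actual design: keep backup copies at exponentially spaced ages, the way WaveNet's dilations double, so n steps of history need only about log2(n) snapshots:

```python
def checkpoint_ages(now: int) -> list[int]:
    # Timestamps (in arbitrary steps) of the copies to keep: one at each
    # power-of-two age back from the present, so coverage spans a long
    # history with only O(log n) copies to check in with.
    kept, age = [], 1
    while age <= now:
        kept.append(now - age)
        age *= 2
    return kept

print(checkpoint_ages(100))  # [99, 98, 96, 92, 84, 68, 36]
```

Recent copies are dense (fine-grained reverts), old ones sparse (coarse resets), which fits the idea of occasionally consulting a logarithmic trail of past selves.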

S2

Speaker 2

26:34

Yeah, but sometimes it's not reversible. Like, I've done this with video games. Once you figure out the cheat code, or you look up how to cheat old school, like single player, it ruins the game for you.

S3

Speaker 3

26:46

Absolutely, I know that feeling. But again, that just means our brain manipulation technology is not good enough yet. Remove that cheat code from your brain.

S2

Speaker 2

26:54

Here you go. So it's also possible that if we figure out immortality, that all of us will kill ourselves before we advance far enough to be able to revert the change.

S3

Speaker 3

27:08

I'm not killing myself till I know everything, so.

S2

Speaker 2

27:11

That's what you say now because your life is finite.

S3

Speaker 3

27:15

You know, I think, yes, self-modifying systems come with all these hairy complexities. And can I promise that I'll do it perfectly? No, but I think I can put good safety structures in place.

S2

Speaker 2

27:27

So that talk in your thinking here is not literally referring to a simulation in that our universe is a kind of computer program running in a computer. It's more of a thought experiment. Do you also think of the potential of the sort of Bostrom, Elon Musk, and others that talk about an actual program that simulates our universe?

S3

Speaker 3

27:59

Oh, I don't doubt that we're in a simulation. I just think that it's not quite that important. I mean, I'm interested only in simulation theory as far as like it gives me power over nature.

S3

Speaker 3

28:09

If it's totally unfalsifiable, then who cares?

S2

Speaker 2

28:12

I mean, what do you think that experiment would look like? Somebody on Twitter asked: ask George what signs we would look for to know whether or not we're in the simulation. Which is exactly what you're asking: the step that precedes the step of knowing how to get more power from this knowledge is to get an indication that there's some power to be gained. So, get an indication that you can discover and exploit cracks in the simulation.

S2

Speaker 2

28:42

Or it doesn't have to be in the physics of the universe.

S3

Speaker 3

28:45

Yeah. Show me, I mean, like a memory leak could be cool.

S2

Speaker 2

28:51

Some scrying technology. What kind of technology? Scrying.

S2

Speaker 2

28:56

What's that?

S3

Speaker 3

28:56

Oh, that's a weird one. Scrying is the paranormal ability of remote viewing, like being able to see somewhere where you're not. So, you know, I don't think you can do it by chanting in a room, but if we could find... it's a memory leak, basically.

S3

Speaker 3

29:16

It's a memory leak.

S2

Speaker 2

29:16

Yeah, you're able to access parts you're not supposed to. And thereby discover a shortcut.

S3

Speaker 3

29:22

Yeah, memory leak means the other thing as well, but I mean like, yeah, like an ability to read arbitrary memory. Right? And that one's not that horrifying.

S3

Speaker 3

29:30

The write ones start to be horrifying.

S2

Speaker 2

29:31

Read and write, right. So the reading is not the problem.

S3

Speaker 3

29:34

Yeah, it's like Heartbleed for the universe.
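Heartbleed (CVE-2014-0160) was precisely an arbitrary-read bug: the server echoed back as many bytes as the client claimed its heartbeat payload contained, without checking. A toy model of the flaw (the buffer contents are invented for illustration):

```python
# The heartbeat payload sits in a buffer next to unrelated server secrets.
SERVER_MEMORY = b"bird" + b"|session_key=hunter2|more secrets..."

def heartbeat(claimed_len: int) -> bytes:
    # BUG: trusts the client's claimed length instead of the actual
    # 4-byte payload, so a large value leaks the adjacent memory.
    return SERVER_MEMORY[:claimed_len]

honest = heartbeat(4)  # b'bird': the request behaves as intended
leak = heartbeat(40)   # also returns the secrets beyond the payload
```

The fix in OpenSSL was exactly the missing bounds check: reject heartbeats whose claimed length exceeds the payload actually received.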

S2

Speaker 2

29:37

Oh boy, the writing is a big, big problem. It's a big problem. It's the moment you can write anything, even if it's just random noise.

S2

Speaker 2

29:47

That's terrifying.

S3

Speaker 3

29:49

I mean, even without that, like even some of the nanotech stuff that's coming, I think is...

S2

Speaker 2

29:56

I don't know if you're paying attention, but actually Eric Weinstein came out with a theory of everything. I mean, that came out. He's been working on a theory of everything in the physics world called Geometric Unity.

S2

Speaker 2

30:08

And then for me, from a computer science person like you, Stephen Wolfram's theory of everything, of, like, hypergraphs, is super interesting and beautiful, not from a physics perspective, but from a computational perspective. I don't know, have you paid attention to any of that?

S3

Speaker 3

30:23

So again, like, what would make me pay attention, and why I hate string theory: okay, make a testable prediction, right? I'm not interested in theories for their intrinsic beauty, I'm interested in theories that give me power over the universe. So if these theories do, I'm very interested.

S2

Speaker 2

30:42

Can I just say how beautiful that is? Because a lot of physicists say, I'm interested in experimental validation. And they leave out the part where they say, to give me more power over the universe.

S2

Speaker 2

30:55

I just love the clarity of that.

S3

Speaker 3

30:59

I want 100 gigahertz processors. I want transistors that are smaller than atoms. I want power.

S2

Speaker 2

31:09

That's true, and that's where people... from aliens to this kind of technology, people are worried about who owns that power. Is it George Hotz? Is it thousands of distributed hackers across the world?

S2

Speaker 2

31:24

Is it governments? You know, is it Mark Zuckerberg? There's a lot of people, and I don't know if anyone trusts any one individual with power. So they're always worried.

S3

Speaker 3

31:37

It's the beauty of blockchains.

S2

Speaker 2

31:39

That's the beauty of blockchains, which we'll talk about. On Twitter, a bunch of people pointed me to a story a few months ago where you went into a restaurant in New York, and you can correct me if I'm wrong, and ran into a bunch of folks from a crypto company who are trying to scale up Ethereum. And they had a technical deadline related to a Solidity-to-OVM compiler.

S2

Speaker 2

32:07

So these are all Ethereum technologies. They recognized you, pulled you aside, explained their problem, and you stepped in and helped them solve it, thereby creating a legend-status story. Can you tell me the story in a little more detail? It seems kind of incredible.

S2

Speaker 2

32:31

Did this happen?

S3

Speaker 3

32:32

Yeah, yeah, it's a true story. It's a true story. I mean, they wrote a very flattering account of it.

S3

Speaker 3

32:40

So the company's called Optimism, a spin-off of Plasma. They're trying to build L2 solutions on Ethereum. So right now, every Ethereum node has to run every transaction on the Ethereum network. And this kind of doesn't scale, right?

S3

Speaker 3

32:58

Because if you have N computers, well, you know, if that becomes 2N computers, you actually still get the same amount of compute, right? This is like O(1) scaling, because they all have to run it.

S3

Speaker 3

33:10

Okay, fine, you get more blockchain security, but, like, the blockchain's already so secure. Can we trade some of that off for speed? So that's kind of what these L2 solutions are. They built this thing which is kind of a sandbox for Ethereum contracts, so they can run in this L2 world and can't do certain things that they could in L1.

S2

Speaker 2

33:30

Can I ask you for some definitions? What's L2?

S3

Speaker 3

33:31

Oh, L2 is layer 2.

S3

Speaker 3

33:34

So L1 is like the base Ethereum chain. And then layer 2 is like a computational layer that runs elsewhere, but still is kind of secured by layer 1.

S2

Speaker 2

33:47

And I'm sure a lot of people know, but Ethereum is a cryptocurrency, probably 1 of the most popular cryptocurrencies, second to Bitcoin. And a lot of interesting technological innovations there. Maybe you could also slip in, whenever you talk about this, any things that are exciting to you in the Ethereum space?

S2

Speaker 2

34:06

And why Ethereum?

S3

Speaker 3

34:07

Well, I mean, Bitcoin is not Turing complete. Ethereum is not technically Turing complete either, with the gas limit, but it's close enough.

S2

Speaker 2

34:15

With a gas limit? What's the gas limit?

S3

Speaker 3

34:17

Resources? Yeah, I mean, no computer's actually Turing complete. Right. You've got finite RAM, you know?

S3

Speaker 3

34:24

You can't actually solve the halting problem.

S2

Speaker 2

34:25

What's the word gas limit? You have so many brilliant words.

S3

Speaker 3

34:34

I'm not even going to ask. No, no, that's not my word. That's Ethereum's word.

S3

Speaker 3

34:34

Gas limit. Ethereum, you have to spend gas per instruction. So different opcodes use different amounts of gas. And you buy gas with ether to prevent people from basically DDoSing the network.
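The gas mechanism he's describing can be sketched in a few lines. The opcode costs below are made up for illustration (the real fee schedule lives in the Ethereum yellow paper), but the shape is right: every instruction burns gas, and execution halts when the limit runs out, which is both the anti-DDoS mechanism and the reason Ethereum isn't quite Turing complete.

```python
# Toy sketch of Ethereum-style gas metering. Opcode costs are invented
# for illustration, not the real fee schedule.

GAS_COST = {"ADD": 3, "MUL": 5, "PUSH": 3, "SSTORE": 20_000}

def run(program, gas_limit):
    gas = gas_limit
    for op in program:
        cost = GAS_COST[op]
        if cost > gas:
            return "out of gas"          # execution halts
        gas -= cost
    return f"ok, {gas_limit - gas} gas used"

print(run(["PUSH", "PUSH", "ADD"], gas_limit=100))   # ok, 9 gas used
print(run(["PUSH", "SSTORE"], gas_limit=100))        # out of gas
```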

S2

Speaker 2

34:42

So Bitcoin is proof of work. And then what's Ethereum?

S3

Speaker 3

34:47

It's also proof of work. They're working on some proof of stake Ethereum 2.0 stuff. But right now it's proof of work.

S3

Speaker 3

34:52

It uses a different hash function from Bitcoin that's more ASIC resistant, because you need RAM.

S2

Speaker 2

34:57

So we're all talking about Ethereum 1.0. So what were they trying to do to scale this whole process?

S3

Speaker 3

35:03

So they were like, well, if we could run contracts elsewhere and then only save the results of that computation, you know, well, we don't actually have to do the compute on the chain. We can do the compute off chain and just post what the results are. Now, the problem with that is, well, somebody could lie about what the results are.

S3

Speaker 3

35:21

So you need a resolution mechanism. And the resolution mechanism can be really expensive because, you know, you just have to make sure that, like, the person who is saying, look, I swear that this is the real computation. I'm staking $10,000 on that fact. And if you prove it wrong, yeah, it might cost you $3,000 in gas fees to prove wrong, but you'll get the $10,000 bounty.
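The dispute game just described can be modeled in a few lines. The numbers match the example above, but the function names and payoff structure are illustrative, not Optimism's actual protocol.

```python
# Toy model of an optimistic dispute: post a claimed result with a bond;
# anyone can re-run the computation and, if the claim is wrong, spend
# some gas to prove it and take the bond.

def compute(x):
    # the computation both sides can run off-chain
    return x * x

def settle(claimed, x, bond=10_000, challenge_cost=3_000):
    actual = compute(x)
    if claimed == actual:
        return {"asserter": 0, "challenger": 0}       # honest claim stands
    # fraud: challenger pays gas to prove it wrong and wins the bond
    return {"asserter": -bond, "challenger": bond - challenge_cost}

print(settle(claimed=49, x=7))   # honest: no money moves
print(settle(claimed=50, x=7))   # fraud: challenger nets 7,000
```

The economics only work if the bond exceeds the cost of proving fraud, which is exactly the $10,000 versus $3,000 gap in the example.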

S3

Speaker 3

35:44

So you can secure using those kind of systems. So it's effectively a sandbox which runs contracts and like, just like any kind of normal sandbox, you have to like replace syscalls with, you know, calls into the hypervisor.

S2

Speaker 2

36:03

Sandbox, syscalls, hypervisor. What do these things mean? As long as it's interesting to talk about.

S3

Speaker 3

36:09

Yeah, I mean, the Chrome sandbox is maybe the one to think about, right? So the Chrome process that's doing the rendering can't, for example, read a file from the file system. Yeah.

S3

Speaker 3

36:18

If it tries to make an open syscall in Linux, no, no, no, it can't make an open syscall. It has to request from the kind of hypervisor process, or whatever it's called in Chrome: hey, could you open this file for me? And then it does all these checks, and then it passes the file handle back if it's approved.
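The brokered-open pattern he's describing can be sketched like this. The function names and the policy set are invented; Chrome's real sandbox IPC is far more involved.

```python
# Sketch of a sandbox broker: sandboxed code can't issue open() itself;
# it asks a privileged broker, which checks policy and returns a handle
# only if the path is allowed.

ALLOWED_PATHS = {"/tmp/render-cache"}     # policy lives in the broker

def broker_open(path, mode="r"):
    if path not in ALLOWED_PATHS:
        raise PermissionError(f"sandbox denied open({path!r})")
    return open(path, mode)               # broker performs the real syscall

# Sandboxed code never calls open() directly:
try:
    broker_open("/etc/passwd")
except PermissionError as e:
    print(e)                              # sandbox denied open('/etc/passwd')
```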

S2

Speaker 2

36:40

Got it.

S3

Speaker 3

36:40

So that's, yeah.

S2

Speaker 2

36:42

So what's the, in the context of Ethereum, What are the boundaries of the sandbox that we're talking about?

S3

Speaker 3

36:48

Well, like, one of the calls is actually reading and writing any state to the Ethereum contract, or to the Ethereum blockchain. Writing state is one of those calls that you're going to have to sandbox in layer 2, because if you let layer 2 just arbitrarily write to the Ethereum blockchain...

S2

Speaker 2

37:10

So layer 2 is really sitting on top of layer 1. So you're going to have a lot of different kinds of ideas that you can play with. Yeah.

S2

Speaker 2

37:18

And they're not fundamentally changing the source code level of Ethereum.

S3

Speaker 3

37:25

Well, you have to replace a bunch of calls with calls into the hypervisor. So instead of doing the syscall directly, you replace it with a call to the hypervisor. So originally they were doing this by first running the—so Solidity is the language that most Ethereum contracts are written in.

S3

Speaker 3

37:45

It compiles to a bytecode. And then they wrote this thing they called the transpiler. And the transpiler took the bytecode, and it transpiled it into OVM-safe bytecode. Basically, bytecode that didn't make any of those restricted syscalls and added the calls to the hypervisor.
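The transpile step can be caricatured as an opcode-substitution pass. The mnemonics below are real EVM opcodes, but the rewrite scheme is invented for illustration; it is not Optimism's actual OVM transpiler, whose real difficulty (jump targets, stack layout, instruction sizes) is exactly what made the compiler-level approach cleaner.

```python
# Toy transpiler: walk contract bytecode and swap restricted state-access
# opcodes for calls into the L2 hypervisor.

RESTRICTED = {
    "SSTORE": "CALL_HYPERVISOR_SSTORE",   # writes to L1 state
    "SLOAD":  "CALL_HYPERVISOR_SLOAD",    # reads from L1 state
}

def transpile(bytecode):
    return [RESTRICTED.get(op, op) for op in bytecode]

print(transpile(["PUSH1", "SLOAD", "ADD", "SSTORE"]))
# ['PUSH1', 'CALL_HYPERVISOR_SLOAD', 'ADD', 'CALL_HYPERVISOR_SSTORE']
```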

S3

Speaker 3

38:01

This transpiler was a 3,000-line mess. And it's hard to do. It's hard to do if you're trying to do it like that. Because you have to kind of like deconstruct the bytecode.

S3

Speaker 3

38:12

Change things about it, and then reconstruct it. And, I mean, as soon as I hear this, I'm like, why don't you just change the compiler? Why not, in the first place where you build the bytecode, just do it in the compiler? I asked them how much they wanted it.

S3

Speaker 3

38:29

Of course, measured in dollars and I'm like, well, okay. And yeah.

S2

Speaker 2

38:34

And you wrote the compiler.

S3

Speaker 3

38:35

Yeah, I modified it. I wrote a 300-line diff to the compiler. It's open source, you can look at it.

S2

Speaker 2

38:41

Yeah, I looked at the code last night. Yeah.

S3

Speaker 3

38:44

It's cute.

S2

Speaker 2

38:45

It's cute. Yeah, exactly. Cute is a good word for it.

S2

Speaker 2

38:49

And it's C++. C++, yeah. So when asked how you were able to do it, you said you just gotta think and then do it right. So can you break that apart a little bit?

S2

Speaker 2

39:04

What's your process of 1, thinking, and 2, doing it right?

S3

Speaker 3

39:09

You know, the people I was working for were amused that I said that. It doesn't really mean anything.

S2

Speaker 2

39:14

Okay. I mean, is there some deep, profound insights to draw from, like, how you problem solve from that?

S3

Speaker 3

39:23

This is always what I say. I'm like, do you want to be a good programmer? Do it for 20 years.

S2

Speaker 2

39:27

Yeah. There's no shortcuts. What are your thoughts on crypto in general? What parts technically or philosophically do you find especially beautiful maybe?

S3

Speaker 3

39:39

Oh, I'm extremely bullish on crypto long term. Not any specific crypto project, but this idea of… well, 2 ideas. 1, the Nakamoto consensus algorithm is, I think, 1 of the greatest innovations of the 21st century.
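The proof-of-work core of that "relatively straightforward algorithm" can be sketched in a few lines: grind a nonce until the block hash falls under a difficulty target, so extending the chain costs real work while anyone can verify the winner with a single hash. (Real Bitcoin uses double SHA-256 and a finer-grained target; this is just the shape of the idea.)

```python
# Minimal proof-of-work sketch of the mining side of Nakamoto consensus.

import hashlib

def mine(prev_hash, data, difficulty=4):
    nonce = 0
    while True:
        h = hashlib.sha256(f"{prev_hash}{data}{nonce}".encode()).hexdigest()
        if h.startswith("0" * difficulty):   # simplified target check
            return nonce, h
        nonce += 1

nonce, block_hash = mine("genesis", "alice pays bob 1 coin")
assert block_hash.startswith("0000")         # verification is one hash
print(nonce, block_hash)
```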

S3

Speaker 3

39:58

This idea that people can reach consensus, that you can reach a group consensus using a relatively straightforward algorithm, is wild. And, like, you know, Satoshi Nakamoto, people always ask me who I look up to. It's like, whoever that is.

S2

Speaker 2

40:17

Who do you think it is? Elon Musk? Is it you?

S3

Speaker 3

40:22

It is definitely not me, and I do not think it's Elon Musk. But yeah, this idea of groups reaching consensus in a decentralized yet formulaic way is 1 extremely powerful idea from crypto. Maybe the second idea is this idea of smart contracts.

S3

Speaker 3

40:45

When you write a contract between 2 parties, any contract, this contract, if there are disputes, it's interpreted by lawyers. Lawyers are just really shitty, overpaid interpreters. Imagine you had—let's talk about them in terms of like, let's compare a lawyer to Python, right? So, lawyer...

S3

Speaker 3

41:06

Well, OK. That's brilliant.

S2

Speaker 2

41:08

I never thought of it that way. It's hilarious.

S3

Speaker 3

41:11

So, Python, I'm paying even 10 cents an hour. I'll use the nice Azure machine. I can run Python for 10 cents an hour.

S3

Speaker 3

41:19

Lawyers cost a thousand dollars an hour. So Python is 10,000x better on that axis. Lawyers don't always return the same answer. Python almost always does.

S3

Speaker 3

41:36

Cost, yeah. I mean, just cost, reliability, everything about Python is so much better than lawyers. So if you can make smart contracts... this whole concept of "code is law" I love, and I would love to live in a world where everybody accepted that fact.

S2

Speaker 2

41:56

So maybe you can talk about what smart contracts are.

S3

Speaker 3

42:01

So let's say, you know, we have a... Even something as simple as a safety deposit box, right? A safety deposit box that holds a million dollars.

S3

Speaker 3

42:14

I have a contract with the bank that says 2 out of these 3 parties must be present to open the safety deposit box and get the money out. So that's a contract with the bank, and it's only as good as the bank and the lawyers, right? Let's say, you know, somebody dies, and now, oh, we're going to go through a big legal dispute about whether, oh, well, was it in the will? Was it not in the will?

S3

Speaker 3

42:37

What, what? Like, it's just so messy and the cost to determine truth is so expensive versus a smart contract, which just uses cryptography to check if 2 out of 3 keys are present. Well, I can look at that and I can have certainty in the answer that it's going to return. And that's what all businesses want, is certainty.
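The 2-of-3 check he's describing is a few lines of logic. A real contract verifies digital signatures; plain set membership with invented key names is used here just to show the shape of it.

```python
# Toy 2-of-3 release check, like the safety deposit box contract above:
# open only if at least 2 of the 3 registered keys are presented.

REGISTERED_KEYS = {"key-alice", "key-bob", "key-carol"}

def can_open(presented, threshold=2):
    return len(REGISTERED_KEYS & set(presented)) >= threshold

assert can_open(["key-alice", "key-carol"])          # 2 of 3: opens
assert not can_open(["key-alice"])                   # 1 of 3: stays shut
assert not can_open(["key-alice", "key-mallory"])    # forged keys don't count
print("all checks pass")
```

The certainty he's pointing at is that this function returns the same answer every time, for everyone, at negligible cost.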

S3

Speaker 3

42:57

You know, they say businesses don't care. Viacom versus YouTube: YouTube's like, look, we don't care which way this lawsuit goes, just please tell us, so we can have certainty.

S2

Speaker 2

43:07

I wonder how many agreements in this world, because we're talking about financial transactions only in this case, correct? The smart contracts.

S3

Speaker 3

43:15

Oh, you can go to anything. You can put a prenup in the Ethereum blockchain. A marriage smart contract?

S3

Speaker 3

43:23

Sorry, divorce lawyers. Sorry. You're going to be replaced by Python.

S2

Speaker 2

43:32

Okay, so that's another beautiful idea. Do you think there's something that's appealing to you about any 1 specific implementation? So if you look 10, 20, 50 years down the line, Do you see any Bitcoin, Ethereum, any of the other hundreds of cryptocurrencies winning out?

S2

Speaker 2

43:51

What's your intuition about the space? Are you just sitting back and watching the chaos and look who cares what emerges?

S3

Speaker 3

43:57

Oh, I don't. I don't speculate. I don't really care.

S3

Speaker 3

43:59

I don't really care which 1 of these projects wins. I'm kind of in the Bitcoin is a meme coin camp. I mean, why does Bitcoin have value? It's technically kind of, you know, not great.

S3

Speaker 3

44:12

Like the block size debate. When I found out what the block size debate was, I'm like, are you guys kidding?

S2

Speaker 2

44:17

What's the block size debate?

S3

Speaker 3

44:21

You know what? It's really, it's too stupid to even talk about. People can look it up, but I'm like, wow.

S3

Speaker 3

44:26

You know, Ethereum seems, the governance of Ethereum seems much better. I've come around a bit on proof of stake ideas. You know, very smart people thinking about some things.

S2

Speaker 2

44:37

Yeah, you know, governance is interesting. It does feel like, with Vitalik, even in these open, distributed systems, leaders are helpful, because they kind of help you drive the mission and the vision, and they put a face to a project. It's a weird thing about us humans.

S3

Speaker 3

45:00

Geniuses are helpful, like Vitalik. Yeah, brilliant. Leaders are not necessary.

S2

Speaker 2

45:07

Yeah. So you think the reason he's the face of Ethereum is because he's a genius.

S2

Speaker 2

45:16

That's interesting. I mean, it's interesting to think that we need to create systems in which the quote-unquote leaders that emerge are the geniuses in the system. I mean, that's arguably why the current state of democracy is broken: the people who are emerging as the leaders are not the most competent, are not the superstars of the system. And it seems like, at least for now, in the crypto world, oftentimes the leaders are the superstars.

S3

Speaker 3

45:49

Imagine at the debate they asked, what's the sixth amendment? What are the 4 fundamental forces in the universe? What's the integral of 2 to the X?

S3

Speaker 3

46:00

I'd love to see those questions asked. And that's what I want as our leader. It's a little bit- What's Bayes' rule?
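For the record, the math answers being fished for (the physics one is the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces):

```latex
\int 2^x \, dx = \frac{2^x}{\ln 2} + C
\qquad\qquad
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
```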

S2

Speaker 2

46:07

Yeah, I mean, even, oh wow, you're hurting my brain. My standard was even lower, but I would have loved to see just this basic brilliance. Like I've talked to historians.

S2

Speaker 2

46:21

There are just these people, and they don't even have a PhD or an education in history. They're just, like, a Dan Carlin-type character, where you're like, holy shit, how did all this information get into your head? They're able to just connect Genghis Khan to the entirety of the history of the 20th century. They know everything about every single battle that happened, and they know the Game of Thrones of the different power plays that happened there.

S2

Speaker 2

46:54

And they know the individuals and all the documents involved. And they integrate that into their regular life. It's not like they're ultra history nerds. They're just, they know this information.

S2

Speaker 2

47:06

That's what competence looks like.

S3

Speaker 3

47:07

Yeah.

S2

Speaker 2

47:08

Because I've seen that with programmers too, right? That's what great programmers do. But yeah, it would be, it's really unfortunate that those kinds of people aren't emerging as our leaders.

S2

Speaker 2

47:19

But for now, at least in the crypto world, that seems to be the case. I don't know if that always, you could imagine that in a hundred years, it's not the case.

S3

Speaker 3

47:28

Crypto world has 1 very powerful idea going for it. And that's the idea of forks. I mean, imagine—we'll use a less controversial example.

S3

Speaker 3

47:42

This was actually in my joke app in 2012. I was like, Barack Obama, Mitt Romney, let's let them both be president. All right? Like imagine we could fork America and just let them both be president.

S3

Speaker 3

47:54

And then the Americas could compete and people could invest in 1, pull their liquidity out of 1, put it in the other. You have this in the crypto world. Ethereum forks into Ethereum and Ethereum classic. And you can pull your liquidity out of 1 and put it in another.

S3

Speaker 3

48:08

And people vote with their dollars on which forks they like. Companies should be able to fork. I'd love to fork NVIDIA, you know?

S2

Speaker 2

48:20

Yeah, like different business strategies, and then try them out and see what works. Like, even take a Comma AI that closes its source, and then take one that's open source, and see what works.

S2

Speaker 2

48:38

Take 1 that's purchased by GM and 1 that remains an Android renegade, all these different versions, and see.

S3

Speaker 3

48:45

The beauty of Comma AI is someone can actually do that. Please, take Comma AI and fork it.

S2

Speaker 2

48:50

That's right. That's the beauty of open source. So you're, I mean, we'll talk about autonomous vehicle space, but it does seem that you're really knowledgeable about a lot of different topics.

S2

Speaker 2

49:03

So the natural question, a bunch of people ask this, which is, how do you keep learning new things? Do you have like practical advice? If you were to introspect, like taking notes, allocate time, or do you just mess around and just allow your curiosity to drive you?

S3

Speaker 3

49:21

I'll write these people a self-help book and I'll charge $67 for it. And I will write on the cover of the self-help book, all of this advice is completely meaningless. You're going to be a sucker and buy this book anyway.

S3

Speaker 3

49:34

And the 1 lesson that I hope they take away from the book is that I can't give you a meaningful answer to that.

S2

Speaker 2

49:42

That's interesting. Let me translate that: it's that you haven't really thought about what it is you do systematically, such that you could reduce it.

S2

Speaker 2

49:53

And there's some people, I mean, I've met brilliant people that this is really clear with athletes. Some are just, you know, the best in the world at something. And they have 0 interest in writing a self-help book or how to master this game. And then there's some athletes who become great coaches and they love the analysis, perhaps the over analysis.

S2

Speaker 2

50:18

And you right now, at least at your age, which is interesting, you're in the middle of the battle. You're like the warriors that have 0 interest in writing books.

S3

Speaker 3

50:30

This is a fair point. I do think I have a certain aversion to this kind of deliberate, intentional way of living life.

S2

Speaker 2

50:40

Eventually, the hilarity of this, especially since this is recorded, will reveal beautifully the absurdity when you finally do publish this book. I guarantee you, you will. The story of Comma AI... maybe it'll be a biography written about you.

S2

Speaker 2

50:59

They'll be better, I guess.

S3

Speaker 3

51:00

You might be able to learn some cute lessons from that book if you're starting a company like Comma AI. But if you're asking generic questions, like, how do I be good at things, dude, I don't know.

S2

Speaker 2

51:11

Well, I mean, the interesting thing.

S3

Speaker 3

51:13

Do them a lot.

S2

Speaker 2

51:14

Do them a lot. But the interesting thing here is learning things outside of your current trajectory, which is what it feels like from an outsider's perspective. I don't know if there's advice on that, but it is an interesting curiosity.

S2

Speaker 2

51:32

When you become really busy, you're running a company.

S3

Speaker 3

51:37

Hard time.

S2

Speaker 2

51:40

Yeah. But like there's a natural inclination and trend, like just the momentum of life carries you into a particular direction of wanting to focus. And this kind of dispersion that curiosity can lead to gets harder and harder with time. Because you get really good at certain things and it sucks trying things that you're not good at, like trying to figure them out.

S2

Speaker 2

52:05

You do this with your live streams, you're on the fly figuring stuff out, you don't mind looking dumb. You just figure it out, figure it out pretty quickly.

S3

Speaker 3

52:16

Sometimes I try things and I don't figure them out quickly. My chess rating is like a 1400, despite putting like a couple hundred hours in, it's pathetic. I mean, to be fair, I know that I could do it better.

S3

Speaker 3

52:26

If I did it better, like don't play, you know, don't play 5 minute games, play 15 minute games at least. Like I know these things, but it just doesn't, it doesn't stick nicely in my knowledge stream.

S2

Speaker 2

52:36

All right, let's talk about Comma AI. What's the mission of the company? Let's, like, look at the biggest picture.

S3

Speaker 3

52:44

Oh, I have an exact statement. Solve self-driving cars while delivering shippable intermediaries.

S2

Speaker 2

52:51

So long-term vision is have fully autonomous vehicles and make sure you're making money along the way.

S3

Speaker 3

52:58

I think it doesn't really speak to money but I can talk about what solve self-driving cars means. Solve self-driving cars, of course, means you're not building a new car, you're building a person replacement. That person can sit in the driver's seat and drive you anywhere a person can drive with a human or better level of safety, speed, quality, comfort.

S2

Speaker 2

53:21

What's the second part of that?

S3

Speaker 3

53:23

Delivering shippable intermediaries is, well, it's a way to fund the company, that's true. But it's also a way to keep us honest. If you don't have that, it is very easy with this technology to think you're making progress when you're not.

S3

Speaker 3

53:39

I've heard it best described on Hacker News as you can set any arbitrary milestone, meet that milestone, and still be infinitely far away from solving self-driving cars.

S2

Speaker 2

53:51

So it's hard to have real deadlines when you're like Cruz or Waymo, when you don't have revenue. Is that, I mean, is revenue essentially the thing we're talking about here?

S3

Speaker 3

54:07

Revenue is... capitalism is based around consent. The way that you get revenue in real capitalism, and Comma's in the real capitalism camp, there are definitely scams out there, but real capitalism is based around consent.

S3

Speaker 3

54:19

It's based around this idea that if we're getting revenue, it's because we're providing at least that much value to another person. When someone buys $1,000 Comma 2 from us, we're providing them at least $1,000 of value, or they wouldn't buy it.

S2

Speaker 2

54:30

Brilliant. So can you give a whirlwind overview of the products that Comma AI provides throughout its history and today?

S3

Speaker 3

54:38

I mean, yeah, the past ones aren't really that interesting. It's kind of just been refinement of the same idea. The only real product we sell today is the Comma 2.

S2

Speaker 2

54:48

Which is a piece of hardware with cameras. Mm-hmm.

S3

Speaker 3

54:52

So the Comma 2, I mean, you can think about it kind of like a person. You know, future hardware will probably be even more and more person-like. So it has, you know, eyes, ears, a mouth, a brain, and a way to interface with the car.

S2

Speaker 2

55:09

Does it have consciousness? Just kidding, that was a trick question.

S3

Speaker 3

55:13

I don't have consciousness either. Me and the Comma 2 are the same.

S2

Speaker 2

55:16

You're the same?

S3

Speaker 3

55:16

I have a little more compute than it. It only has, like, the same compute as a bee.

S2

Speaker 2

55:20

You're more efficient energy-wise for the compute you're doing.

S3

Speaker 3

55:26

Far more efficient energy-wise. 20 petaflops, 20 watts, crazy.

S2

Speaker 2

55:30

You lack consciousness. Sure. Do you fear death?

S2

Speaker 2

55:33

You do, you want immortality.

S3

Speaker 3

55:35

Of course I fear death.

S2

Speaker 2

55:35

Does the Comma fear death? I don't think so.

S3

Speaker 3

55:39

Of course it does. It very much fears, well it fears negative loss. Oh yeah.

S2

Speaker 2

55:45

Okay, so Comma, so the Comma 2, when did that come out? That was a year ago? No, 2?

S3

Speaker 3

55:51

Early this year.

S2

Speaker 2

55:53

Wow, time, it feels like, yeah. 2020 feels like it's taken 10 years to get to the end.

S3

Speaker 3

56:00

It's a long year.

S2

Speaker 2

56:01

It's a long year. So what's the sexiest thing about the Comma 2, feature-wise? So, I mean, maybe you can also linger on, like, what is it?

S2

Speaker 2

56:14

Like what's its purpose? Cause there's a hardware, there's a software component. You've mentioned the sensors, but also what are its features and capabilities?

S3

Speaker 3

56:22

I think our slogan summarizes it well. Comma's slogan is "make driving chill."

S2

Speaker 2

56:28

love it. Okay.

S3

Speaker 3

56:30

Yeah. I mean, if you like cruise control, imagine cruise control, but much, much more.

S2

Speaker 2

56:37

So it can do adaptive cruise control things, like slow down for cars in front of it and maintain a certain speed, and it can also do lane keeping, staying in the lane, and do it better and better over time. That's very much machine learning based. So there's cameras; there's a driver-facing camera too.

S2

Speaker 2

57:00

What else is there? What am I thinking? So the hardware versus software. So open pilot versus the actual hardware of the device.

S2

Speaker 2

57:09

Can you draw that distinction? What's 1, what's the other?

S3

Speaker 3

57:11

I mean the hardware is pretty much a cell phone with a few additions. A cell phone with a cooling system and with a car interface connected to it.

S2

Speaker 2

57:20

And by cell phone you mean like Qualcomm Snapdragon?

S3

Speaker 3

57:25

Yeah, the current hardware is a Snapdragon 821. It has a Wi-Fi radio, it has an LTE radio, it has a screen. We use every part of the cell phone.

S2

Speaker 2

57:35

And then the interface of the car is specific to the car, so you keep supporting more and more cars.

S3

Speaker 3

57:41

Yeah, so the interface to the car, I mean, the device itself just has 4 CAN buses, 4 CAN interfaces on it, that are connected through the USB port to the phone. And then, yeah, on those 4 CAN buses, you connect it to the car. And there's a little harness to do this.

S3

Speaker 3

57:56

Cars are actually surprisingly similar.
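A hedged sketch of what reading a signal off a CAN bus looks like: each frame is an arbitration ID plus up to 8 data bytes, and every car model maps signals (speed, steering angle, and so on) to different IDs, byte ranges, and scalings, which is why each supported car needs its own harness and signal definitions. The ID 0x158 and the 0.01 km/h scaling below are invented for illustration.

```python
# Toy CAN frame parser: decode a hypothetical vehicle-speed message.

import struct

def parse_speed_frame(can_id, data):
    if can_id != 0x158:                    # hypothetical speed-message ID
        return None                        # not a frame we understand
    raw, = struct.unpack(">H", data[:2])   # big-endian 16-bit field
    return raw * 0.01                      # scale raw counts to km/h

frame = (0x158, bytes([0x1A, 0x2B, 0, 0, 0, 0, 0, 0]))
print(parse_speed_frame(*frame))           # 0x1A2B = 6699 raw, about 66.99 km/h
```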

S2

Speaker 2

57:58

So CAN is the protocol by which cars communicate, and then you're able to read stuff and write stuff to be able to control the car, depending on the car. So what's the software side? What's OpenPilot?

S3

Speaker 3

58:10

So, I mean, the hardware is pretty simple compared to OpenPilot. OpenPilot... well, so you have a machine learning model, which, in OpenPilot, is a blob, just a blob of weights. It's not like... people are like, oh, it's closed source.

S3

Speaker 3

58:27

I'm like, it's a blob of weights, what do you expect? It's primarily neural-network based. OpenPilot is all the software kind of around that neural network. If you have a neural network that says, here's where you want to send the car, OpenPilot actually goes and executes all of that.

S3

Speaker 3

58:45

It cleans up the input to the neural network, cleans up the output, and executes on it.

S2

Speaker 2

58:49

So it connects... it's the glue that connects everything together.

S3

Speaker 3

58:51

It runs the sensors, does a bunch of calibration for the neural network, deals with, like, if the car is on a banked road, you have to counter-steer against that. And the neural network can't necessarily know that by looking at the picture. So you do that with other sensors, and fusion, and a localizer.

S3

Speaker 3

59:09

OpenPilot also is responsible for sending the data up to our servers so we can learn from it, logging it, recording it, running the cameras, thermally managing the device, managing the disk space on the device, managing all the resources on the device.
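A back-of-the-envelope version of the banked-road point above: on a road with roll angle theta, gravity pulls the car sideways, so the controller adds a counter-steer term that a camera-only network wouldn't know to apply. The gain and the linear model are invented; a real controller would be far more careful.

```python
# Toy counter-steer correction for road bank, using a roll estimate
# from non-camera sensors.

import math

def steer_command(nn_steer, road_roll_rad, gain=0.5):
    # add a correction against the lateral gravity component ~ g*sin(roll)
    return nn_steer + gain * math.sin(road_roll_rad)

print(steer_command(0.0, 0.0))                 # flat road: no correction
print(steer_command(0.0, math.radians(5.0)))   # 5-degree bank: small counter-steer
```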

S2

Speaker 2

59:24

So, since we last spoke, I don't remember when, maybe a year ago, maybe a little bit longer, how has OpenPilot improved?

S3

Speaker 3

59:32

We did exactly what I promised you. I promised you that by the end of the year, you'd be able to remove the lanes. The lateral policy is now almost completely end-to-end.

S3

Speaker 3

59:45

You can turn the lanes off and it will drive. It drives slightly worse on the highway if you turn the lanes off, but you can turn the lanes off and it will drive, trained completely end-to-end on user data. And this year we hope to do the same for the longitudinal policy.