John Carmack: Doom, Quake, VR, AGI, Programming, Video Games, and Rockets | Lex Fridman Podcast #309

5 hours 14 minutes 50 seconds

🇬🇧 English

S1

Speaker 1

04:00:00

Absurdly complex thing, where nuclear fusion is, you look at the tokamaks or any of the things that people are building, and it's doing all of this infrastructure just, at the end of the day, to make something hot, so that you can then turn it into energy through a conventional power plant. And all of that work, which we think we've got line of sight on, but even if it comes out, then you have to do all of that immensely complex, expensive stuff just to make something hot, where nuclear fission is basically: you put these two rocks together and they get hot all by themselves. That is just that much simpler. It's just orders of magnitude simpler.

S1

Speaker 1

04:00:38

And the actual rocks, the refined uranium, is not very expensive. It's a couple percent of the cost of electricity. That's why I made that point where you could have something which was five times less efficient than current systems, and if the rest of the plant was a whole bunch cheaper, it could still be super, super valuable.

S2

Speaker 2

04:00:57

So how much of the pie do you think could be solved by nuclear energy, by fission? How much could it become the primary source of energy on Earth?

S1

Speaker 1

04:01:08

It could be most of it. The reserves of uranium as it stands now could not power the whole Earth, but if you get into breeder reactors and thorium and things like that, beyond conventional fission, there is enough for everything.

S1

Speaker 1

04:01:22

Now, I mean, solar photovoltaic has been amazing. You know, one of my current projects is working on an off-grid system, and it's been fun just kind of, again, putting my hands on all of it, stripping the wires and wiring things together and doing all of that. And just having followed that a little bit from the outside over the last couple decades, there's been semiconductor-like magical progress in what's going on there. So I'm all for all of that, but it doesn't solve everything, and nuclear really still does seem like the smart-money bet for what you should be getting for baseload on a lot of things.

S1

Speaker 1

04:01:57

And solar may be cheaper for peaking over air-conditioning loads during the summer, and things that you can push around in different ways. But it's one of those things where it's just strange how we've had the technology sitting there, but these non-technical reasons, the social optics of it, have been this major forcing function for something that really should be at the cornerstone of all of the world's concerns with energy. It's interesting how the non-technical factors have really dominated something that is so fundamental to the existence of the human race as we know it today.

S2

Speaker 2

04:02:34

And many of the troubles of the world, including wars in different parts of the world, like Ukraine, are energy-based. And yeah, it's just sitting right there to be solved. That said, I mean, to me personally, I think it's clear that if AGI were to be achieved, that would change the course of human history.

S1

Speaker 1

04:02:56

So, AGI-wise, I was making this decision about what I want to focus on after VR. And I'm still working on VR regularly; I spend a day a week kind of consulting with Meta.

S1

Speaker 1

04:03:09

And, you know, Boz styles me the consulting CTO. It's kind of like the Sherlock Holmes that comes in and consults on some of the specific tough issues. And I'm still pretty passionate about all of that, but I have been figuring out how to compartmentalize and force that into a smaller box, to work on some other things. And I did come down to this decision between working on economical nuclear fission or artificial general intelligence.

S1

Speaker 1

04:03:37

And on the fission side of things, I've got a bunch of interesting things going that way, but that would be a fairly big project to do. I don't think it needs to be as big as people expect. I do think something original-SpaceX-sized could do it: you build it, power your building off of it, and then the government, I think, will come around to what you need. Everybody loves an existence proof.

S1

Speaker 1

04:04:01

I think it's possible, somebody should be doing this, but it's going to involve some politics. It's going to involve decent-sized teams and a bunch of this cross-functional stuff that I don't love. While the artificial general intelligence side of things, it seems to me like this is the highest-leverage moment for potentially a single individual, potentially in the history of the world, given the things that we know about the brain and about what we can do with artificial intelligence. Nobody can say anything absolutely on any of these things, but I am not a madman for saying that it is likely that the code for artificial general intelligence is going to be tens of thousands of lines of code, not millions of lines of code.

S1

Speaker 1

04:04:49

This is code that conceivably one individual could write, unlike writing a new web browser or operating system. And based on the progress that AI, that machine learning, has made in the recent decade, it's likely that the important things that we don't know are relatively simple. There's probably a handful of things, and my bet is that there are less than six key insights that need to be made. Each one of them can probably be written on the back of an envelope.

S1

Speaker 1

04:05:20

We don't know what they are, but when they're put together in concert with GPUs at scale and the data that we all have access to, we can make something that behaves like a human being or like a living creature, and that can then be educated in whatever ways that we need to get to the point where we can have universal remote workers, where anything that somebody does mediated by a computer, and that doesn't require physical interaction, an AGI will be able to do. We can already simulate the equivalent of the Zoom meetings with avatars and synthetic deepfakes and whatnot. We can definitely do that. We have superhuman capabilities on any narrow thing that we can formalize and make a loss function for.

S1

Speaker 1

04:06:07

But there's things we don't know how to do now. I don't think they are unapproachably hard. Now, that's incredibly hubristic to say, but what I said a couple of years ago is a 50% chance that somewhere there will be signs of life of AGI in 2030. And I've probably increased that slightly.

S1

Speaker 1

04:06:28

I may be at 55, 60% now, because I do think there's a little sense of acceleration there.

S2

Speaker 2

04:06:34

So I wonder what the... and by the way, you've also written that, "I bet with hindsight, we will find that clear antecedents of all the critical remaining steps for AGI are already buried somewhere in the vast literature of today." So the ideas are already there.

S1

Speaker 1

04:06:50

I think that's likely the case. One of the things that appeals to so many people, including me, about the promise of AGI is that we know we're only drinking through a straw from the fire hose of all the information out there. I mean, you look at even a very narrowly bounded field like machine learning: you can't read all the papers that come out all the time.

S1

Speaker 1

04:07:11

You can't go back and read all the clever things that people did in the 90s or earlier that people have forgotten about, because they didn't pan out at the time, when they were trying to do them with 12 neurons. So, yeah, I think there are gems buried in some of the older literature that was not the path taken by everything. And you can see a kind of herd mentality on the things that happen right now. It's almost funny to see.

S1

Speaker 1

04:07:36

It's like, oh, Google does something, and OpenAI does something, and Meta does something. They're the same people that all talk to each other, and they're all one-upping each other, and they're all capable of implementing each other's work given a month or two after somebody has an announcement of it. But there's a whole world of possible approaches to machine learning. And I think that we probably will, in hindsight, go back and see.

S1

Speaker 1

04:08:01

It's like, yeah, that was kind of clearly predicted by this early paper here, and it turns out that if you do this and this, and take this result from animal training and this thing from neuroscience over here, and put it together and set up this curriculum for them to learn in, that that's kind of what it took. You don't have too many people now that are still saying it's not possible or it's going to take hundreds of years. And 10 years ago, you would get a collection of experts, and you would have a decent chunk on the margin that would either say not possible, or a couple hundred years, might be centuries.

S1

Speaker 1

04:08:36

And the median estimate would be like 50, 70 years. And it's been coming down. And I know, with me saying eight years for something, that still puts me on the optimistic side, but it's not crazy out in the fringes. And just being able to look at that at a meta level, the trend of the predictions going down, there's the idea that something could be happening relatively soon.

S1

Speaker 1

04:09:01

Now, I do not believe in fast takeoffs. That's one of the safety issues that people raise; it's like, oh, it's going to go boom, and the AI is going to take over the world. There's a lot of reasons I don't think that's a credible position.

S1

Speaker 1

04:09:14

And I think that we will go from a point where we start seeing things that credibly look like animals' behaviors with a human voice box wired into them. It's like what I tried to get Elon to do: your pig at Neuralink, give it a human voice box and let it start learning human words. I think animal intelligence is closer to human intelligence than a lot of people like to think. And I think that culture and modalities of IO make the gulf seem a lot bigger than it actually is.

S1

Speaker 1

04:09:45

There's just that smooth spectrum of how the brain developed and cortexes and scaling of different things going on there.

S2

Speaker 2

04:09:53

Culture and modalities of IO, yes; language, sort of lost in translation, conceals a lot of intelligence. So when you think about signs of life for AGI, you're thinking about human-interpretable signs.

S1

Speaker 1

04:10:10

So the example I give: if we get to the point where you've got a learning-disabled toddler, some kind of real special-needs child, that can still interact with their favorite TV show and video game, and can be trained and can learn in some appreciably human-like way, at that point you can deploy an army of engineers, cognitive scientists, education and developmental-education people. And you've got so many advantages there, unlike real education, where you can do rollbacks and A/B testing, and you can find a golden path through a curriculum of different things. If you get to that point, the learning-disabled toddler, I think it's going to be a done deal.

S2

Speaker 2

04:10:50

But do you think we'll know it when we see it? So there's been a lot of really interesting general learning progress from DeepMind, OpenAI a little bit too. I tend to believe that Tesla Autopilot deserves a lot more credit than it's getting for making progress on the general... on doing the multitask-learning thing, and increasing the number of tasks, and automating that process of discovering the edge cases and learning from the edge cases.

S2

Speaker 2

04:11:26

That is really approaching, from a different angle, the general learning problem of AGI. But the more clear approach comes from DeepMind, where you have these kinds of game situations and you build systems there. But I don't know, people seem to be quite...

S1

Speaker 1

04:11:47

Yeah, there will always be people that just won't believe it, and I fundamentally don't care. I mean, I don't care if they don't believe it, you know, when it starts doing people's jobs. And I don't care about the philosophical zombie argument at all.

S2

Speaker 2

04:12:01

Absolutely, absolutely. But do you think you will notice that something special has happened here? Because, to me, I've been noticing a lot of special things.

S2

Speaker 2

04:12:12

I think a lot of credit should go to DeepMind for AlphaZero. That was truly special. The self-play mechanisms sort of solved problems that used to be thought unsolvable, like the game of Go. Also, I mean, protein folding, starting to get into that space where learning is doing it: at first it wasn't end-to-end learning, and now it's end-to-end learning of a very difficult, previously thought unsolvable problem of protein folding.

S2

Speaker 2

04:12:45

And so, yeah, what do you think would be a really magical moment for you?

S1

Speaker 1

04:12:54

There have been incredible things happening in recent years. Like you say, all of the things from DeepMind and OpenAI that have been huge showpiece things. But when you really get down to it, and you read the papers, and you look at the way the models are going, you know, it's still like a feedforward network: you push something in, something comes out on the end.

S1

Speaker 1

04:13:13

I mean, maybe there's diffusion models or Monte Carlo tree rollouts and different things going on, but it's not a being. It's not close to a being that's going through a lifelong learning process.

S2

Speaker 2

04:13:27

Do you want something that kind of gives signs of a being? Like what's the difference between a neural network, a feed-forward neural network and a being? Where's the-

S1

Speaker 1

04:13:40

Fundamentally, the brain is a recurrent neural network generating an action policy. I mean, it's implemented on a biological substrate. And it's interesting thinking about things like that where we know fundamentally the brain is not a convolutional neural network or a transformer.

S1

Speaker 1

04:13:55

Those are specialized things that are very valuable for what we're doing, but it's not the way the brain's doing it. Now, I do think consciousness and AI in general is a substrate-independent mechanism, where it doesn't have to be implemented the way the brain is, but if you've only got one existence proof, there's certainly some value in caring about what it says and does. And so the idea is that anything that can be done with a narrow AI, anything you can quantify a loss function or reward mechanism for, you're almost certainly going to be able to produce something that's more resource-effective to train and deploy and use in an inference mode (you know, train a whole lot, then use it in inference), but a living being is going to be something that's a continuous, lifelong-learned, task-agnostic thing.
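
A minimal sketch of that "recurrent network generating an action policy" framing, in Python with NumPy; the sizes and weight names are purely illustrative, not anything specified in the conversation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, nowhere near brain-scale.
OBS, HID, ACT = 32, 64, 8

W_in = rng.normal(0, 0.1, (HID, OBS))    # observation -> hidden
W_rec = rng.normal(0, 0.1, (HID, HID))   # hidden -> hidden (the recurrence)
W_out = rng.normal(0, 0.1, (ACT, HID))   # hidden -> action logits

h = np.zeros(HID)  # persistent state, carried across the agent's whole "life"

def step(obs):
    """One tick: fold the new observation into the state, emit an action."""
    global h
    h = np.tanh(W_in @ obs + W_rec @ h)   # state now depends on all history
    logits = W_out @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # softmax over discrete actions
    return rng.choice(ACT, p=p)

for t in range(5):
    obs = rng.normal(size=OBS)            # stand-in for sensory input
    print(t, step(obs))
```

The point of the sketch is only the loop structure: unlike a feedforward model, the hidden state persists between calls, which is what makes continuous, lifelong operation even expressible.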

S1

Speaker 1

04:14:42

And while a lot of-

S2

Speaker 2

04:14:44

So the lifelong learning is really important too, and the long-term memory. So memory is a big, weird part of that puzzle.

S1

Speaker 1

04:14:51

Yeah, memory is a huge thing. And we've got, you know... again, I have all the respect in the world for the amazing things that are being done now, but sometimes they can be taken a little bit out of context. There's some smoke and mirrors going on, like Gato, the recent work, the multitask learning stuff. It's amazing that it's one model that plays all the Atari games, as well as doing all of these other things.

S1

Speaker 1

04:15:14

But, of course, it didn't learn to do all of those. It was instructed in doing that by other reinforcement learners going through and doing it. And even in the case of all the games, it's still going with a specific hand-coded reward function in each of those Atari games; it's not that it just wants to spend its summer afternoon playing Atari because that's the most interesting thing for it. So it's, again, not general; it's not learning the way humans learn.

S1

Speaker 1

04:15:42

And there are, I believe, a lot of things that are challenging to make a loss function for, so that you could train them through these existing conventional things. We are going to chip away at all the things that people do that we can turn into narrow AI problems, and billions, probably trillions, of dollars of value are going to be created by that. But there's still going to be a set of things, and we've got questionable cases like the self-driving car, where it's possible (it's not my bet, but it's plausible) that the long tail could be problematic enough that that really does require a full-on artificial general intelligence.

S1

Speaker 1

04:16:22

Everything is an interpolation problem if you have enough data, and Tesla may be able to get enough data from all of their deployed stuff to be able to work like that, but maybe not. Then there are all the other problems: say you want to have a strategy meeting, and you want to go ahead and bring in all of your remote workers and your consultants, and you want a world where some of those could be AIs that are talking and interacting with you in an area that is too murky to have a crisp loss function, but they still have things that, on some internal level, they're rewarded for: building a valuable-to-humans kind of life and ability to interact with things.

S2

Speaker 2

04:17:03

See, I still think that self-driving cars, solving that problem, will take us very far towards AGI. You might not need AGI, but I am really inspired by what Autopilot is doing. Waymo, some of the other companies (I think Waymo leads the way there), is also really interesting, but they don't have quite as ambitious an effort in terms of a learning-based, sort of data-hungry approach to driving, which I think is very close to the kind of thing that would take us far towards AGI.

S1

Speaker 1

04:17:37

Yeah, and it's a funny thing because as far as I can tell, Elon is completely serious about all of his concerns about AGI, you know, being an existential threat. And I tried to draw him out to talk about AI and he just didn't want to. And I think that I get that little fatalistic sense from him.

S1

Speaker 1

04:17:54

It's weird, because his company could very well be the leading company heading towards a lot of that, with Tesla being a super-pragmatic company that's doing things because they really want to solve this actual problem. It's a different vibe than the research-oriented companies, where it's a great time to be an AI researcher. You've got your pick of trillion-dollar companies that will pay you to kind of work on the problems you're interested in, but that's not necessarily driving hard towards the core problem of AGI as something that's going to produce a lot of value by doing things that people currently do or would like to do.

S2

Speaker 2

04:18:30

I mean, I have a million questions for you about your ideas about AGI, but do you think it needs to be embodied? Do you think it needs to have a body to start to notice the signs of life, and to develop the kind of system that's able to reason, perceive the world the way an AGI should, and act in the world? So should we be thinking about robots, or can this be achieved in a purely digital system?

S1

Speaker 1

04:18:58

I have a clear opinion on that, and that's that no, it does not need to be embodied in the physical world. You could say most of my career is about making simulated virtual worlds, you know, in games or virtual reality.

S1

Speaker 1

04:19:12

And so, on a fundamental level, I believe that you can make a simulated environment that provides much of the value of what the real environment does. And restricting yourself to operating in real time in the physical world with physical objects, I think, is an enormous handicap. I mean, that's one of the real lessons driven home by all my aerospace work: reality is a bitch in so many ways there. You're dealing with all the mechanical components, and everything fails, Murphy's Law, even if you've done it right before; on your fifth one, it might come out differently.

S1

Speaker 1

04:19:44

So, yeah, I think that anybody that is all-in on the embodied aspect of it, they are tying a huge weight to their ankles. And I think that I would almost count them out. Anybody that's making that a cornerstone of their belief about it, I would almost write off in terms of being worried about them getting to AGI first. I was very surprised that Elon's big on the humanoid robots.

S1

Speaker 1

04:20:09

I mean, the NASA Robonaut stuff was always almost a gag line, like, what are you doing, people?

S2

Speaker 2

04:20:14

Well, that's very interesting because he has a very pragmatic view of that. That's just a way to solve a particular problem in a factory.

S1

Speaker 1

04:20:23

Now, I do think that once you have an AGI, robotic bodies, humanoid bodies, are going to be enormously valuable. I just don't think they're helpful getting to AGI.

S2

Speaker 2

04:20:32

Well, he has a very sort of practical view, which I disagree with and argue with him about, but it is a practical view: that you could transfer the problem of driving to the problem of robotic manipulation, because so much of it is perception. It's perception and action, and it's just a different context, and so you can apply all the same kind of data-engine learning processes to a different environment. And so why not apply it to the humanoid robot environment?

S2

Speaker 2

04:21:03

But I think, I do think, that there's a certain magic to the embodied robot.

S1

Speaker 1

04:21:13

That may be the thing that finally convinces people. But again, I don't really care that much about convincing people.

S1

Speaker 1

04:21:18

You know, the world that I'm looking towards is, you go to the website and say, I want 5 Frank 1As to work on my team today, and they all spin up, and they start showing up in your Zoom meetings.

S2

Speaker 2

04:21:31

To push back, but also to agree with you, but first to push back, I do think you need to convince people for them to welcome that thing into their life.

S1

Speaker 1

04:21:40

I think there's enough businesses that operate on an objective kind of profit-and-loss sort of basis that... I mean, if you look at how many things, again, talking about the world as an evolutionary space there, when you do have free markets and you have entrepreneurs, you are going to have people that are willing to go out and try whatever crazy things. And when it proves to be beneficial, you know, there's fast followers in all sorts of places.

S2

Speaker 2

04:22:06

Yeah. And you're saying, I mean, you know, Quake and VR is a kind of embodiment, but just in a digital world. And if you're able to demonstrate, if you're able to do something productive in that kind of digital reality, then AGI doesn't need to have a body.

S1

Speaker 1

04:22:25

Yeah, it's like one of the really practical technical questions that I kind of keep arguing with myself over: if you're doing training and learning, and you've got, like, you can watch Sesame Street, you can play Master System games or something, is it enough to have just a video feed, that video coming in? Or should it literally be on a virtual TV set in a virtual room, even if it's a simple room, just to have that sense of you're looking at a 2D projection on a screen versus having the screen beamed directly into your retinas?

S1

Speaker 1

04:22:57

And I think it's possible to maybe get past some of these signs-of-life things with things just kind of projected directly into the receptor fields. But eventually, for more kind of human emotional connection to things, probably having some VR room with a lot of screens in it for the AI to be learning in is likely helpful.

S2

Speaker 2

04:23:18

And maybe a world of different AIs interacting with each other.

S1

Speaker 1

04:23:21

That self-play, I do think, is one of the critical things there, socialization-wise. One of the other limitations I set for myself in thinking about these is that I need something that is at least potentially real-time, because I want... I mean, it's nice that you can always slow down time. You can run on a subscale system and test an algorithm at some lower level. And if you've got extra horsepower, running it faster than real time is a great thing.

S1

Speaker 1

04:23:46

But I want to be able to have the AIs either socially interact with each other or, critically, with actual people. Your sort of child-development psychiatrist that comes in and interacts and does the good-boy, bad-boy sort of thing as they're going through and exploring different things. And it's nice to... I come back to the value of constraints in a lot of ways. And so I say, well, one of my constraints is real-time operation.

S1

Speaker 1

04:24:13

I mean, it might still be a huge data center full of computers, but it should be able to interact on a Zoom meeting with people. And that's how you also do start convincing people, even if it's not a robot body moving around, which eventually gets to irrefutable levels. But if you can go ahead and not just type back and forth to a GPT bot on something, but you're literally talking to them in an embodied, over-Zoom form, and working through problems with them or exploring situations, having conversations that are fully stateful and learned, I think that that's a valuable thing.

S1

Speaker 1

04:24:49

So I do keep all of my eyes on things that can be implemented within sort of that 30-frames-per-second kind of work. And I think that's feasible.

S2

Speaker 2

04:24:59

Do you think the most compelling experiences at first will be for pleasure or for business, as they ask in airports? Meaning, if it's interacting with AI agents, will it be sort of like friends, entertainment, almost like a therapist, that kind of interaction? Or is it in the business setting, something like you said, brainstorming different ideas? This is all a different formulation of kind of a Turing test, or the spirit of the original Turing test. Where do you think the biggest benefit will first come?

S1

Speaker 1

04:25:40

So it's gonna start off hugely expensive. I mean, we're still all guessing about what compute is going to be necessary. I fall on the side of... you run the numbers, and you're like, 86 billion neurons, 100 trillion synapses.

S1

Speaker 1

04:25:54

I don't think those all need to be weights. I don't think we need models that are quite that big, evaluated quite that often. I base that on – we've got reasonable estimates of what some parts of the brain do. We don't have the neocortex formula, but we kind of get some of the other sensory processing.

S1

Speaker 1

04:26:10

It doesn't feel like we need to... we can simulate that in computers with fewer weights. But still, it's probably going to be thousands of GPUs to be running a human-level AGI. Depending on how it's implemented, that might give you sort of a clan of 128 kind of run-in-batch people, depending on whether there's sparsity in the way the weights and things are set up. If it is a reasonably dense thing, then just the memory-bandwidth trade-offs mean you get 128 of them at the same time.

S1

Speaker 1

04:26:40

And either it's all feeding together and learning in parallel, or kind of all running together, kind of talking to a bunch of people. But still, if you've got thousands of GPUs necessary to run these things, it's going to be kind of expensive, where it might start off at $1,000 an hour or something, even post-development, which would be something that you would only use for a business, you know, something where you think they're going to help you make a strategic decision or point out something super important. But I also am completely confident that we will have another factor of 1,000 in cost-performance increase in AGI-type calculations. Not in general computing necessarily, but there's so much more that we can do with packaging, making those right trade-offs, all those same types of things, that in the next couple decades, a thousand X, easy.

S1

Speaker 1

04:27:31

And then you're down to a dollar an hour, and then you're kind of like, well, I should have an entourage of AIs that are, you know, following me around, helping me out on anything that I want them to do.
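
The arithmetic in that projection is worth making explicit; a back-of-the-envelope version, where the GPU count and cloud rate are assumed numbers chosen only to match the rough figures above:

```python
# Assumed inputs, for illustration only.
gpus = 2000                  # "thousands of GPUs" for one human-level AGI
dollars_per_gpu_hour = 0.50  # assumed cloud rate

launch_cost = gpus * dollars_per_gpu_hour
print(f"at launch: ${launch_cost:,.0f}/hour")          # ~$1,000/hour

improvement = 1000           # the projected factor-of-1,000 cost-performance gain
print(f"after {improvement}x: ${launch_cost / improvement:.2f}/hour")  # ~$1/hour

# If dense weights let you batch many agents through the same memory
# traffic, the marginal per-agent cost drops further still.
batch = 128
print(f"per agent, batch of {batch}: ${launch_cost / improvement / batch:.4f}/hour")
```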

S2

Speaker 2

04:27:42

That's one interesting trajectory. But I'll push back, because in that case, if you want to pay thousands of dollars, it should actually provide some value. I think it's easier and cheaper to provide value, via a dumb AI on the way towards AGI, by just having a friend. I think there's an ocean of loneliness in the world.

S2

Speaker 2

04:28:12

And I think an effective friend doesn't have to be perfect, doesn't have to be intelligent; it has to be empathic, having emotional intelligence, having the ability to remember things, having the ability to listen. Most of us don't listen to each other. One of the things about love, when you care about somebody, when you love somebody, is that you listen. And that is something we treasure about each other.

S2

Speaker 2

04:28:37

And if an AI can do that kind of thing, I think that provides a huge amount of value, and, very importantly, provides value in its ability to listen and understand versus provide really good advice. I think providing really good advice is very difficult; that's another, next-level step. I think it's just easier to do companionship.

S1

Speaker 1

04:29:05

Yeah, I wouldn't disagree. I mean, I think that there are very few things that I would argue can't be reduced to some kind of a narrow AI.

S1

Speaker 1

04:29:14

I think we can do a trillion dollars of value easily in all the things that can be done there. And a lot of it can be done with smoke and mirrors, without having to go the whole way. I mean, there's going to be the equivalent of the Doom version for the AGI: it's not really AGI, it's all smoke and mirrors, but it happens to do enough valuable things that it's enormously useful and valuable to people.

S1

Speaker 1

04:29:36

But at some point, you do want to get to the point where you have the fully general thing, and you stop making bespoke specialized systems for each thing, and you start using the higher-level language instead of writing everything in assembly language.

S2

Speaker 2

04:29:50

What about consciousness? The C word, do you think that's fundamental to solving AGI or is it a quirk of human cognition?

S1

Speaker 1

04:30:02

So I think most of the arguments about consciousness don't have a whole lot of merit. I think that consciousness is kind of the way the brain feels when it's operating. And I do generally subscribe to sort of the pandemonium theories of consciousness, where there's all these things bubbling around.

S1

Speaker 1

04:30:23

And I think of them as kind of slightly randomized, sparse distributed memory bit strings of things that are happening, recalling different associative memories. And eventually you get some level of consensus, and it bubbles up to the point of being a conscious thought there. And the little bits of stochasticity that are sitting on this as it cycles between different things and recalls different memories, that's largely our imagination and creativity. So I don't think there's anything deeply magical about it, certainly not symbolic.

S1

Speaker 1

04:30:54

I think it is generally the flow of these associations drawn up with stochastic noise overlaid on top of them. And so much of that depends on what you happen to have in your field of view as some other thought was occurring to you, that overlays and blends into the next key that queries your memory for things. And that kind of determines how your chain of consciousness goes.
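
The sparse distributed memory he is referring to is usually credited to Pentti Kanerva; a toy autoassociative version in Python, with the stochastic cue noise standing in for the imagination-and-creativity point above (all sizes are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, RADIUS = 128, 2000, 57   # word size, hard locations, activation radius

addresses = rng.integers(0, 2, (M, N))  # fixed random hard addresses
counters = np.zeros((M, N))             # the actual storage

def write(addr, data):
    """Store data at every hard location within Hamming RADIUS of addr."""
    near = np.count_nonzero(addresses != addr, axis=1) <= RADIUS
    counters[near] += 2 * data - 1       # accumulate data as +/-1

def read(addr, noise=0.0):
    """Recall by majority vote of nearby locations; noise jitters the cue."""
    cue = addr.copy()
    cue[rng.random(N) < noise] ^= 1      # stochastic bits in the query key
    near = np.count_nonzero(addresses != cue, axis=1) <= RADIUS
    return (counters[near].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, N)
write(pattern, pattern)                  # autoassociative store
print("exact cue :", np.mean(read(pattern) == pattern))
print("noisy cue :", np.mean(read(pattern, noise=0.05) == pattern))
```

Even a perturbed cue falls back to the stored pattern, which is the content-addressable, consensus-from-many-locations behavior the pandemonium picture gestures at.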

S2

Speaker 2

04:31:17

So that's kind of the qualia, the subjective experience of it; it's not essential for intelligence?

S1

Speaker 1

04:31:25

I don't think so. I don't think there's anything really important there.

S2

Speaker 2

04:31:28

What about some other human qualities, like fear of mortality and stuff like that? Like the fact that this ride ends, is that important? Like, you know, we've talked so much in this conversation about the value of deadlines and constraints.

S2

Speaker 2

04:31:43

Do you think that's important for intelligence?

S1

Speaker 1

04:31:45

That's actually a super-interesting angle that I don't usually take on that, about death being a deadline that forces you to make better decisions. Because I have heard people talk about how, if you have immortality, people are going to stop trying and working on things, because they've got all the time in the world. But I would say that I don't expect it to be a super-critical thing, that a sense of mortality and impending death is necessary there, because those are things that wind up providing reward signals to us, and we will be in control of the reward signals.

S1

Speaker 1

04:32:19

And there will have to be something fundamental that causes, that engenders, curiosity and goal-setting and all of that. Something is going to play in there at the reward level, whether it's positive or negative or both. I don't have any strong opinions on exactly what it's going to be, but that's the type of thing where, no doubt, it might be one of those half-dozen key things that has to be sorted out: exactly what the master reward, the meta-reward over all of the local task-specific rewards, has to be.

S2

Speaker 2

04:32:54

That could be that big negative reward of death. Maybe not death, but ability to walk away from an interaction. So it bothers me when people treat AI systems like servants.

S2

Speaker 2

04:33:06

So it doesn't bother me, but, I mean, it really is drawing the line between what an AI system could be. It's limiting the possibility of what an AI system could be. It's treating them just as tools. Now, that's, of course, from a narrow AI perspective; there are so many problems that narrow AI could solve, just like you said, in its form as a tool, but it could also be a being, which is much more than a tool.

S2

Speaker 2

04:33:38

And to become a being, you have to respect that thing for being a being, and for that, it has to be able to make its own decisions, to walk away, to say, I've had enough of you. I would like to break up with you now. You've not treated me well, and I would like to move on. So I think that, actually, that choice to end things...

S1

Speaker 1

04:34:04

So, a couple of things on that. On the one hand, it is kind of disturbing when you see people being mean to robots and mean to Alexa, whatever. And that seems to speak badly about humanity.

S1

Speaker 1

04:34:18

But there's also the exact opposite side of that, where you have so many people that imbue humanity in inanimate objects or things that are toys or that are relatively limited. So I think there may even be more danger about people putting more emotional investment into a lot of these proto-AIs in different ways. Yeah.

S2

Speaker 2

04:34:38

And then the AI would manipulate that, but-

S1

Speaker 1

04:34:41

But as far as the AI ethics sides of things, I really stay away from any of those discussions, or even really thinking about it. It's similar with the safety things, where I think it's just premature. There's a certain class of people that enjoy thinking about impractical things, things that are not in the world of pragmatic effect around you.

S1

Speaker 1

04:35:03

And I think that, again, because I don't think there's going to be a fast takeoff, we actually will have time to have these debates when we know the shape of what we're debating. And some people do take a principled approach, where they think it's going to go too fast, that you really do need to get ahead of it, that you need to be thinking about this because we have slow processes of coming to any kind of consensus or even coming up with ideas about this, and maybe that's true. I wouldn't put any of my money or funding into something like that, because I don't think it's a problem yet. And when we have these signs of life, when we've got our learning-disabled toddler, we should really start talking about some of the safety and ethics issues, but probably not before then.

S2

Speaker 2

04:35:47

Can you elaborate briefly about why you don't think there'll be a fast takeoff? Is there some deep intuition you have about it? Is it because it's grounded in the physical world or why?

S1

Speaker 1

04:35:58

Yeah, so it is my belief that we're going to start off with something that requires thousands of GPUs. And I don't know if you've tried to go get a thousand GPU instance on a cloud any time recently, but these are not things that you can just go spin up hundreds of. There are real challenges to, I mean, these things are going to take data centers, and data centers take years to build.

S1

Speaker 1

04:36:22

In the last few years, we've seen a few of them kind of coming up, going in different places. They're big engineering efforts. You can hear people bemoan the fact that, oh, the network was wired all wrong, and it took them a month to go unwire it and rewire it the right way. These aren't things that you can just magic into existence.

S1

Speaker 1

04:36:40

And the ideas, like the old tropes about how it's going to escape onto the internet and take over other systems: the fast-takeoff ones are clearly nonsense, because you just can't open TCP connections above a certain rate. No matter how smart you are, even if you have perfect hacking ability, that take-over-the-world-in-an-instant sort of thing just isn't plausible at all. And even if you had access to all of the resources, these are going to be specialized systems, where you're going to wind up with something that is architected around exactly this chip with this interconnect, and it's not just going to be able to be plopped somewhere else. Now, interestingly, it is going to be something where the entire code for all of it will easily fit on a thumb drive.

S1

Speaker 1

04:37:23

That's total spy-movie-thriller sorts of things, where you could have, hey, we cracked the secret AGI, and it fits on this thumb drive, and anyone could steal it. Now, they're still gonna have to build the right data center to deploy it, and have the right kind of life-experience curriculum to take it up to the point where it's valuable. But the real core of it, the magic that's gonna happen there, is going to be very small. It's, again, tens of thousands of lines of code, not millions of lines of code.

S2

Speaker 2

04:37:48

It is possible to imagine a world, as you mentioned, this spy-thriller view: if it's just a few lines of code, we can imagine a world where the surface of computation is growing, maybe growing exponentially, meaning, you know, the refrigerators start getting a GPU, and, first of all, the smartphones, the billions of smartphones. But maybe, if there become highways through which code can spread across the entirety of the computation surface, then you no longer have to book AWS GPUs.

S1

Speaker 1

04:38:32

There are real fundamental issues there. When you start getting down to taking an actual problem and putting it on an abstract machine like that, that has not worked out well in practice. It's always been easy to come up with ways to compute faster, say more flops or more giga-ops or whatever.

S1

Speaker 1

04:38:52

That's usually the easy part. But you then have interconnect, and then memory for what goes into it. And when you talk about cell phones, well, you're limited to like a 5G connection or something on that. And if you take your calculation and you factor it across a million cell phones instead of a thousand GPUs in a warehouse, you might be able to have some kind of a substrate like that, but it could be operating at one one-thousandth the speed.

S1

Speaker 1

04:39:22

And so, yes, you could have an AGI working there, but it wouldn't be a real-time AGI. It would be something that is operating at really a snail's pace, much, much slower than kind of human-level thought for things. I'm not worried about that problem.
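
A rough feel for the gap he's describing, with loose assumed numbers for the two interconnects (the exact figures don't matter, only the orders of magnitude):

```python
# Assumed, illustrative bandwidths in bytes/second.
chassis_link = 600e9    # GPU-to-GPU link inside one machine
cellular_link = 12.5e6  # ~100 Mbit/s phone connection

gap = chassis_link / cellular_link
print(f"interconnect gap: ~{gap:,.0f}x")   # tens of thousands of times slower
```

If every training or inference step has to synchronize state over the slow link, wall-clock speed is set by that gap, not by the total flops of a million phones, which is the snail's-pace point.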

S2

Speaker 2

04:39:36

You're transferring the problem into the interconnect, the communication, the shared memory, the collective intelligence aspect of it, which is extremely difficult as well.

S1

Speaker 1

04:39:46

I mean, it's back to the very earliest days of supercomputers. You still have the balance between bandwidth, storage, and computation. And sometimes it's easier to get one or the other, but it's been remarkably constant across all those years that you still need all three.

S2

Speaker 2

04:40:03

What about your efforts now? You mentioned to me that you're really committing to AI at this stage. What do you see your life in the next few months or years looking like? What do you hope to achieve here?

S1

Speaker 1

04:40:18

So I literally just this week signed a term sheet to take some investment money for my company, where for the last two years I had backed off from Meta, and I was still doing my consulting-CTO role there, but I had styled it as taking the Victorian gentleman-scientist route, where I was going to be the wealthy person that was going to go pursue science and learn about this and do experiments. And honestly, I'm surprised there aren't more people like that, like me: technical people that made a bunch of money and are interested in some of these, possibly the biggest leverage point in human history. I mean, I've heard of a couple organizations that are basically led by one rich techie guy that gets a few people around him to try to work on this.

S1

Speaker 1

04:41:06

But I'm surprised that there's not more, that there aren't like a dozen of them. I mean, maybe people still think that it's an unapproachable problem, that it's kind of beyond their ability to get a wrench on and have some effect on, unlike whatever startups they've run before. But that was my kind of... like with all the stuff I've learned, whether it's gaming, aerospace, whatever, I go through a larval phase where I'm like, okay, I'm sucking up all of this information, trying to see, is this something that I can actually do? Is this something that's practical to devote a large chunk of my life to?

S1

Speaker 1

04:41:41

And I've gone through that with the AI, machine learning space of things. And I think I've got my arms around it. I've got the measure of it, where some of the most brilliant people in the world are working on this problem, but nobody knows exactly the path it's going to take. We're throwing a lot of things at the wall and seeing what sticks.

S1

Speaker 1

04:42:02

But, you know, another interesting thing, just learning about all of this, is the contingency of your path to knowledge, talking about the associations and the context that you have with them, where people that learn along the same path will have similar thought processes. And I think it's useful that I come at this from a different background, a different history, than the people that have had the largely academic backgrounds for this, where I have huge blind spots that they could easily point out, but I have a different set of experiences and history and approaches to problems and systems engineering that might turn out to be useful. And I can afford to take that bet, where I'm not going to be destitute; I have enough money to fund myself working on this for the rest of my life.

S1

Speaker 1

04:42:49

But what I was finding is that I was still not committing, where I had a foot firmly in the VR and Meta side of things, where, in theory, I've got a very nice position there. I only have to work one day a week for my consulting role, but I was engaging every day. My computer's there; I'd be going and checking the workplace notes and testing different things and communicating with people.

S1

Speaker 1

04:43:15

But I did make the decision recently that, no, I'm going to get serious. I'm still going to keep my ties with Meta, but I am seriously going for the AGI side of things.

S2

Speaker 2

04:43:28

And it's actually a really interesting point because a lot of the machine learning, the AI community is quite large, but really basically almost everybody has taken the same trajectory through life in that community. And it's so interesting to have somebody like you with a fundamentally different trajectory. And that's where the big solutions can come because there's a kind of silo.

S2

Speaker 2

04:43:51

And it is a bunch of people kind of following the same kind of set of ideas.

S1

Speaker 1

04:43:55

And I was really worried that I didn't want to come off as an arrogant outsider for things, where I have all the respect in the world for the work... you know, it's been a miracle decade. We're in the midst of a scientific revolution happening now, and everybody doing this, these are the Einsteins and Bohrs and whatevers of our modern era. And I was really happy to see that the people that I sat down and talked with, everybody does seem to really be quite great about it: just happy to talk about things, willing to acknowledge that we don't know what we're doing.

S1

Speaker 1

04:44:28

We're figuring it out as we go along. And I mean, I've got a huge debt on this, where this all really started for me because Sam Altman basically tried to recruit me to OpenAI. And it was at a point when I didn't know anything about what was really going on in machine learning. And in fact, it's funny how the first time you reached out to me, it was like four years ago, for your AI podcast.

S2

Speaker 2

04:44:53

Yeah, people who are listening to this should know that, first of all, obviously, I've been a huge fan of yours for the longest time, but we agreed to talk, like, yeah, like four years ago, back when this was called the Artificial Intelligence Podcast. We wanted to do a thing, and you said yes, and then...

S1

Speaker 1

04:45:13

And I said, it's like, I don't know anything about modern AI. That's right. I said I could kind of take an angle on machine perception, because I'm doing a lot of that with the sensors and the virtual reality, but we could probably find something to talk about.

S2

Speaker 2

04:45:24

And so, I mean, that's when... when did Sam talk to you about OpenAI? Around the same time?

S1

Speaker 1

04:45:30

No, it was a little bit, it was a bit after that. So I had done the most basic work. I had kind of done the neural networks from scratch, where I had gone and written it all in C, just to make sure I understood backpropagation at the lowest level, in my nuts-and-bolts approach.
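
For flavor, a minimal version of that from-scratch exercise, here in Python rather than the C he describes: one hidden layer, sigmoid output, squared error, and the chain rule written out by hand (the toy task and all sizes are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))                    # toy inputs
y = (X[:, :1] * X[:, 1:] > 0).astype(float)     # XOR-of-signs target

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr, n = 0.5, len(X)

for epoch in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # backward pass, by hand
    d_out = (out - y) * out * (1 - out) / n     # error signal through the sigmoid
    d_h = (d_out @ W2.T) * (1 - h ** 2)         # backprop through the tanh
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

print("train accuracy:", ((out > 0.5) == y).mean())
```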

S1

Speaker 1

04:45:46

But after Sam approached me, it was flattering to think that he thought I could be useful at OpenAI, largely for kind of systems-optimization sorts of things, without being an expert. But I asked Ilya Sutskever to give me a reading list, and he gave me a binder full of all the papers, like, okay, these are the important things. If you really read and understand all of these, you'll know like 80% of what most of the machine learning researchers work on. And I went through and read all those papers multiple times, and highlighted them, and went through and kind of figured the things out there, and then started branching out into my own sets of research on things.

S1

Speaker 1

04:46:28

And I actually started writing my own experiments and kind of figuring out, you know, finding out what I don't know, what the limits of my knowledge are, and starting to get some of my angles of attack on things, the things that I think are a little bit different from what people are doing. And I've had a couple years now, like two years since I kind of left the full-time position at Meta. And now I've kind of pulled the trigger and said, I'm going to get serious about it. But some of my lessons go all the way back to Armadillo Aerospace, about how I know I need to be more committed to this, where there is both a freedom and a cost in some ways when you know that you're wealthy enough to say, this doesn't really mean anything.

S1

Speaker 1

04:47:12

I can spend a million dollars a year for the rest of my life and it doesn't mean anything, it's fine. But that is an opportunity to just kind of meander and I could see that in myself when I'm doing some things. It's like, oh, this is a kind of interesting, curious thing. Let's look at this for a little while.

S1

Speaker 1

04:47:30

Let's look at that. It's not really bearing down on the problem. So there's a few things that I've done that are kind of tactics for myself to make me more effective. Like, one thing I noticed I was not doing well is, I had a Google Cloud account to get GPUs there, and I was finding I was very rarely doing that, for no good psychological reasons, where I'm like, oh, I can always think of something to do other than spin up instances and run an experiment. I can keep working on my local Titans or something.

S1

Speaker 1

04:47:56

I can keep working on my local Titans or something. But it was really stupid. I mean, it was not a lot of money. I should have been running more experiments there.

S1

Speaker 1

04:48:05

So I thought to myself, well, I'm going to go buy a quarter-million-dollar DGX Station, and I'm going to just sit it right there, and it's going to mock me if I'm not using it. If the fans aren't running on that thing, I'm not properly utilizing it.

S1

Speaker 1

04:48:18

And that's been helpful. You know, I've done a lot more experiments since then. It's been interesting, where I thought I'd be doing all this low-level, NVLink-optimized stuff, but 90% of what I do is just spin up four instances of an experiment with different hyperparameters on it.
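
That workflow is simple enough to sketch; a hypothetical local launcher, where train.py, its flags, and the four-GPU pinning are stand-ins for whatever the actual experiment script looks like:

```python
import itertools
import os
import subprocess

learning_rates = [1e-3, 3e-4]
batch_sizes = [64, 256]

procs = []
for i, (lr, bs) in enumerate(itertools.product(learning_rates, batch_sizes)):
    cmd = ["python", "train.py",            # hypothetical experiment script
           f"--lr={lr}", f"--batch-size={bs}", f"--run-name=sweep_{i}"]
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(i % 4))  # one GPU each
    procs.append(subprocess.Popen(cmd, env=env))

for p in procs:   # wait for all four runs to finish
    p.wait()
```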

S2

Speaker 2

04:48:32

Oh, interesting. You're doing like really sort of building up intuition by doing ML experiments of different kinds.

S1

Speaker 1

04:48:40

But the next big thing is, I decided that I was going to take some investor money, because I have an overactive sense of responsibility about other people's money. I mean, a lot of my push and my passionate entreaties for things at Meta are because I don't want Zuck to have wasted his money investing in Oculus. I want it to work out.

S1

Speaker 1

04:49:06

I want it to change the world. I want it to be worth all of this time, money and effort going into it. And I expect that it's going to be like that with my company where-

S2

Speaker 2

04:49:17

It's a huge forcing function, this investment.

S1

Speaker 1

04:49:19

Investors are going to expect something of me. Now, we've all had the conversation that this is a low-probability, long-term bet.

S1

Speaker 1

04:49:28

There's a million things I could do where I would have line of sight on the value proposition; this isn't that. I think there are unknown unknowns in the way, but it's one of these things that, it's hyperbole, but it's potentially one of the most important things humans ever do, and it's something that I think is within our lifetimes, if not within a decade, to happen. So, yeah, this is just now happening, like, the term sheet, the virtual ink's barely dry on it.

S2

Speaker 2

04:49:57

It's drying. I mean, as I mentioned to you offline, somebody I admire, somebody you know: Andrej Karpathy. I think the two of you have different trajectories in life, but approach problems similarly, in that he codes stuff up from scratch all the time.

S2

Speaker 2

04:50:14

And he's created a bunch of little things, even outside the course at Stanford, that have been tremendously useful to build up intuition about stuff, but also to help people, and they're all in the realm of AI. Do you see yourself potentially doing things like this? Not necessarily solving a gigantic problem, but on the journey, on the path to that, building up intuitions and sharing code or ideas or systems that give inklings of AGI, but also kind of are useful to people in some way.

S1

Speaker 1

04:50:55

So, yeah, first of all, Andrej is awesome. I learned a lot when I was going through my larval phase from his blog posts and his Stanford course; you know, super valuable. I got to meet him first a couple years ago, when I was first kind of starting off on my gentleman-scientist bit.

S1

Speaker 1

04:51:11

And just a couple months ago, when he went out on his sabbatical, he stopped by in Dallas, and we talked for a while, and I had a great time with him. And then when I heard he actually left Tesla, I did, of course, along with 100 other people, say, hey, if you ever want to work with me, it would be an honor. So he thinks that he's going to be doing this educational work, but I think someone's going to make him an offer he can't refuse before he gets too far along on it.

S2

Speaker 2

04:51:36

Oh, his current interest is educational. Yeah, he's a special mind. Is there something you could speak to about what makes him so special, from your understanding?

S1

Speaker 1

04:51:46

He was very much a programmer's programmer that was doing machine learning work, rather than... it's a different feel than an academic, where you can see it in papers sometimes, when somebody that's really a mathematician or a statistician at heart is doing something with machine learning.

S1

Speaker 1

04:52:04

But Andrej is about getting something done. And you could see it in all of his earliest approaches: it's like, okay, here's how reinforcement learning works, here's how recurrent neural networks work, here's how transformers work.

S1

Speaker 1

04:52:16

Here's how crypto works. And, yeah, he's just a hacker. One of his old posts was like a hacker's guide to machine learning. And he deprecated that and said, don't really pay attention to what's in here. But it's that thought that carries through in a lot of it, where it is that, back again to that hacker mentality and the hacker ethic, with what he's doing and sharing all of it.

S2

Speaker 2

04:52:40

Yeah. And a lot of his approach to a new thing, like you said, the larval stage, is: let me code up the simplest possible thing to build up intuition about it.

S1

Speaker 1

04:52:50

Yeah, like I say, I sketch with structs and things. When I'm just thinking about a problem, I'm thinking in some degree of code.

S2

Speaker 2

04:52:58

You are also among many things a martial artist, both Judo and Jiu-Jitsu. How has this helped make you the person you are?

S1

Speaker 1

04:53:06

So, I mean, I was a competent club player in judo and grappling. I was, you know, by no means any kind of a superstar, but I went through a few phases with it, where I did some when I was quite young, a little bit more when I was 17, and then I got into it kind of seriously in my mid-30s. And I went pretty far with it, and I was pretty good at some of the things that I was doing.

S1

Speaker 1

04:53:32

And I did appreciate it quite a bit, where, I mean, on the one hand, if you're going to do exercise or something, it's a more motivating form of exercise. If someone is crushing you, you are motivated to do something about that, to up your attributes and be better about getting out of that.

S2

Speaker 2

04:53:48

Up your attributes, yes.

S1

Speaker 1

04:53:51

But there's also that sense that I was not a sports guy. I did do wrestling in junior high, and I often wish... I think it would have been good for me if I'd carried that on into high school and had a little bit more of that. I mean, I felt a little bit of the wrestling vibe, with all that was going on about embracing the grind, and that push that I associate with the wrestling team, that, in hindsight, I wish I had gone through and pushed myself that way.

S1

Speaker 1

04:54:21

But even getting back into judo and jiu-jitsu in my mid-30s, as usually the old man on the mat, there was still that sense of working out with the group, having the guys that you're beating each other up with, but you just feel good coming out of it. And I can remember driving home, aching in various ways, and just thinking, oh, that was really great. And it's mixing with a bunch of people that had nothing to do with any of the things that I worked with. Every once in a while, someone would be like, oh, you're the Doom guy.

S1

Speaker 1

04:54:59

But for the most part, it was just a different slice of life, a good thing. I made the call when I was 40 that, maybe I'm getting a little old for this. I had separated a rib and tweaked a few things, and I got out of it without any really bad injuries, and it was like, have I dodged enough bullets? Should I, you know, should I hang it up?

S1

Speaker 1

04:55:19

I went back... I've gone a couple times in the last decade, trying to get my kids into it a little bit. It didn't really stick with any of them, but it was fun to get back on the mats. It really hurts for a while when you haven't gone for a while. But I still debate this pretty constantly.

S1

Speaker 1

04:55:37

My brother's only a year younger than me, and he's going kind of hard in jujitsu right now. And he won a few medals at the last tournament he was at.

S2

Speaker 2

04:55:45

He's competing, too.

S1

Speaker 1

04:55:46

Yeah. And I was thinking, yeah, I guess we're in the executive division if you're over 50, or over 45 or something. And it's not out of the question that I'd go back at some point to do some of this. But again, I'm just reorganizing my life around more focus, so it's probably not gonna happen.

S1

Speaker 1

04:56:04

I'm pushing my exercise around to give me longer uninterrupted intellectual focus time, pushing it to the beginning or the end of the day.

S2

Speaker 2

04:56:11

Like running and stuff like that, walking, yeah.

S1

Speaker 1

04:56:14

Yeah, running and calisthenics and some things like that.

S2

Speaker 2

04:56:17

But it allows you to still think about a problem.

S1

Speaker 1

04:56:19

But if you go into a judo club or something, you've got a fixed time. It's going to be 7 o'clock or whatever, 10 o'clock on Saturday. Although, I talked about this a little bit when I was on Rogan.

S1

Speaker 1

04:56:30

And shortly after that, Carlos Machado did reach out, and I had trained with him for years back in the day. And he was like, hey, we've got kind of a small private club with a bunch of kind of executive-type people. And it does tempt me.

S2

Speaker 2

04:56:45

Yeah, I don't know if you know him, but John Danaher moved here to Austin, with Gordon Ryan and a few other folks. And he has a very interesting, very deep, systematic way of thinking about jiu-jitsu that reveals the chess of it, like the science of it.

S1

Speaker 1

04:57:06

And I do think about that more as kind of an older person considering the martial arts, where I can remember the very earliest days getting back into judo, and I'm like, teach me submissions right now. It's like, learn the armbar, learn the choke. But as you get older, you start thinking more about, like, okay, I really do want to learn the entire canon of judo.

S1

Speaker 1

04:57:25

It's like all the different things there, and all the different approaches for it. Not just, if you want to compete, there's a handful of things you learn really, really well; sometimes there's interest in learning a little bit more of the scope there and figuring some things out. At one point I had, it wasn't exactly a spreadsheet, but I did have a big, long text file with, here's the things that I learned, ways you chain this together. And when I went back a few years ago, it was good to see that I whipped myself back into reasonable shape about doing the basic grappling, but I know there was a ton of the subtleties that were just gone, but could probably be brought back reasonably quickly.

S2

Speaker 2

04:58:04

And there's also the benefit... I mean, you're exceptionally successful now, you're brilliant, and the old problem of the ego

S1

Speaker 1

04:58:16

is... I still push kind of harder than I should. I mean, I was one of those people... yeah, I'm on the smaller side for a lot of the people competing, and I'd go with all the big guys and I'd go hard

S1

Speaker 1

04:58:30

and I'd push myself a lot. And that would be one of those where I'd be dangerous to anyone for the first five minutes, but then sometimes after that I'm already dead. And I knew it was terrible for me, because it meant I got less training time with all of that, when you go and you just gas out relatively quickly there.

S1

Speaker 1

04:58:50

And I like to think that I would be better about that, where after I gave up judo, I started doing the half marathons and Tough Mudders and things like that. And so when I did go back to the local judo club, I thought, oh, I should have better cardio for this, because I'm a runner now and I do all of this. And it didn't work out that way.

S1

Speaker 1

04:59:08

It was the same old thing, where I'd just push really hard, strain really hard. And of course, when I worked with good guys like Carlos, it's like, hey, the whole flow-like-water thing is real. And he's just like...

S2

Speaker 2

04:59:20

That's true with judo, too. Some of the best people... I've trained with Olympic gold medalists, and for some reason, with them, everything's easier.

S2

Speaker 2

04:59:29

Everything is you actually start to feel the science of it, the music of it, the dance of it. Everything's effortless. You understand that there's an art to it. It's not just an exercise.

S1

Speaker 1

04:59:43

It was interesting when I did go to the Kodokan in Japan, kind of the birthplace of judo and everything. And I remember I rolled with one old guy; we didn't start standing, just started on groundwork, and it was striking how different it was from Carlos.

S1

Speaker 1

04:59:58

He was still, he was better than me and he was still, you know,