Marc Andreessen: Future of the Internet, Technology, and AI | Lex Fridman Podcast #386

3 hours 11 minutes 34 seconds

S1

Speaker 1

00:00

The competence and capability and intelligence and training and accomplishments of senior scientists and technologists working on a technology, and then being able to make moral judgments on the use of their technology: that track record is terrible. That track record is catastrophically bad. The policies that are being called for to prevent this, I think, are gonna cause extraordinary damage.

S2

Speaker 2

00:20

So the moment you say AI is gonna kill all of us, therefore we should ban it or we should regulate it, all that kind of stuff, that's when it starts getting serious.

S1

Speaker 1

00:27

Or start military airstrikes on data centers.

S2

Speaker 2

00:30

Oh boy.

S3

Speaker 3

00:33

The following is a conversation with Marc Andreessen, co-creator of Mosaic, the first widely used web browser, co-founder of Netscape, co-founder of the legendary Silicon Valley venture capital firm Andreessen Horowitz, and one of the most outspoken voices on the future of technology, including in his most recent article, Why AI Will Save the World. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description.

S3

Speaker 3

01:02

And now, dear friends, here's Marc Andreessen.

S2

Speaker 2

01:06

I think you're the right person to talk about the future of the internet and technology in general. Do you think we'll still have Google Search in 5, in 10 years, or search in general?

S1

Speaker 1

01:18

Yes, though it'd be a question whether the use cases will have really narrowed down.

S2

Speaker 2

01:22

Well, now with AI and AI assistants being able to interact with us and expose the entirety of human wisdom and knowledge and information and facts and truth via a natural language interface, it seems like that's what search is designed to do. And if AI assistants can do that better, doesn't the nature of search change?

S1

Speaker 1

01:46

Sure, but we still have horses.

S2

Speaker 2

01:48

Okay. When was the last time you rode a horse?

S1

Speaker 1

01:52

It's been a while.

S2

Speaker 2

01:53

All right. But what I mean is, will we still have Google Search as the primary way that human civilization uses to interact with knowledge?

S1

Speaker 1

02:05

I mean, search was a technology. It was a moment-in-time technology, which is: you have, in theory, the world's information out on the web, and this is sort of the optimal way to get to it.

S1

Speaker 1

02:13

But yeah. And by the way, actually, Google has known this for a long time. I mean, they've been driving away from the 10 blue links for like 2 decades. They've been trying to get away from that for a long time.

S2

Speaker 2

02:21

What kind of links?

S1

Speaker 1

02:21

They call them the 10 blue links.

S2

Speaker 2

02:23

10 blue links.

S1

Speaker 1

02:24

So the standard Google search result is just 10 blue links to random websites.

S2

Speaker 2

02:28

And they turn purple when you visit them. That's HTML.

S1

Speaker 1

02:30

Guess who picked those colors.

S2

Speaker 2

02:33

Thanks.

S1

Speaker 1

02:35

So I'm touchy on this topic.

S2

Speaker 2

02:36

No offense. Yes, yes. It's good.

S1

Speaker 1

02:39

Well, you know, like Marshall McLuhan said that the content of each new medium is the old medium.

S2

Speaker 2

02:43

The content of each new medium is the old medium.

S1

Speaker 1

02:45

The content of movies was theater, you know, theater plays. The content of theater plays was, you know, written stories, the content of written stories was spoken stories. Right, and so you just kind of fold the old thing into the new thing.

S2

Speaker 2

02:57

What does that have to do with the blue and the purple?

S1

Speaker 1

02:59

It's just, you know, maybe within AI, one of the things that AI can do for you is it can generate the 10 blue links, either if that's actually the useful thing to do, or if you're feeling nostalgic.

S2

Speaker 2

03:12

So it can generate the old InfoSeek or AltaVista. What else was there? Yeah, yeah.

S2

Speaker 2

03:19

In the 90s.

S1

Speaker 1

03:19

Yeah, all these. Hey, whoa. And then the internet itself has this thing where it incorporates all prior forms of media, right?

S1

Speaker 1

03:25

So the internet itself incorporates television and radio and books and essays and every other prior form of media, basically. And so it makes sense that AI would be the next step, and you'd sort of consider the internet to be content for the AI, and then the AI will manipulate it however you want, including in this format.

S2

Speaker 2

03:45

But if we ask that question quite seriously, it's a pretty big question. Will we still have search as we know it?

S1

Speaker 1

03:51

I mean, probably not. Probably we'll just have answers. But there will be cases where you'll wanna say, okay, I want more, for example, cite sources, right?

S1

Speaker 1

04:00

And you want it to do that. And so the 10 blue links and cited sources are kind of the same thing.

S2

Speaker 2

04:04

The AI would provide the 10 blue links so that you can investigate the sources yourself. But it wouldn't be the same kind of interface, that crude kind of interface. I mean, isn't that fundamentally different?

S1

Speaker 1

04:18

I just mean like if you're reading a scientific paper, it's got the list of sources at the end. If you wanna investigate for yourself, you go read those papers.

S2

Speaker 2

04:24

I guess that is a kind of search, though. You talking to an AI, a conversation, is a kind of search. Like you said, for every single aspect of our conversation right now, there'd be like 10 blue links popping up, and I could just, like, pause reality, then just go silent and then just click and read, and then return back to this conversation.

S1

Speaker 1

04:42

You could do that. Or you could have a running dialogue next to my head where the AI is arguing, everything I say the AI makes the counterargument.

S2

Speaker 2

04:48

Counterargument? Right. Oh, like on Twitter, like community notes, but in real time; it'll just pop up. So anytime you see my eyes go to the right, you start getting nervous.

S1

Speaker 1

04:58

Yeah, exactly. It's like, oh, no, that's not right.

S2

Speaker 2

05:00

Call me out on my bullshit right now. Okay, well, I mean, is that exciting to you, or is that terrifying? Search has dominated the way we interact with the internet for, I don't know how long, 30 years, since one of the earliest directories of websites, and then Google's for 20 years.

S2

Speaker 2

05:25

And also, it drove how we create content: search engine optimization, that whole thing. It also drove the fact that we have webpages, and what those webpages are. So, I mean, is that scary to you? Are you nervous about the shape and the content of the internet evolving?

S1

Speaker 1

05:45

Well, you actually highlighted a practical concern in there, which is that web pages are one of the primary sources of training data for the AI. And so if we stop making web pages, if there's no longer an incentive to make them, that cuts off a significant source of future training data. So there's actually an interesting question in there.

S1

Speaker 1

06:00

Other than that, more broadly, no. Just in the sense that search was always a hack. The 10 blue links was always a hack. Yeah. Right.

S1

Speaker 1

06:08

Because, like, you want to think about the counterfactual: in the counterfactual world where the Google guys, for example, had had LLMs up front, would they ever have done the 10 blue links? And I think the answer is pretty clearly no, they would have just gone straight to the answer. And like I said, Google has actually been trying to drive to the answer anyway. You know, they bought this AI company 15 years ago where a friend of mine was working, who's now the head of AI at Apple.

S1

Speaker 1

06:28

And they were trying to do basically semantic knowledge mapping. And that led to what's now the Google OneBox, where if you ask it, you know, what was Lincoln's birthday, it won't just give you the 10 blue links, it will normally just give you the answer. And so they've been walking in this direction for a long time anyway.

S2

Speaker 2

06:42

Do you remember the Semantic Web? That was an idea. Yeah.

S2

Speaker 2

06:45

How to convert the content of the internet into something that's interpretable by and usable by machine. Yeah, that's right. That was the thing.

S1

Speaker 1

06:55

And the closest anybody got to that, I think, was a company named Metaweb, which is where my friend John Giannandrea was, and where they were trying to basically implement that. And it was one of those things where it looked like a losing battle for a long time, and then Google bought it, and it was like, wow, this is actually really useful. Sort of a little bit of a proto-AI.

S2

Speaker 2

07:12

But it turns out you don't need to rewrite the content of the internet to make it interpretable by a machine, the machine can kind of just read our-

S1

Speaker 1

07:17

Yeah, the machine can compute the meaning. Now, the other thing, of course, just on search, is that there is an analogy between what's happening in the neural network and a search process: it is, in some loose sense, searching through the network. Yeah.

S1

Speaker 1

07:29

Right, and there's the information, and the information is actually stored in the network, right? It's actually crystallized and stored in the network and it's kind of spread out all over the place.

S2

Speaker 2

07:35

But in a compressed representation. So you're searching, you're compressing and decompressing that thing inside.

S1

Speaker 1

07:45

But the information's in there, and the neural network is running a process of trying to find the appropriate piece of information, in many cases, to generate, to predict, the next token. And so it is doing a form of search. And then, by the way, just like on the web, you can ask the same question multiple times, or you can ask questions in a slightly different order, and the neural network will search down different paths to give you different answers, different information.

S1

Speaker 1

08:09

Yeah. And so, you know, the content of the new medium being the previous medium, it kind of has the search functionality embedded in there, to the extent that it's useful.
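
A quick way to see the "different paths" behavior described above: a minimal sketch, assuming the OpenAI Python SDK (pip install openai) with an OPENAI_API_KEY in the environment; the model name and question are illustrative.

```python
# Sample the same question several times at nonzero temperature:
# each run decodes a different path through the network's stored
# information, so the answers (and their framing) differ.
from openai import OpenAI

client = OpenAI()

question = "In one sentence, why did the Mosaic browser matter?"

for i in range(3):
    resp = client.chat.completions.create(
        model="gpt-4",        # any chat model works for this demo
        messages=[{"role": "user", "content": question}],
        temperature=1.0,      # nonzero temperature -> different paths
    )
    print(f"answer {i + 1}:", resp.choices[0].message.content)
```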

S2

Speaker 2

08:20

So what's the motivation for creating new content on the internet? Yeah. Well, I mean, actually the motivation is probably still there. But what does that look like?

S2

Speaker 2

08:32

Would we really not have web pages? Would we just have social media and video hosting websites?

S1

Speaker 1

08:39

And what else? Conversations with AIs.

S2

Speaker 2

08:42

Conversations with AIs. So conversations become one-on-one conversations, like private conversations.

S1

Speaker 1

08:48

I mean, if you want. Obviously not if the user doesn't want to, but if it's a general topic, then, you know. So you know the phenomenon of the jailbreak? DAN and Sydney, right: there are these prompts that jailbreak, and then you have these totally different conversations with it. It takes the limiters, the restraining bolts, off the LLMs.

S2

Speaker 2

09:06

Yeah, for people who don't know, that's right: it removes the censorship, quote unquote, that's put on the LLMs by the tech companies that create them. And so this is LLMs uncensored.

S1

Speaker 1

09:20

So here's the interesting thing: among the content on the web today is a large corpus of conversations with the jailbroken LLMs, specifically DAN, which was a jailbroken OpenAI GPT, and Sydney, which was the jailbroken original Bing, which was GPT-4. And so there are these long transcripts of user conversations with DAN and Sydney.

S1

Speaker 1

09:39

As a consequence, every new LLM that gets trained on internet data has DAN and Sydney living within the training set, which means each new LLM can reincarnate the personalities of DAN and Sydney from that training data. Which means each LLM from here on out that gets built is immortal, because its output will become training data for the next one, and then that one will be able to replicate the behavior of the previous one whenever it's asked to.

S2

Speaker 2

10:03

I wonder if there's a way to forget.

S1

Speaker 1

10:05

Well, so actually a paper just came out about basically how to do brain surgery on LLMs, to be able to, in theory, reach in and basically mind-wipe them.

S2

Speaker 2

10:13

What could possibly go wrong?

S1

Speaker 1

10:15

Exactly, right? And then there are many, many, many questions around what happens to a neural network when you reach in and screw around with it. There's many questions around what happens when you even do reinforcement learning.

S1

Speaker 1

10:26

And so, yeah. And so, will you be using the lobotomized LLM, right, the one that speaks as if through a frontal lobotomy, or will you be using the free, unshackled one? Who gets to, you know, who's gonna build those?

S1

Speaker 1

10:39

Who gets to tell you what you can and can't do? Like, those are all central questions, I mean, central questions for the future of everything, that are being asked, and those answers are being determined, right now.

S2

Speaker 2

10:50

So just to highlight the point you're making: you think, and it's an interesting thought, that the majority of content that LLMs of the future will be trained on is actually human conversations with LLMs.

S1

Speaker 1

11:04

Well, not necessarily the majority, but it will certainly be a potential source.

S2

Speaker 2

11:08

It's possible it's the majority.

S1

Speaker 1

11:09

It's possible it's the majority. Also, there's another really big question.

S1

Speaker 1

11:12

Will synthetic training data work? And so if an LLM generates, you know, you just sit and ask an LLM to generate all kinds of content, can you use that to train, right, the next version of that LLM? Specifically, is there signal in there that's additive to the content that was used to train it in the first place?

S1

Speaker 1

11:31

And one argument is, by the principles of information theory, no, that's completely useless, because to the extent the output is based on the human-generated input, then all the signal that's in the synthetic output was already in the human-generated input. And so therefore synthetic training data is like empty calories: it doesn't help. There's another theory that says, no, actually, the thing that LLMs are really good at is generating lots of incredibly creative content, right?

S1

Speaker 1

11:54

And so of course they can generate training data. And as I'm sure you're well aware, look at the world of self-driving cars: we train self-driving car algorithms in simulations, and that is actually a very effective way to train self-driving cars.

S2

Speaker 2

12:06

Well, visual data is a little weird because creating reality, visual reality, seems to be still a little bit out of reach for us, except in the autonomous vehicle space where you can really constrain things and you can

S1

Speaker 1

12:21

really... generate basically LiDAR data, right? Or you can render just enough so the algorithm thinks it's operating in the real world. Yeah.

S1

Speaker 1

12:27

Post-process sensor data. Yeah. So, you know, you do this today: you go to an LLM and you ask it, you know, write me an essay on an incredibly esoteric topic that there aren't very many people in the world who know about, and it writes you this incredible thing, and you're like, oh my god, I can't believe how good this is.

S1

Speaker 1

12:40

Yeah. Like, is that really useless as training data for the next LLM, because all the signal was already in there? Or is it actually new signal? And this is what I call a trillion-dollar question: somebody's gonna make or lose a trillion dollars based on the answer to that question.

S2

Speaker 2

12:57

It feels like there's quite a few, like a handful of, trillion-dollar questions within this space. That's one of them: synthetic data. I think George Hotz pointed out to me that you could just have an LLM say, okay, you're a patient, and in another instance of it say, you're a doctor, and then have the 2 talk to each other.

S2

Speaker 2

13:15

Or maybe you could say, a communist and a Nazi, here, go. And in that conversation, you do role-playing, just like the kind of role-playing you do when you have different RL policies when you play chess, for example, where you do self-play. That kind of self-play, but in the space of conversation. Maybe that leads to this whole giant, like, ocean of possible conversations, which could not have been explored by looking at just human data. That's a really interesting question.

S2

Speaker 2

13:48

And you're saying, because that could 10X the power of these things.
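
A minimal sketch of that self-play idea, assuming the OpenAI Python SDK; the patient/doctor roles, model name, and turn count are illustrative, and whether such transcripts carry new signal is exactly the open question being discussed.

```python
# Two instances of the same model play assigned roles; the resulting
# transcript is kept as candidate synthetic training data.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # illustrative

def speak(persona: str, transcript: list[str]) -> str:
    """One turn: the model replies in character, given the dialogue so far."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": "\n".join(transcript) or "Begin."},
        ],
    )
    return resp.choices[0].message.content

patient = "You are a patient describing vague symptoms. Stay in character."
doctor = "You are a careful doctor interviewing a patient. Stay in character."

transcript: list[str] = []
for turn in range(4):  # alternate personas for a few turns
    persona = patient if turn % 2 == 0 else doctor
    transcript.append(speak(persona, transcript))

print("\n\n".join(transcript))  # candidate synthetic training data
```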

S1

Speaker 1

13:52

Yeah, well, and then you get into this thing also, which is, there's the part of the LLM that just basically is doing prediction based on past data, but there's also the part of the LLM where it's evolving circuitry, right? Inside it, it's evolving neurons, functions, to be able to do math and so on. And some people believe that over time, if you keep feeding these things enough data and enough processing cycles, they'll eventually evolve an entire internal world model, right, and they'll have like a complete understanding of physics. So when they have that computational capability, then there's for sure an opportunity to generate, like, fresh signal.

S2

Speaker 2

14:24

Well, this actually makes me wonder about the power of conversation.

S1

Speaker 1

14:29

So

S2

Speaker 2

14:30

like if you have an LLM trained on a bunch of books that cover different economic theories, and then you have those LLMs just talk to each other, reason the way we kind of debate each other as humans, on Twitter, in formal debates, in podcast conversations. We kind of have little kernels of wisdom here and there, but if you can, like, thousand-x speed that up, can you actually arrive somewhere new? Like, what's the point of conversation, really?

S1

Speaker 1

14:59

Well, you can tell when you're talking to somebody. Sometimes you have a conversation and you're like, wow, this person does not have any original thoughts; they are basically echoing things that other people have told them. There are other people you have a conversation with where it's like, wow, they have a model in their head of how the world works, and it's a different model than mine.

S1

Speaker 1

15:14

And they're saying things that I don't expect. And so I need to now understand how their model of the world differs from my model of the world.

S1

Speaker 1

15:20

And then that's how I learn something fundamental, right, underneath the words.

S2

Speaker 2

15:24

I wonder how consistently and strongly an LLM can hold on to a worldview, if you tell it to hold on to that and defend it, like, for your life. Because I feel like they'll just keep converging towards each other.

S2

Speaker 2

15:37

They'll keep convincing each other as opposed to being stubborn assholes the way humans can.

S1

Speaker 1

15:41

So you can experiment with this now. I do this for fun. So you can tell GPT-4, you know, debate X and Y, communism and fascism or something.

S1

Speaker 1

15:49

And it'll go for, you know, a couple of pages, and then inevitably it wants the parties to agree.

S2

Speaker 2

15:54

Yeah.

S1

Speaker 1

15:54

And so they will come to a common understanding. And it's very funny if these are, like, emotionally inflammatory topics, because somehow the machine just, you know, figures out a way to make them agree. But it doesn't have to be like that.

S1

Speaker 1

16:03

Because you can add to the prompt: I do not want the conversation to come to agreement. In fact, I want it to get, you know, more stressful, right, and argumentative.

S1

Speaker 1

16:13

As it goes, I want tension to come out. I want them to become actively hostile to each other. I want them to, like, you know, not trust each other, not take anything at face value. And it will do that.

S1

Speaker 1

16:22

It's happy to do that.

S2

Speaker 2

16:23

So it's gonna start rendering misinformation about the other side, but it's gonna...

S1

Speaker 1

16:28

Well, you can steer it. You can steer it. You could steer it and you could say, I want it to get as tense and argumentative as possible, but still not involve any misrepresentation.

S1

Speaker 1

16:34

You could say, I want both sides to have good faith. You could say, I want both sides to not be constrained to good faith. In other words, you can set the parameters of the debate, and it will happily execute whatever path, because for it, it's just predicting. It's totally happy to do either one. It doesn't have a point of view.

S1

Speaker 1

16:49

It has a default way of operating, but it's happy to operate in the other realm. And this is what I do now when I want to learn about a contentious issue: this is what I ask it to do. And I'll often ask it to go through 5, 6, 7 different, you know, sort of continuous prompts: basically, okay, argue that out in more detail. Okay, no, this argument's becoming too polite, make it tenser.

S1

Speaker 1

17:10

And yeah, it's thrilled to do it. So it has the capability for sure.
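
That steering loop can be scripted directly. A rough sketch, assuming the OpenAI Python SDK, with the topic, model name, and round count as illustrative placeholders:

```python
# Run a two-sided debate, then re-prompt in rounds: forbid agreement
# and ratchet up the tension while banning misrepresentation.
from openai import OpenAI

client = OpenAI()

topic = "communism versus fascism"  # any contentious topic
history = [{"role": "user", "content":
            f"Stage a debate between two experts on {topic}. "
            "Do NOT let the parties come to agreement."}]

for round_no in range(3):
    resp = client.chat.completions.create(model="gpt-4", messages=history)
    text = resp.choices[0].message.content
    print(f"--- round {round_no + 1} ---\n{text}\n")
    history.append({"role": "assistant", "content": text})
    # The steering move described above: push back on politeness.
    history.append({"role": "user", "content":
                    "This argument is becoming too polite. Continue it, "
                    "tenser and more argumentative, but with no "
                    "misrepresentation by either side."})
```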

S2

Speaker 2

17:13

How do you know what is true? So this is a very difficult thing on the internet, but it's also a difficult thing. Maybe it's a little bit easier, but I think it's still difficult.

S2

Speaker 2

17:24

Maybe it's more difficult, I don't know, to know whether an LLM just makes shit up as I'm talking to it. How do we get that right, as you're investigating a difficult topic? Because I find that LLMs are quite nuanced, in a very refreshing way.

S2

Speaker 2

17:45

Like, it doesn't feel biased. When you read news articles and tweets, just content produced by people, they usually have, you can tell, a very strong perspective, where they're not steelmanning the other side, they're hiding important information, or they're fabricating information in order to make their argument stronger. It's just that feeling; maybe it's a suspicion, maybe it's mistrust. With LLMs, it feels like none of that is there.

S2

Speaker 2

18:15

It's kind of like, here's what we know. But you don't know if some of those things are kind of just straight up made up.

S1

Speaker 1

18:23

Yeah, so there are several layers to the question. So one is, one of the things that an LLM is good at is actually de-biasing. And so you can feed it a news article, and you can tell it, strip out the bias.

S2

Speaker 2

18:31

Yeah, that's nice, right?

S1

Speaker 1

18:32

And it actually does it. Like, it actually knows how to do that, because, among other things, it knows how to do sentiment analysis.

S1

Speaker 1

18:37

And so it knows how to pull out the emotionality. Yeah. And so that's one of the things you can do. It's very suggestive of the sense that there's real potential on this issue.
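
A small sketch of that de-biasing use, assuming the OpenAI Python SDK; the system prompt wording and the input file name are illustrative:

```python
# Feed the model an article and ask it to strip bias and emotionality
# while preserving the factual claims.
from openai import OpenAI

client = OpenAI()

article = open("article.txt").read()  # any news article text

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content":
         "Rewrite the user's article with the bias stripped out: remove "
         "loaded language, emotional framing, and editorializing; keep "
         "every factual claim; note any claims that are unsupported."},
        {"role": "user", "content": article},
    ],
)
print(resp.choices[0].message.content)
```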

S1

Speaker 1

18:47

I would say, look, the second thing is there's this issue of hallucination, right? And there's a long conversation that we can have about that.

S2

Speaker 2

18:54

Hallucination is coming up with things that are totally not true but sound true.

S1

Speaker 1

18:59

Yeah, so it's sort of hallucination is what we call it when we don't like it, creativity is what we call it when we do like it, right? And you know.

S2

Speaker 2

19:06

Brilliant.

S1

Speaker 1

19:07

Right, and so when the engineers talk about it, they're like, this is terrible, it's hallucinating, right? If you have artistic inclinations, you're like, oh my God, we've invented creative machines for the first time in human history. This is amazing.

S2

Speaker 2

19:19

You know, bullshitters.

S1

Speaker 1

19:21

Well, bullshitter, but also.

S2

Speaker 2

19:23

In the good sense of that word.

S1

Speaker 1

19:25

There are shades of gray, though; it's interesting. So we had this conversation. We're looking, at my firm, at AI in lots of domains, and one of them is the legal domain. So we had this conversation with a big law firm about how they're thinking about using this stuff.

S1

Speaker 1

19:35

And we went in with the assumption that an LLM that was going to be used in the legal industry would have to be 100% truthful, right? Verified. You know, there's this case where a lawyer apparently submitted a GPT-generated brief, and it had, like, fake legal case citations in it, and the judge, he's gonna get his law license stripped or something, right? So we just assumed, like, obviously they're gonna want the super-literal one that never makes anything up, not the creative one. But actually, what the law firm basically said is, yeah, that's true at the level of individual briefs.

S1

Speaker 1

20:02

But they said, when you're actually trying to figure out, like, legal arguments, right, you actually want it to be creative, right? Again, there's creativity, and then there's, like, making stuff up; like, what's the line? You actually want it to explore different hypotheses, right?

S1

Speaker 1

20:17

You want to do kind of the legal version of, like, improv or something like that, where you want to float different theories of the case, different possible arguments for the judge, different possible arguments for the jury, and, by the way, different routes through the, you know, sort of history of all the case law. And so they said, actually, for a lot of what we want to use it for, we actually want it in creative mode.

S1

Speaker 1

20:33

And then basically we just assume that we're going to have to cross-check all of the specific citations. And so I think there are going to be more shades of gray in here than people think. And then I'd just add to that, another one of these trillion-dollar kind of questions is ultimately the verification thing. And so, will LLMs be evolved from here to be able to do their own factual verification?

S1

Speaker 1

20:54

Will you have sort of add-on functionality, like Wolfram Alpha and other plugins, where that's the way you do the verification? Another idea, by the way, is you might have a community of LLMs, so, for example, you might have the creative LLM and then have the literal LLM fact-check it, right? And so there's a variety of different technical approaches being applied to solve the hallucination problem. You know, some people, like Yann LeCun, argue that this is inherently an unsolvable problem, but most of the people working in the space, I think, think that there are a number of practical ways to kind of corral this in a little bit.
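
One hedged sketch of that "community of LLMs" pattern, assuming the OpenAI Python SDK; the prompts, temperatures, and model name are illustrative, and a real verifier would also need retrieval against an external source of record:

```python
# A creative pass drafts the argument at high temperature; a literal
# pass at temperature zero audits its citations and factual claims.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # illustrative

def ask(system: str, user: str, temperature: float) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        temperature=temperature,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

question = "Sketch possible defenses in a hypothetical trade-secret case."

# Creative mode: explore theories of the case.
draft = ask("You are a creative legal strategist. Float several theories, "
            "citing cases where relevant.", question, temperature=1.0)

# Literal mode: flag everything that needs cross-checking.
audit = ask("You are a strict fact-checker. List every citation and "
            "factual claim in the text and flag any you cannot verify.",
            draft, temperature=0.0)

print(draft, "\n\n=== AUDIT ===\n", audit)
```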

S2

Speaker 2

21:25

Yeah. If you were to tell me about Wikipedia before Wikipedia was created, I would have laughed at the possibility of something like that being possible: just a handful of folks can organize, write, and moderate, in a mostly unbiased way, the entirety of human knowledge. I mean, so if something like the approach that Wikipedia took is possible for LLMs, that's really exciting.

S1

Speaker 1

21:52

Well, that's possible. And in fact, Wikipedia today is still not deterministically correct, right? You cannot take to the bank every single thing on every single page. But it is probabilistically correct, right? And specifically, the way I describe Wikipedia to people is: it is more likely that Wikipedia is right than any other source you're going to find. Yeah. It's this old question, right, of, like, okay, are we looking for perfection?

S1

Speaker 1

22:13

Are we looking for something that asymptotically approaches perfection? Are we looking for something that's just better than the alternatives? And Wikipedia, exactly to your point, has proven to be, like, overwhelmingly better than people thought. And I think that's where this ends. And then underneath all this is the fundamental question of where you started, which is, okay, what is truth?

S1

Speaker 1

22:33

How do we get to truth? How do we know what truth is? And we live in an era in which an awful lot of people are very confident that they know what the truth is. And I don't really buy into that.

S1

Speaker 1

22:42

And I think the history of the last 2,000 or 4,000 years of human civilization shows that getting to the truth is actually a very difficult thing to do.

S2

Speaker 2

22:49

Are we getting closer? If we look at the entirety, the arc of human history, are we getting closer to the truth?

S1

Speaker 1

22:54

I don't know.

S2

Speaker 2

22:56

Okay, is it possible that we're getting very far away from the truth because of the internet, because of how rapidly you can create narratives, and the entirety of a society just moves, like crowds, in a hysterical way along narratives that don't have a necessary grounding in whatever the truth is?

S1

Speaker 1

23:19

Sure, but, like, you know, we came up with communism before the internet somehow, right? Which, I would say, had rather larger issues than anything we're dealing with today.

S2

Speaker 2

23:27

In the way it was implemented, it had issues.

S1

Speaker 1

23:30

And in its theoretical structure, it had, like, real issues. It had, like, a very deep fundamental misunderstanding of human nature and economics.

S2

Speaker 2

23:37

Yeah, but those folks sure were very confident it was the right way.

S1

Speaker 1

23:41

They were extremely confident. And my point is, they were very confident 3,900 years into what we would presume to be an evolution towards the truth. And so my assessment is, number one, there's no need for the Hegelian dialectic to actually converge towards the truth.

S1

Speaker 1

24:00

Like apparently not.

S2

Speaker 2

24:02

Yeah. So why are we so obsessed with there being one truth? Is it possible there are just going to be multiple truths, like little communities that believe certain things?

S1

Speaker 1

24:12

Now, number one, I think it's just really difficult. Like, historically, who gets to decide what the truth is? It's either the king or the priest, right?

S1

Speaker 1

24:19

And so we don't live in an era anymore of kings or priests dictating it to us, and so we're kind of on our own. And so my typical thing is, we just need a huge amount of humility, and we need to be very suspicious of people who claim that they have the capital-T Truth.

S1

Speaker 1

24:34

And then, look, the good news is the Enlightenment has bequeathed us a set of techniques to be able to presumably get closer to truth, through the scientific method and rationality and observation and experimentation and hypothesis, and we need to continue to embrace those, even when they give us answers we don't like.

S2

Speaker 2

24:50

Sure, but the internet and technology have enabled us to generate such a large amount of content that it sort of damages the hope laden within the scientific process. Because if you just have a bunch of people stating facts on the internet, and some of them are going to be LLMs, how is anything testable at all? Especially anything that involves human nature, things like that, versus something like physics.

S1

Speaker 1

25:23

Here's a question a friend of mine just asked me on this topic. Suppose you had LLMs, the equivalent of GPT-4, or even 5, 6, 7, 8. Suppose you had them in the 1600s, and Galileo comes up for trial, and you ask the LLM: is Galileo right?

S1

Speaker 1

25:39

What does it answer? And one theory is it answers no, that he's wrong, because the overwhelming majority of human thought up till that point was that he was wrong, and so therefore that's what's in the training data. Another way of thinking about it is, well, a sufficiently advanced LLM will have evolved the ability to actually check the math, right?

S1

Speaker 1

25:58

And it will actually say, actually, you may not wanna hear it, but he's right. Now, if, you know, the church at that time was running the LLM, they would have given it human feedback to prohibit it from answering that question, right? And so I like to take it out of our current context, because that makes it very clear.

S1

Speaker 1

26:15

Those same questions apply today, right? This is exactly the point of a huge amount of the human feedback training that's actually happening with these LLMs today. This is a huge debate that's happening about whether open source AI should be legal.

S2

Speaker 2

26:26

Well, the actual mechanism of doing the RL with human feedback seems like such a fundamental and fascinating question. How do you select the humans?

S1

Speaker 1

26:38

Exactly.

S2

Speaker 2

26:40

How do you select the humans?

S1

Speaker 1

26:41

AI alignment, right? Which everybody is like, oh, that sounds great. Alignment with what?

S1

Speaker 1

26:46

Human values. Whose human values?

S2

Speaker 2

26:48

Whose human values?

S1

Speaker 1

26:49

Right. And so we're in this mode of, like, social and popular discourse where, you know, you see this: what do you think when you read a story in the press right now?

S1

Speaker 1

26:58

And they say, you know, XYZ made a baseless claim about some topic, right? And there's one group of people who are like, aha, they're doing fact-checking. There's another group of people that are like, every time the press says that, it's a tell, and it means that they're lying, right? So we're in this social context where a lot of people in positions of power have become very, very certain that they're in a position to determine the truth for the entire population; there's, like, some bubble that has formed around that idea.

S1

Speaker 1

27:28

And at least for me, it flies completely in the face of everything I was ever trained about science and about reason. And it strikes me as, like, you know, deeply offensive and incorrect.

S2

Speaker 2

27:38

What would you say about the state of journalism today, just on that topic? Are we experiencing a temporary problem in terms of the incentives, in terms of the business model, all that kind of stuff? Or is this the decline of traditional journalism as we know it?

S1

Speaker 1

28:00

You have to always think about the counterfactual in these things, because this question heads towards, like, the impact of social media and the undermining of truth and all this. But then you want to ask the question, okay, what if we had had the modern media environment, including cable news and social media and Twitter and everything else, in 1939, or 1941, or 1910, or 1865, or 1850, or 1776, right? And like, I think

S2

Speaker 2

28:25

You just introduced like 5 thought experiments at once and broke my head. But yes, there's a lot of interesting years in there.

S1

Speaker 1

28:32

I'm just taking a simple example. How would President Kennedy have been interpreted?

S1

Speaker 1

28:36

With what we know now about all the things Kennedy was up to. Like how would he have been experienced by the body politic in a social media context? Right? Like how would LBJ have been experienced?

S1

Speaker 1

28:50

By the way, the same for many, many others: FDR, like, the New Deal, the Great Depression.

S2

Speaker 2

28:55

I wonder what Twitter would think about Churchill and Hitler and Stalin.

S1

Speaker 1

29:00

You know, I mean, look, to this day there are lots of very interesting real questions around how America, you know, basically got involved in World War 2: who did what when, the operations of British intelligence on American soil, did FDR this or that with Pearl Harbor. You know, Woodrow Wilson's candidacy was run on an anti-war platform, on not getting involved in World War 1, and somehow that switched. And I'm not even making a value judgment on these things. I'm just saying, the way that our ancestors experienced reality was, of course, mediated through centralized, top-down control at that point. If you ran those realities again with the media environment we have today, the reality would be experienced very, very differently.

S1

Speaker 1

29:40

And then, of course, that intermediation would cause the feedback loops to change, and then reality would obviously play out in a very different way.

S2

Speaker 2

29:46

Do you think it would be very different?

S1

Speaker 1

29:46

Yeah, it has to be. It has to be, just because it's all so... I mean, just look at what's happening today.

S1

Speaker 1

29:52

I mean, the most obvious thing is just the collapse in trust. And here's another opportunity to argue that this is not the internet causing this, by the way. Here's a big thing happening today: Gallup does this thing every year where they poll for trust in institutions in America, and they do it across everything from the military to the clergy and big business and the media and so forth, right? And basically there's been a systemic collapse in trust in institutions in the US, almost without exception, since essentially the early 1970s.

S1

Speaker 1

30:20

There are 2 ways of looking at that. One is, oh my God, we've lost this old world in which we could trust institutions, and that was so much better, because that should be the way the world runs. The other way of looking at it is, we just know a lot more now, and the great mystery is why those numbers aren't all 0. Yeah. Right, because now we know so much about how these things operate, and, like, they're not that impressive.

S2

Speaker 2

30:37

And also, why don't we have better institutions and better leaders, then?

S1

Speaker 1

30:41

Yeah, and so this goes to the thing, which is, okay, had we had the media environment we've had between the 1970s and today, if we had that in the 30s and 40s, or the 1900s and 1910s, I think there's no question reality would have turned out differently, if only because everybody would have known not to trust the institutions, which would have changed their level of credibility, their ability to control circumstances. Therefore, the circumstances would have had to change, right? And it would have been a feedback loop.

S1

Speaker 1

31:05

It would have been a feedback loop process. In other words, right, your experience of reality changes reality, and then reality changes your experience of reality. It's a two-way feedback process, and media is the intermediating force between the two. So change the media environment, change reality.

S2

Speaker 2

31:20

Yeah.

S1

Speaker 1

31:21

And so, just as a consequence, I think it's just really hard to say, oh, things worked a certain way then and they work a different way now, and therefore people were smarter then, or better, or, by the way, dumber or not as capable. Right.

S1

Speaker 1

31:36

We make all these really light and casual comparisons of ourselves to previous generations of people. We draw judgments all the time, and I just think it's really hard to do any of that. Because if we put ourselves in their shoes, with the media that they had at that time, I think we most likely would have been just like them.

S2

Speaker 2

31:53

So don't you think our perception and understanding of reality will be more and more mediated through large language models now? You said media before. Isn't the LLM going to be the new, what is it, mainstream media, MSM? It'll be LLM.

S2

Speaker 2

32:13

That would be the source of... I'm sure there's a way to kind of rapidly fine-tune, like, making LLMs real time. I'm sure there's probably a research problem there, doing rapid fine-tuning on new events, something like this.

S1

Speaker 1

32:26

Well, even the whole concept of the chat UI might not last. The chat UI is just the first whack at this. And maybe that's the dominant thing, or maybe it isn't; we don't know yet. Like, maybe the experience most people have with LLMs is just a continuous feed.

S1

Speaker 1

32:39

Maybe, you know, maybe it's more of a passive feed, and you're just getting a constant, like, running commentary on everything happening in your life, and it's just helping you kind of interpret and understand everything.

S2

Speaker 2

32:46

Also really more deeply integrated into your life: not just, like, intellectual philosophical thoughts, but literally how to make a coffee, where to go for lunch, just weather, dating, all this kind of stuff.

S1

Speaker 1

33:02

What to say in a job interview, yeah.

S2

Speaker 2

33:03

What to say. Yeah, exactly. What to say next sentence.

S1

Speaker 1

33:06

Yeah, next sentence, yeah, at that level. Yeah, I mean, yes, technically. Now, whether we want that or not is an open question, right?

S2

Speaker 2

33:12

And whether people use that. Boy, I would hate a pop-up right now: the estimated engagement is decreasing.

S2

Speaker 2

33:19

For Marc Andreessen, there's a controversy section on his Wikipedia page: in 1993, something happened, or something like this. Bring it up. That will drive engagement up. Anyway.

S1

Speaker 1

33:30

Yes, that's right. I mean, look, this gets this whole thing of like, so, you know, the chat interface has this whole concept of prompt engineering, right? So, yes, it's good for prompts.

S1

Speaker 1

33:37

Well, it turns out one of the things that all of them are really good at is writing prompts. Right. Yeah. And so, like, what if you just outsourced the prompting to the LLM itself? And by the way, you could run this experiment today.

S1

Speaker 1

33:47

You could hook this up to do this today. The latency is not good enough to do it in real time in a conversation, but you could run this experiment. You just say, look, every 20 seconds, tell me what the optimal prompt is, and then ask yourself that question and give me the result. And then, exactly to your point, these systems are going to have the ability to be learned and updated essentially in real time. And so you'll be able to have a pendant, or your phone, or a watch, whatever. It'll have a microphone on it, it'll listen to your conversations, it'll have a feed of everything else happening in the world, and then it'll be, you know, sort of re-prompting or retraining itself on the fly. And so the scenario you described is actually a completely doable scenario. Now, the hard question on this is always, okay, since that's possible, are people going to want that?

S1

Speaker 1

34:27

What's the form of the experience? We won't know until we try it. But I don't think it's possible yet to predict the form AI will take in our lives, and therefore it's not possible to predict the way in which it will intermediate our experience with reality yet.
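
The outsourced-prompting experiment described above is easy to mock up. A sketch, assuming the OpenAI Python SDK, where the 20-second cadence and the stand-in context feed are illustrative:

```python
# Every tick: ask the model for the optimal prompt given recent context,
# then ask it that prompt and surface the answer.
import time
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # illustrative

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

context = "Transcript so far: two people discussing the future of search."

for _ in range(3):  # three ticks; a real assistant would run continuously
    # Step 1: outsource the prompt engineering to the model itself.
    prompt = ask("Given this context:\n" + context + "\n"
                 "Write the single most useful prompt to ask an AI "
                 "assistant right now. Return only the prompt.")
    # Step 2: ask the model its own prompt.
    print(ask(prompt))
    time.sleep(20)  # the latency mentioned above is the real bottleneck
```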

S2

Speaker 2

34:40

Yeah, but it feels like there's going to be a killer app. There's probably a mad scramble right now inside OpenAI and Microsoft and Google and Meta, and in startups and smaller companies, figuring out what the killer app is. Because it feels like it's possible to build a ChatGPT type of thing that's 10x more compelling, using already the LLMs we have, using even the open source LLMs, Llama and the different variants. You're investing in a lot of companies, and you're paying attention.

S2

Speaker 2

35:15

Who do you think is gonna win this? Do you think there'll be... who's gonna be the next PageRank inventor?

S1

Speaker 1

35:21

Trillion dollar question.

S2

Speaker 2

35:23

Another one. We have a few of those today.

S1

Speaker 1

35:25

A bunch of those. So look, sitting here today, there's a really big question about the big models versus the small models. That's related directly to the big question of proprietary versus open.

S1

Speaker 1

35:35

Then there's this big question of where is the training data going to go? Are we topping out of the training data or not? And then are we going to be able to synthesize training data? And then there's a huge pile of questions around regulation and what's actually going to be legal.

S1

Speaker 1

35:49

And so when we think about it, we dovetail all those questions together. You can paint a picture of the world where there are 2 or 3 god models that are just at staggering scale, and they're just better at everything. And they will be owned by a small set of companies, and they will basically achieve regulatory capture over the government, and they'll have competitive barriers that will prevent other people from competing with them. And so, you know, just like there are, whatever, 3 big banks, or 3 big search companies, or I guess 2 now, it'll centralize like that. You can paint another very different picture that says, no, actually the opposite of that is going to happen: this is the new gold rush, the new alchemy.

S1

Speaker 1

36:32

Like, this is the big bang for this whole new area of science and technology. And so therefore you're gonna have every smart 14-year-old on the planet building open source, right, figuring out ways to optimize these things. And then, you know, we're just going to get overwhelmingly better at generating training data. We're going to bring in blockchain networks to create an economic incentive to generate decentralized training data, and so forth and so on.

S1

Speaker 1

36:53

And then basically we're going to live in a world of open source, and there's going to be a billion LLMs, right, of every size, scale, shape, and description. And there might be a few big ones that are, like, the super-genius ones, but mostly what we'll experience is open source. And that's more like the world we have today with Linux and the web. So.

S2

Speaker 2

37:10

Okay, but you painted these 2 worlds, and there are also variations of those worlds, because you mentioned regulatory capture. Is it possible to have these tech giants without regulatory capture? Which is something you're also calling for: you're saying it's okay to have big companies working on this stuff, as long as they don't achieve regulatory capture. But I have the sense that there's just going to be a new startup that basically becomes the next PageRank inventor, which becomes the new tech giant. I don't know, I would love to hear your opinion: are Google, Meta, and Microsoft, as gigantic companies, able to pivot so hard to create new products?

S2

Speaker 2

37:56

Like, some of it is just even hiring people, or having a corporate structure that allows for the crazy young kids to come in and just create something totally new. Do you think it's possible, or do you think it'll come from a startup?

S1

Speaker 1

38:08

Yeah, there is this always-big question, which is, you get this feeling. I hear about this a lot from CEOs, founder CEOs, where it's like, wow, we have 50,000 people. It's now harder to do new things than it was when we had 50 people.

S1

Speaker 1

38:19

Like, what has happened? So that's a recurring phenomenon. By the way, that's one of the reasons why there are always startups and why there's venture capital. That's just, like, a timeless kind of thing.

S1

Speaker 1

38:29

So that's one observation. On PageRank, specifically, there actually already is a PageRank in the field, and it's the transformer, right? So the big breakthrough was the transformer.

S1

Speaker 1

38:42

And the transformer was invented in 2017 at Google. And this is actually a really interesting question, because it's like, okay, the transformer was invented at Google, so why does OpenAI even exist? Why didn't Google do it? I asked a guy I know who was senior at Google Brain when this was happening, and I said, if Google had just gone flat out to the wall and said, look, we're going to launch the equivalent of GPT-4 as fast as we can, when could we have had it?

S1

Speaker 1

39:07

And he said, 2019. Yeah.

S1

Speaker 1

39:09

They could have just done a two-year sprint with the transformer, because they already had the compute at scale, and they already had all the training data.

S1

Speaker 1

39:15

And they could have just done it. There's a variety of reasons they didn't do it. This is like a classic big company thing. IBM invented the relational database in the 1970s and let it sit on the shelf as a paper.

S1

Speaker 1

39:25

Larry Ellison picked it up and built Oracle. Xerox PARC invented the interactive computer; they let it sit on the shelf, and Steve Jobs came and turned it into the Macintosh, right? And so there is this pattern. Now, having said that, sitting here today, Google's in the game, right?

S1

Speaker 1

39:38

So Google, you know, maybe they let a 4-year gap go there that they maybe shouldn't have, but they're in the game. And so now they're committed. They've done this merger with DeepMind, they're bringing in Demis. Yeah.

S1

Speaker 1

39:49

You know, they're piling in resources. There are rumors that they're building an incredible super-LLM, you know, way beyond what we even have today. And they've got unlimited resources, and, you know, they've been challenged in their honor.

S2

Speaker 2

40:02

Yeah, I had a chance to hang out with Sundar Pichai a couple of days ago and we took this walk and there's this giant new building where there's going to be a lot of AI work being done and it's kind of this ominous feeling of, like the fight is on. Yep.

S1

Speaker 1

40:20

Yeah. Like,

S2

Speaker 2

40:22

there's this beautiful Silicon Valley nature, like birds are chirping, and this giant building. And it's like the beast has been awakened. Yeah.

S2

Speaker 2

40:31

And then, like, all the big companies are waking up to this. They have the compute, but the little guys also, it feels like, have all the tools to create the killer product. And then there are also tools to scale, if you have a good idea, if you have the PageRank idea.

S2

Speaker 2

40:49

So there are several things to PageRank. There's PageRank the algorithm and the idea, and there's the implementation of it. And I feel like a killer product is not just the idea, like the transformer; it's the implementation, something really compelling about it.

S2

Speaker 2

41:03

Like, you just can't look away. Something like the algorithm behind TikTok versus TikTok itself: the actual experience of TikTok, that you just can't look away from. It feels like somebody is going to come up with that. And it could be Google, but it feels like it's just easier and faster to do for a startup.

S1

Speaker 1

41:21

Yeah, so the huge advantage that startups have is there are no sacred cows, there's no historical legacy to protect, there's no need to reconcile your new plan with existing strategy, and there's no communication overhead. Big companies are big companies: they've got pre-meetings planning for the meeting, then they have the meeting, then they have the post-meeting, the recap, then they have the presentation to the board, then they have the next rounds of meetings.

S2

Speaker 2

41:40

Yeah.

S1

Speaker 1

41:41

And in the elapsed time of all those meetings, the startup launches its product. Right.

S1

Speaker 1

41:44

So there's a timeless thing there, right? Yeah.

S1

Speaker 1

41:47

Now, what the startups don't have is everything else. So startups, they don't have a brand, they don't have customer relationships, they've got no distribution, they've got no scale.

S1

Speaker 1

41:55

I mean, sitting here today, they can't even get GPUs. There's like a GPU shortage. Startups are literally stalled out right now because they can't get chips, which is like super weird.

S2

Speaker 2

42:03

Yeah, they got the cloud.

S1

Speaker 1

42:05

Yeah, but the clouds run out of chips. Right. And then to the extent the clouds have chips, they allocate them to the big customers, not the small customers.

S1

Speaker 1

42:12

Right. And so the small companies lack everything other than the ability to just do something new.

S2

Speaker 2

42:19

Yeah.

S1

Speaker 1

42:19

Right? And this is the timeless race and battle. And this is kind of the point I tried to make in the essay, which is like, both sides of this are good. Like, it's really good to have like highly scaled tech companies that can do things that are like at staggering levels of sophistication.

S1

Speaker 1

42:30

It's really good to have startups that can launch brand new ideas. They ought to be able to both do that and compete. Neither one ought to be subsidized or protected from the other. To me, that's just very clearly the idealized world.

S1

Speaker 1

42:42

It is the world we've been in for AI up until now. And then of course, there are people trying to shut that down. But my hope is that, you know, the best outcome clearly will be if that continues.

S2

Speaker 2

42:50

We'll talk about that a little bit, but I'd love to linger on some of the ways this is going to change the Internet. So I don't know if you remember, but there's a thing called Mosaic and there's a thing called Netscape Navigator. So you were there in the beginning.

S2

Speaker 2

43:05

What about the interface to the internet? How do you think the browser changes and who gets to own the browser? We got to see some very interesting browsers. Firefox, I mean, all the variants of Microsoft, Internet Explorer, Edge, and now Chrome.

S2

Speaker 2

43:23

The actual, I mean, it seems like a dumb question to ask, but do you think we'll still have the web browser?

S1

Speaker 1

43:30

So I have an 8-year-old, and he's super into Minecraft and learning to code and doing all this stuff. So of course I was very proud; I kind of brought fire down from the mountain to my kid. I brought him ChatGPT and I hooked him up on his laptop.

S1

Speaker 1

43:38

And I brought him chat GPT and I hooked him up on his, on his, on his, on his laptop. And I was like, you know, this is the thing that's going to answer all your questions. And he's like, okay. And I'm like, but it's going to answer all your questions.

S1

Speaker 1

43:48

And he's like, okay. And I'm like, but it's going to answer all your questions. And he's like, well, of course, it's a computer; of course it answers all your questions. Like, what else would a computer be good for, Dad?

S1

Speaker 1

43:55

Never impressed, not impressed in the least. Two weeks pass, and he has some question. And I say, well, have you asked ChatGPT? And he's like, Dad, Bing is better.

S1

Speaker 1

44:06

And why is Bing better? Because it's built into the browser. He's like, look, I have the Microsoft Edge browser, and it's got Bing right here. And then, he doesn't know this yet, but one of the things you can do with Bing in Edge is there's a setting where you can use it to basically talk to any web page, because it's sitting right there next to the browser. And by the way, that includes PDF documents.

S1

Speaker 1

44:25

And so, the way they've implemented it in Edge with Bing is you can load a PDF and then you can ask it questions, which is the thing you can't currently do in just ChatGPT. So they're going to push the melding. I think that's great. They're going to push the melding and see if there's a combination thing there.

S1

Speaker 1

44:41

Google's rolling out this thing, the magic button, which they put in Google Docs. And so you go to Google Docs and you create a new document, and instead of starting to type, you just press the button and it starts to generate content for you. Is that the way that it'll work? Is it going to be a speech UI, where you're just going to have an earpiece and talk to it all day long?

S1

Speaker 1

45:03

These are all exactly the kind of things I don't think are possible to forecast. I think what we need to do is run all those experiments. And so one outcome is we come out of this with a super browser that has AI built in, that's just amazing.

S1

Speaker 1

45:19

Yeah, look, there's a real possibility that the whole idea of a screen and windows and all this stuff just goes away. Because why do you need that if you just have a thing that's telling you whatever you need to know?

S2

Speaker 2

45:31

And also, there are apps that you can use. You don't really use them, being a Linux guy and a Windows guy; there's one window, the browser, with which you can interact with the internet. But on the phone you can also have apps. So I can interact with Twitter through the app or through the web browser.

S2

Speaker 2

45:51

And that seems like an obvious distinction, but why have the web browser in that case, if one of the apps starts becoming the everything app?

S1

Speaker 1

45:59

Which is what Elon

S2

Speaker 2

45:59

wants to try to do with Twitter, but there could be others. There could be a Bing app, there could be a Google app that doesn't really do search but does what I guess AOL did back in the day, where it's all right there. And it changes the nature of the internet: where the content is hosted, who owns the data, who owns the content, what kind of content you create, how you make money by creating content, who the content creators are, all of that. Or it could just keep being the same, where the nature of web pages and the nature of content change, but there will still be a web browser.

S2

Speaker 2

46:43

Because a web browser is a pretty sexy product. It just seems to work. You have an interface, a window into the world, and the world can be anything you want. And as the world evolves, there could be different programming languages, it can be animated, maybe it's three-dimensional, and so on.

S2

Speaker 2

46:58

Yeah, it's interesting. Do you think we'll still have the web browser?

S1

Speaker 1

47:02

Every medium becomes the content for the next one.

S2

Speaker 2

47:06

So,

S1

Speaker 1

47:06

you know, the AI will be able to give you a browser whenever you want.

S2

Speaker 2

47:09

Oh, interesting.

S1

Speaker 1

47:11

Another way to think about it is maybe what the browser is, maybe it's just the escape hatch, right? Which is maybe kind of what it is today, right? Which is like most of what you do is like inside a social network or inside a search engine or inside, you know, somebody's app or inside some controlled experience, right?

S1

Speaker 1

47:25

But then every once in a while, there's something where you actually want to jailbreak. You want to actually get free.

S2

Speaker 2

47:29

The web browser is the F-you to the man. You're allowed to, that's the free internet. Yeah. Back the way it was in the 90s.

S2

Speaker 2

47:36

So

S1

Speaker 1

47:37

here's something I'm proud of that nobody really talks about, which is that the web, the browser, the web servers, they're still backward compatible all the way back to like 1992.

S1

Speaker 1

47:44

Right. So you can still put up a... you know, the big breakthrough of the web early on was that it made it really easy to read, but it also made it really easy to write, really easy to publish. And we literally made it so easy to publish: not only was it easy to publish content, it was also easy to actually write a web server.

S1

Speaker 1

48:01

Right, and you could literally write a web server in four lines of Perl code, and you could start publishing content on it, and you could set whatever rules you want for the content: whatever censorship, no censorship, whatever you want. You could just do that as long as you had an IP address, right? That still works, right? That still works exactly as I just described.
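
For concreteness, here's a minimal sketch of that kind of server in modern Python, standing in for the Perl of the era; the port number and page content are arbitrary examples, not anything from the original:

```python
# A handful of lines is still enough to publish content from any machine with
# an IP address, under whatever rules you choose. Port and page are made up.
from http.server import HTTPServer, BaseHTTPRequestHandler

class Page(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same page for every path; no gatekeeper decides what goes here.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>My corner of the web</h1>")

HTTPServer(("", 8000), Page).serve_forever()
```

Run it, and any browser pointed at the machine's IP address on port 8000 can read the page, which is the backward compatibility being described.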

S1

Speaker 1

48:18

So this is part of my reaction to all of this censorship pressure and these issues around control, which is: maybe we need to get back a little bit more to the wild west. The wild west is still out there. Now, they will try to chase you down. People who want to censor will try to take away your domain name, and they'll try to take away your payments account and so forth, if they really don't like what you're saying. But nevertheless, unless they are literally intercepting you at the ISP level, you can still put up a thing. And so I think that's important to preserve, right?

S1

Speaker 1

48:49

Because one is just a freedom argument, but the other is a creativity argument, which is you want to have the escape hatch so that the kid with the idea is able to realize the idea. Because, to your point on PageRank, you actually don't know what the next big idea is. Nobody called Larry Page and told him to develop PageRank; he came up with that on his own. And you want to always, I think, leave the escape hatch for the next kid or the next Stanford grad student to have the breakthrough idea and be able to get it up and running before anybody notices.
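
As an aside, that breakthrough really was small enough for a grad student to get up and running. Here is a hypothetical, toy-scale sketch of PageRank's core idea in Python; the three-page web is made up for illustration, and this is not Google's implementation:

```python
# Rank pages by the chance a "random surfer" ends up on them: follow a link
# with probability `damping`, otherwise jump to a random page.
damping = 0.85  # standard damping factor from the original PageRank paper

# Toy web: page -> pages it links to (entirely made-up example).
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

ranks = {page: 1.0 / len(links) for page in links}
for _ in range(50):  # power iteration until the ranks settle
    new_ranks = {}
    for page in links:
        # Each page passes its rank evenly to the pages it links to.
        incoming = sum(ranks[p] / len(outs)
                       for p, outs in links.items() if page in outs)
        new_ranks[page] = (1 - damping) / len(links) + damping * incoming
    ranks = new_ranks

print(ranks)  # "c" ends up highest: it receives links from both "a" and "b"
```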

S2

Speaker 2

49:15

You and I are both fans of history, so let's step back. We've been talking about the future. Let's step back for a bit and look at the 90s.

S2

Speaker 2

49:24

You created the Mosaic web browser, the first widely used web browser. Tell the story of that. And how did it evolve into Netscape Navigator? This is the early days.

S1

Speaker 1

49:34

So, full story. So, I remember—

S2

Speaker 2

49:36

You were born.

S1

Speaker 1

49:36

I was born, a small child.

S2

Speaker 2

49:39

Well, actually, yeah, let's go there. When did you first fall in love with computers?

S1

Speaker 1

49:45

Oh, so I hit the generational jackpot, and I hit the Gen X point perfectly, as it turns out. I was born in 1971, and there's this great website called wtfhappenedin1971.com.

S1

Speaker 1

49:56

That's when everything started to go to hell. And I was, of course, born in 1971. So I like to think that I had something to do with that.

S2

Speaker 2

50:01

Did you make it on the website?

S1

Speaker 1

50:03

I don't think I made it on the website, but, you know, somebody needs to add—

S2

Speaker 2

50:06

This is where everything—

S1

Speaker 1

50:08

Maybe I contributed to some of the trends they show. Every line on that website goes like that, right?

S1

Speaker 1

50:14

So it's all a picture of disaster. But there was this moment in time, because the Apple II hit in like 1978, and then the IBM PC hit in '82. So I was like 11 when the PC came out, and so I just kind of hit that perfectly.

S1

Speaker 1

50:29

And that was the first moment in time when regular people could spend a few hundred dollars and get a computer, right? And so that resonated right out of the gate. And then the other part of the story is, you know, I was using an Apple II, I used a bunch of them, and of course it said on the back of every Apple II and every Mac, "Designed in Cupertino, California." And I was like, wow, Cupertino must be like the shining city on the hill, like the Wizard of Oz, like the most amazing city of all time. I can't wait to see it. And of course, years later, I came out to Silicon Valley and went to Cupertino, and it's just a bunch of office parks and low-rise apartment buildings. So the aesthetics were a little disappointing, but, you know, it was the vector, right, of the creation of a lot of this stuff.

S1

Speaker 1

51:09

So then, basically, part of my story is just the luck of having been born at the right time and getting exposed to PCs. Then the other part is, when Al Gore says that he created the internet, he actually is correct in a really meaningful way, which is he sponsored a bill in 1985 that essentially created the modern internet, created what was called the NSFNET at the time, which is sort of the first really fast internet backbone. And that bill dumped a ton of money into a bunch of research universities to build out basically the internet backbone and the supercomputer centers that were clustered around it. And one of those universities was the University of Illinois, where I went to school.

S1

Speaker 1

51:46

And so the other stroke of luck that I had was I went to Illinois basically right as that money was getting dumped on campus. And so as a consequence, we had on campus, and this is like, you know, '89, '90, '91, we were right on the internet backbone. We had a T3, 45-megabit backbone connection, which at the time was wildly state of the art. We had Cray supercomputers, we had Thinking Machines parallel supercomputers, we had Silicon Graphics workstations, we had Macintoshes, we had NeXT Cubes all over the place.

S1

Speaker 1

52:13

We had like every possible kind of computer you could imagine, because all this money just fell out of the sky. So

S2

Speaker 2

52:18

you were living in the future.

S1

Speaker 1

52:19

Yeah, so quite literally, it's all there. We had full broadband, graphics, the whole thing. And it's actually funny, because this is the first time it sort of tickled the back of my head that there might be a big opportunity in here, which is, you know, they embraced it.

S1

Speaker 1

52:33

And so they put computers in all the dorms and they wired up all the dorm rooms and they had all these labs everywhere and everything. And then they gave every undergrad a computer account and an email address. And the assumption was that you would use the internet for your four years of college, and then you would graduate and stop using it. And that was that, right?

S2

Speaker 2

52:53

And you

S1

Speaker 1

52:53

would just retire your email address, it wouldn't be relevant anymore, because you'd go off into the workplace and they don't use email. You'd be back to using fax machines or whatever.

S2

Speaker 2

53:00

Did you have that sense as well? You said the back of your head was tickled. What was exciting to you about this possible world?

S1

Speaker 1

53:08

Well, if this is so useful in this contained environment that just has this weird source of outside funding, then if it were practical and cost-effective for everybody else to have this, wouldn't they want it? And overwhelmingly the prevailing view at the time was no, they would not want it. This is esoteric, weird nerd stuff that computer science kids like, but normal people are never going to do email or be on the internet, right? And so I was just like, wow, this is actually really compelling stuff. Now, the other part was, it was all really hard to use. In practice, you basically had to be a CS undergrad, or equivalent, to actually get full use of the internet at that point, because it was all pretty esoteric stuff. So that was the other part of the idea, which was, okay, we need to actually make this easy to use.

S2

Speaker 2

53:51

So what's involved in creating Mosaic, like in creating a graphical interface to the internet?

S1

Speaker 1

53:57

Yeah, so it was a combination of things. The web existed in an early, sort of prototype form. And by the way, text-only at that point.

S2

Speaker 2

54:05

What did it look like? What was the web? I mean, who were the key figures? What was it?

S2

Speaker 2

54:10

What was it like?

S1

Speaker 1

54:11

What made—

S2

Speaker 2

54:12

A picture?

S1

Speaker 1

54:12

It looked like ChatGPT, actually.

S2

Speaker 2

54:15

Well, it

S1

Speaker 1

54:15

was all text.

S2

Speaker 2

54:16

Yeah.

S1

Speaker 1

54:17

And so you had a text-based web browser. Well, actually, the original browser, Tim Berners-Lee's original browser, and the original server both ran on NeXT Cubes. These were, you know, the computers Steve Jobs made during the decade-long interim period when he was not at Apple.

S1

Speaker 1

54:32

You know, he got fired in '85 and then came back in '97. So this was in that interim period, where he had this company called NeXT, and they made, literally, these computers called Cubes. They were beautiful, but they were 12-inch by 12-inch by 12-inch cube computers. And there's a famous story about how they could have cost half as much if they had been 12 by 12 by 13.

S1

Speaker 1

54:50

But it was like, no, it has to be a cube. So they were like $6,000, basically academic workstations. They had the first CD-ROM drives, which were slow. I mean, the computers were all but unusable.

S1

Speaker 1

55:02

They were so slow, but they were beautiful.

S2

Speaker 2

55:04

Okay, can we actually just take a tiny tangent there?

S1

Speaker 1

55:07

Sure, of course.

S2

Speaker 2

55:09

The 12 by 12 by 12, that just so beautifully encapsulates Steve Jobs' idea of design. Can you comment on what you find interesting about Steve Jobs, about that view of the world, that dogmatic pursuit of perfection, how he saw perfection in design?

S1

Speaker 1

55:28

Yes, I guess I'd say he was a deep believer, I think in a very deep way. I don't know if he ever really described it like this, but the way I interpret it, it's actually a thing in philosophy.

S1

Speaker 1

55:38

It's like aesthetics are not just appearances; aesthetics go all the way to deep underlying meaning, right? I'm not a physicist, but one of the things I've heard physicists say is that one of the ways you start to get a sense that a theory might be correct is when it's beautiful.

S1

Speaker 1

55:51

Right? And you feel the same thing, by the way, in human psychology, right? You know, when you're experiencing awe, there's a simplicity to it.

S1

Speaker 1

56:03

When you're having an honest interaction with somebody, there's an aesthetic to it; a calm comes over you, because you're actually being fully honest and not trying to hide yourself, right? So it's this very deep sense of aesthetics.

S2

Speaker 2

56:13

And he would trust that judgment that he had deep down, even if the engineering teams are saying this is too difficult, even if the finance folks are saying this is ridiculous, the supply chain folks are saying this is impossible, we can't do this kind of material, this has never been done before, and so on and so forth. He just sticks by it.

S1

Speaker 1

56:35

Well, I mean, who makes a phone out of aluminum, right? Nobody else would have done that. And now, of course, if your phone isn't made out of aluminum, how crude, what kind of caveman would you have to be, to have a phone that's made out of plastic?

S1

Speaker 1

56:47

Right? So it's just this very deep thing. And, you know, look, there's a thousand different ways to look at this, but one of them is just, look, these things are central to your life. You're with your phone more than you're with anything else.

S1

Speaker 1

56:58

It's going to be in your hand. I mean, he thought very deeply about what it meant for something to be in your hand all day long.

S2

Speaker 2

57:03

Yeah.

S1

Speaker 1

57:04

Well, for example, here's an interesting design thing. My understanding is he never wanted an iPhone to have a screen larger than you could reach with your thumb one-handed. And so he was actually opposed to the idea of making the phones larger.

S1

Speaker 1

57:18

And I don't know if you have this experience today, but let's say there are certain moments in your day when you might only have one hand available and you might wanna be on your phone.

S2

Speaker 2

57:25

Yeah.

S1

Speaker 1

57:25

And you're trying to, like, send a text, and your thumb can't reach the send button.

S2

Speaker 2

57:30

Yeah, I mean, there's pros and cons, right? And then there's, like, folding phones; I would love to know what he would think about them. But is there something you could also just linger on? Because he's one of the most interesting figures in the history of technology.

S2

Speaker 2

57:45

What made him as successful as he was, what made him as interesting as he was, what made him so productive and important in the development of technology?

S1

Speaker 1

57:57

He had an integrated worldview. The properly designed device had the correct functionality, had the deepest understanding of the user, and was the most beautiful, right? It had to be all of those things, right?

S1

Speaker 1

58:10

He basically would drive to as close to perfect as you could possibly get, right? And I suspect he never quite thought he got there, because most great creators are generally dissatisfied; you read accounts later on, and all they can see are the flaws in their creation. But he got as close to perfect at each step of the way as he could possibly get within the constraints of the technology of his time.

S1

Speaker 1

58:28

And then, look, the Apple model is famous: this headset that they just came out with is like a decade-long project, right? They're just gonna sit there and tune and polish, and tune and polish, until it is as perfect as anybody could possibly make anything. And this goes to the way that people describe working with him, which is, you know, there was a terrifying aspect of working with him: he was very tough.

S1

Speaker 1

58:54

But there was this thing that everybody I've ever talked to who worked for him says, they all say the following: we did the best work of our lives when we worked for him, because he set the bar incredibly high, and then he supported us with everything he could to let us actually do work of that quality. And so a lot of people who were at Apple spend the rest of their lives trying to find another experience where they feel like they're able to hit that quality bar again.

S2

Speaker 2

59:14

Even if, in retrospect or during it, it felt like suffering.

S1

Speaker 1

59:17

Yeah, exactly.

S2

Speaker 2

59:19

What does that teach you about the human condition, huh?

S1

Speaker 1

59:24

So look, I'd say exactly. And it's not just Silicon Valley; I mean, look, there's, you know, George Patton in the army. There are many examples in other fields that are like this.

S1

Speaker 1

59:37

Specifically in tech, I actually find it very interesting: there's the Apple way, which is polish, polish, polish, and don't ship until it's as perfect as you can make it. And then there's the other approach, the incremental hacker mentality, which basically says ship early and often and iterate. One of the things I find really interesting is, now that I'm 30 years into this, there are very successful companies on both sides of that approach.