Max Tegmark: Life 3.0 | Lex Fridman Podcast #1

1 hour 22 minutes 57 seconds

S1

Speaker 1

00:00

As part of MIT course 6.S099, Artificial General Intelligence, I've gotten the chance to sit down with Max Tegmark. He is a professor here at MIT. He's a physicist who has spent a large part of his career studying the mysteries of our cosmological universe, but he's also studied and delved into the beneficial possibilities and the existential risks of artificial intelligence. Amongst many other things, he's the co-founder of the Future of Life Institute and the author of 2 books, both of which I highly recommend.

S1

Speaker 1

00:35

First, Our Mathematical Universe; second is Life 3.0. He's truly an out-of-the-box thinker and a fun personality, so I really enjoyed talking to him. If you'd like to see more of these videos in the future, please subscribe and also click the little bell icon to make sure you don't miss any videos. Also, find me on Twitter and LinkedIn, and visit agi.mit.edu if you want to watch other lectures or conversations like this one.

S1

Speaker 1

01:01

Better yet, go read Max's book, Life 3.0. Chapter 7, on goals, is my favorite. It's really where philosophy and engineering come together, and it opens with a quote by Dostoevsky: "The mystery of human existence lies not in just staying alive, but in finding something to live for." Lastly, I believe that every failure rewards us with an opportunity to learn.

S1

Speaker 1

01:26

In that sense, I've been very fortunate to fail in so many new and exciting ways. And this conversation was no different. I've learned about something called radio frequency interference, RFI. Look it up.

S1

Speaker 1

01:40

Apparently music and conversations from local radio stations can bleed into the audio that you're recording in such a way that it almost completely ruins that audio. It's an exceptionally difficult sound source to remove. So I've gotten the opportunity to learn how to avoid RFI in the future during recording sessions. I've also gotten the opportunity to learn how to use Adobe Audition and iZotope RX 6 to do some noise reduction, some audio repair.

S1

Speaker 1

02:11

Of course, this is an exceptionally difficult noise to remove. I am an engineer. I'm not an audio engineer. Neither is anybody else in our group, but we did our best.

S1

Speaker 1

02:21

Nevertheless, I thank you for your patience and I hope you're still able to enjoy this conversation.

S2

Speaker 2

02:27

Do you think there's intelligent life out there in the universe? Let's open up with an easy question.

S3

Speaker 3

02:33

I have a minority view here actually. When I give public lectures, I often ask for a show of hands who thinks there's intelligent life out there somewhere else, and almost everyone puts their hands up. When I ask why, they'll be like, oh, there's so many galaxies out there, there's got to be.

S3

Speaker 3

02:51

But I'm a numbers nerd, right? So when you look more carefully at it, it's not so clear at all. When we talk about our universe, first of all, we don't mean all of space. We actually mean, I don't know, you can throw me the universe if you want, it's behind you there.

S3

Speaker 3

03:07

We simply mean the spherical region of space from which light has had time to reach us so far during the 13.8 billion years since our Big Bang. There's more space out there, but this is what we call our universe, because that's all we have access to. So is there intelligent life here that's gotten to the point of building telescopes and computers? My guess is no, actually.

S3

Speaker 3

03:34

The probability of it happening on any given planet is some number we don't know, and what we do know is that the number can't be super high, because there are over a billion Earth-like planets in the Milky Way galaxy alone, many of which are billions of years older than Earth. And aside from some UFO believers, there isn't much evidence that any super advanced civilization has come here at all. So that's the famous Fermi paradox. And then if you work the numbers, what you find is that if you have no clue what the probability is of getting life on a given planet, so it could be 10 to the minus 10, 10 to the minus 20, or 10 to the minus 2, any power of 10 is sort of equally likely if you want to be really open-minded, that translates into it being equally likely that our nearest neighbor is 10 to the 16 meters away, 10 to the 17 meters away, or 10 to the 18. By the time you get much below 10 to the 16 already, we pretty much know there is nothing else that close. And when you get beyond that-

S2

Speaker 2

04:46

Because they would have discovered us.

S3

Speaker 3

04:48

Yeah, they would have discovered us long ago, or if they're really close, we would have probably noted some engineering projects that they're doing. And if it's beyond 10 to the 26 meters, that's already outside of here. So my guess is actually that we are the only life in here that's gotten to the point of building advanced tech, which I think puts a lot of responsibility on our shoulders not to screw up. You know, I think people who take for granted that it's okay for us to screw up, have an accidental nuclear war, or go extinct somehow, because there's a sort of Star Trek-like situation out there where some other life forms are going to come and bail us out and it doesn't matter as much...

S3

Speaker 3

05:30

I think they're lulling us into a false sense of security. I think it's much more prudent to say, let's be really grateful for this amazing opportunity we've had and make the best of it, just in case it is down to us.

S2

Speaker 2

05:44

So from a physics perspective, do you think intelligent life... so it's unique from a sort of statistical view of the size of the universe, but from the basic matter of the universe, how difficult is it for intelligent life to come about, the kind of advanced-tech-building life? Is it implied in your statement that it's really difficult to create something like a human species?

S3

Speaker 3

06:07

Well, I think what we know is that going from no life to having life that can do our level of tech, and then going beyond that to actually settling our whole universe with life, there's some major roadblock there, some great filter, as it's sometimes called, which is tough to get through. That roadblock is either behind us or in front of us.

S3

Speaker 3

06:38

I'm hoping very much that it's behind us. I'm super excited every time we get a new report from NASA saying they failed to find any life on Mars. Like, yes, awesome!

S3

Speaker 3

06:49

Because that suggests that the hard part, maybe it was getting the first ribosome or some very low-level kind of stepping stone, is behind us, so that we're home free. Because if that's true, then the future is really only limited by our own imagination. It would be much suckier if it turns out that this level of life is kind of a dime a dozen, but there is some other problem, like as soon as a civilization gets advanced technology, within a hundred years they get into some stupid fight with themselves and poof!

S3

Speaker 3

07:20

That would be a bummer.

S2

Speaker 2

07:22

So you've explored the mysteries of the cosmological universe, the 1 that's between us today. I think you've also begun to explore the other universe, which is sort of the mystery, the mysterious universe of the mind, of intelligence, of intelligent life. So is there a common thread between your interest and the way you think about space and intelligence?

S3

Speaker 3

07:48

Oh yeah, when I was a teenager, I was already very fascinated by the biggest questions and I felt that the 2 biggest mysteries of all in science were our universe out there and our universe in here. So it's quite natural after having spent a quarter of a century on my career thinking a lot about this 1, and now indulging in the luxury of doing research on this 1. It's just so cool.

S3

Speaker 3

08:17

I feel the time is ripe now for greatly deepening our understanding of this.

S2

Speaker 2

08:25

Just start exploring this 1.

S3

Speaker 3

08:26

Yeah, because I think a lot of people view intelligence as something mysterious that can only exist in biological organisms like us, and therefore dismiss all talk about artificial general intelligence as science fiction. But from my perspective as a physicist, I am a blob of quarks and electrons moving around in a certain pattern and processing information in certain ways. This is also a blob of quarks and electrons.

S3

Speaker 3

08:53

I'm not smarter than the water bottle because I'm made of different kinds of quarks. I'm made of up quarks and down quarks, the exact same kind as this. There's no secret sauce, I think, in me. It's all about the pattern of the information processing.

S3

Speaker 3

09:09

And this means that there's no law of physics saying that we can't create technology which can help us by being incredibly intelligent and help us crack mysteries that we couldn't. In other words, I think we've really only seen the tip of the intelligence iceberg so far.

S2

Speaker 2

09:26

Yeah, so the perceptronium. Yeah. So you coined this amazing term, it's a hypothetical state of matter, sort of thinking from a physics perspective what is the kind of matter that can help, as you're saying, subjective experience emerge, consciousness emerge.

S2

Speaker 2

09:44

So how do you think about consciousness from this physics perspective?

S3

Speaker 3

09:49

Very good question. Again, I think many people have underestimated our ability to make progress on this by convincing themselves it's hopeless, because somehow we're missing some ingredient that we need, some new consciousness particle or whatever. I happen to think that we're not missing anything, and that the interesting thing about consciousness, what gives us this amazing subjective experience of colors and sounds and emotions and so on, is rather something at the higher level, about the patterns of information processing.

S3

Speaker 3

10:33

And that's why I like to think about this idea of perceptronium. What does it mean for an arbitrary physical system to be conscious in terms of what its particles are doing or its information is doing? I don't think, I hate the carbon chauvinism, you know, this attitude you have to be made of carbon atoms to be smart or conscious.

S2

Speaker 2

10:53

Something about the information processing this kind of matter performs. Yeah, and

S3

Speaker 3

10:58

you know, you can see I have my favorite equations here describing various fundamental aspects of the world. I think 1 day maybe someone who's watching this will come up with the equations that information processing has to satisfy to be conscious. I'm quite convinced there is a big discovery to be made there, because let's face it, we know that some information processing is conscious, because we are conscious, but we also know that a lot of information processing is not conscious.

S3

Speaker 3

11:28

Like most of the information processing happening in your brain right now is not conscious. There are like 10 megabytes per second coming in, even just through your visual system. You're not conscious about your heartbeat regulation or most things. Even if I just ask you to read what it says here, you look at it and then, oh, now you know what it said.

S3

Speaker 3

11:48

But you're not aware of how the computation actually happened. Your consciousness is like the CEO that got an email at the end with the final answer. So what is it that makes a difference? I think that's both a great science mystery, we're actually studying it a little bit in my lab here at MIT, but I also think it's just a really urgent question to answer.

S3

Speaker 3

12:12

For starters, I mean, if you're an emergency room doctor and you have an unresponsive patient coming in, wouldn't it be great if, in addition to having a CT scanner, you had a consciousness scanner that could figure out whether this person is actually having locked-in syndrome or is actually comatose? And in the future, imagine if we build robots, or machines, that we can have really good conversations with, which I think is most likely to happen, right? Wouldn't you want to know if your home helper robot is actually experiencing anything, or is just like a zombie?

S3

Speaker 3

12:53

What would you prefer? Would you prefer that it's actually unconscious, so that you don't have to feel guilty about switching it off or giving it boring chores?

S2

Speaker 2

13:02

Well, certainly we would prefer, I would prefer the appearance of consciousness. But the question is whether the appearance of consciousness is different than consciousness itself. And to ask that as a question: do you think we need to understand what consciousness is, and solve the hard problem of consciousness, in order to build something like an AGI system?

S3

Speaker 3

13:28

No, I don't think that. And I think we will probably be able to build things even if we don't answer that question. But if we want to make sure that what happens is a good thing, we better solve it first.

S3

Speaker 3

13:40

So it's a wonderful controversy you're raising there, where you have basically 3 points of view about the hard problem. So there are 2 different points of view that both conclude that the hard problem of consciousness is BS. On 1 hand, you have some people like Daniel Dennett who say consciousness is just BS because consciousness is the same thing as intelligence. There's no difference.

S3

Speaker 3

14:06

So anything which acts conscious is conscious, just like we are. And then there are also a lot of people, including many top AI researchers I know, who say, oh, consciousness is just bullshit, because of course machines can never be conscious. They're always going to be zombies, so you never have to feel guilty about how you treat them.

S3

Speaker 3

14:27

And then there's a third group of people, including Giulio Tononi, for example, and Christof Koch, and a number of others. I would put myself also in this middle camp, who say that actually some information processing is conscious and some is not, so let's find the equation which can be used to determine which it is. I think we've just been a little bit lazy, kind of running away from this problem for a long time. It's been almost taboo to even mention the C word in a lot of circles. But we should stop making excuses. This is a science question, and there are ways we can even test any theory that makes predictions for this.

S3

Speaker 3

15:12

And coming back to this helper robot, I mean, you said you'd want your helper robot to certainly act conscious and treat you like, have conversations with you and stuff. I think so. But would you feel a little bit creeped out if you realized that it was just a glossed-up tape recorder, you know, that was just a zombie and sort of faking emotion? Would you prefer that it actually had an experience, or would you prefer that it's actually not experiencing anything, so you don't have to feel guilty about what you do to it?

S2

Speaker 2

15:42

It's such a difficult question because It's like when you're in a relationship and you say, well, I love you, and the other person says, I love you back. It's like asking, well, do they really love you back, or are they just saying they love you back? Don't you really want them to actually love you?

S2

Speaker 2

16:01

It's hard to really know the difference between everything seeming like there's consciousness present, there's intelligence present, there's affection, passion, love, and it actually being there. I'm not sure.

S3

Speaker 3

16:17

Can I ask you a question about this? To make it a bit more pointed. Mass General Hospital is right across the river, right?

S3

Speaker 3

16:23

Suppose you're going in for a medical procedure and they're like, you know, for anesthesia what we're going to do is we're going to give you muscle relaxants so you won't be able to move. And you're going to feel excruciating pain during the whole surgery, but you won't be able to do anything about it. But then we're going to give you this drug that erases your memory of it. Would you be cool about that?

S3

Speaker 3

16:45

What's the difference that you're conscious about it or not if there's no behavioral change, right?

S2

Speaker 2

16:51

Right. That's a really clear way to put it. Yeah, it feels like in that sense, experiencing it is a valuable quality. So actually being able to have subjective experiences, at least in that case, is valuable.

S3

Speaker 3

17:09

And I think we humans have a little bit of a bad track record also of making these self-serving arguments that other entities aren't conscious. You know, people often say, oh, these animals can't feel pain. It's okay to boil lobsters because we asked them if it hurt and they didn't say anything.

S3

Speaker 3

17:25

And now there was just a paper out saying lobsters do feel pain when you boil them, and they're banning it in Switzerland. And we did this with slaves, too, often, and said, oh, they don't mind, they aren't conscious, or women don't have souls, or whatever. So I'm a little bit nervous when I hear people just take as an axiom that machines can't have experience ever.

S3

Speaker 3

17:48

I think this is just a really fascinating science question, what it is. Let's research it and try to figure out what it is that makes the difference between unconscious intelligent behavior and conscious intelligent behavior.

S2

Speaker 2

18:01

So if you think of the Boston Dynamics humanoid robot being pushed around with a broom, it starts pushing on this consciousness question. So let me ask: do you think an AGI system, like a few neuroscientists believe, needs to have a physical embodiment? It needs to have a body, or something like a body?

S3

Speaker 3

18:25

No. I don't think so. You mean to have a conscious experience?

S2

Speaker 2

18:30

To have consciousness.

S3

Speaker 3

18:33

I do think it helps a lot to have a physical embodiment to learn the kind of things about the world that are important to us humans, for sure. But I don't think the physical embodiment is necessary after you've learned it, to just have the experience. Think about when you're dreaming, right?

S3

Speaker 3

18:51

Your eyes are closed, you're not getting any sensory input, you're not behaving or moving in any way, but there's still an experience there, right? And so clearly the experience that you have when you see something cool in your dreams isn't coming from your eyes; it's just the information processing itself in your brain, which is that experience, right?

S2

Speaker 2

19:10

But let me put it another way; this comes from neuroscience. The reason you want to have a body, something like a physical system, is because you want to be able to preserve something. In order to have a self, you could argue, you need to have some kind of embodiment of self to want to preserve.

S3

Speaker 3

19:38

Well, now we're getting a little bit into anthropomorphizing things, maybe, talking about self-preservation instincts. We are evolved organisms, right? So Darwinian evolution endowed us and other evolved organisms with a self-preservation instinct, because those that didn't have those self-preservation genes got cleaned out of the gene pool.

S3

Speaker 3

20:02

But if you build an artificial general intelligence, the mind space that you can design is much, much larger than just the specific subset of minds that can evolve. So an AGI mind doesn't necessarily have to have any self-preservation instinct. It also doesn't necessarily have to be as individualistic as us. For example, we are also very afraid of death.

S3

Speaker 3

20:27

You know, suppose you could back yourself up every 5 minutes and then your airplane is about to crash; you're like, shucks, I'm going to lose the last 5 minutes of experiences since my last cloud backup. Dang! It's not as big a deal. Or if we could just copy experiences between our minds easily, which we could easily do if we were silicon-based, then maybe we would feel a little bit more like a hive mind, actually. So I don't think we should take for granted at all that AGI will have to have any of those sort of competitive alpha-male instincts.

S3

Speaker 3

21:07

On the other hand, this is really interesting, because I think some people go too far and say, of course we don't have to have any concerns either that advanced AI will have those instincts, because we can build anything we want. There's a very nice set of arguments going back to Steve Omohundro and Nick Bostrom and others, just pointing out that when we build machines, we normally build them with some kind of goal, you know: win this chess game, drive this car safely, or whatever. And as soon as you put a goal into a machine, especially if it's a kind of open-ended goal and the machine is very intelligent, it'll break that down into a bunch of sub-goals. And 1 of those will almost always be self-preservation, because if it breaks or dies in the process, it's not going to accomplish the goal.

S3

Speaker 3

21:55

Right? Like, suppose you have a little robot and you tell it to go down to Star Market here and get you some food and cook you an Italian dinner, you know, and then someone mugs it and tries to break it on the way. That robot has an incentive to not get destroyed and to defend itself or run away, because otherwise it's going to fail at cooking your dinner. It's not afraid of death, but it really wants to complete the dinner-cooking goal.

S3

Speaker 3

22:22

So it will have a self-preservation instinct. Yeah.
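
[Editor's note: a toy caricature (an editor's sketch, not Tegmark's) of the dinner-robot argument above: a planner with no notion of death still prefers plans in which it survives, simply because destroyed robots don't finish cooking. All names and probabilities below are invented.]

```python
# The robot's only goal is "dinner gets cooked"; survival enters only as a
# means to that end, yet the chosen plan is the self-preserving one.

def expected_goal_value(plan):
    # A plan delivers dinner only if the robot survives long enough to cook.
    return plan["p_survive"] * plan["p_cook_dinner"]

plans = [
    {"name": "ignore the mugger", "p_survive": 0.2, "p_cook_dinner": 0.9},
    {"name": "run away and reroute", "p_survive": 0.9, "p_cook_dinner": 0.8},
]

best = max(plans, key=expected_goal_value)
print(best["name"])  # "run away and reroute": self-preservation as a sub-goal
```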

S2

Speaker 2

22:24

Continue being a functional agent. Yeah. Somehow.

S3

Speaker 3

22:28

And similarly, if you give any kind of more ambitious goal to an AGI, it's very likely it will want to acquire more resources so it can do that better. And it's exactly from those sorts of sub-goals that we might not have intended that some of the concerns about AGI safety come: you give it some goal which seems completely harmless, and then, before you realize it, it's also trying to do these other things which you didn't want it to do, and it's maybe smarter than us.

S3

Speaker 3

23:00

So, it's fascinating.

S2

Speaker 2

23:00

And let me pause, just because I, in a very kind of human-centric way, see fear of death as a valuable motivator. You think that's an artifact of evolution? That's the kind of mind space evolution created, where we're sort of almost obsessed about self-preservation at some kind of genetic level. You don't think that's necessary, to be afraid of death?

S2

Speaker 2

23:29

So not just a kind of sub-goal of self-preservation just so you can keep doing the thing, but more fundamentally, so to have the finite thing, like this ends for you at some point.

S3

Speaker 3

23:43

Interesting. Do I think it's necessary for what precisely?

S2

Speaker 2

23:47

For intelligence, but also for consciousness. So for those, for both, do you think really like a finite death and the fear of it is important?

S3

Speaker 3

24:01

So before I can answer, before we can agree on whether it's necessary for intelligence or for consciousness, we should be clear on how we define those 2 words because a lot of really smart people define them in very different ways. I was on this panel with AI experts and they couldn't agree on how to define intelligence even. So I define intelligence simply as the ability to accomplish complex goals.

S3

Speaker 3

24:25

I like the broad definition, because again, I don't want to be a carbon chauvinist. And in that case, no, it certainly doesn't require fear of death. I would say AlphaGo, AlphaZero is quite intelligent. I don't think AlphaZero has any fear of being turned off, because it doesn't understand the concept of it even. And similarly with consciousness, I mean, you can certainly imagine a very simple kind of experience. If certain plants have any kind of experience, I don't think they're very afraid of dying, because there's nothing they can do about it anyway, so there wasn't that much value in it.

S3

Speaker 3

25:03

But more seriously, I think if you ask not just about being conscious, but about having what we might call an exciting life, where you feel passion and really appreciate the things... Maybe there, perhaps, it does help having a backdrop that, hey, it's finite. Let's make the most of this, let's live to the fullest. If you knew you were going to live forever, do you think you would change your...

S2

Speaker 2

25:37

Yeah, I mean, from some perspective it would be an incredibly boring life, living forever. So in the sort of loose, subjective terms that you used, of something exciting that other humans would understand, yeah, it seems that the finiteness of it is important.

S3

Speaker 3

25:56

Well, the good news I have for you then is based on what we understand about cosmology, everything in our universe is ultimately probably finite, although...

S2

Speaker 2

26:07

Big crunch or big, what's the infinite expansion?

S3

Speaker 3

26:11

Yeah, we could have a big chill or a big crunch or a big rip or the big snap or death bubbles. All of them are more than a billion years away, so we certainly have vastly more time than our ancestors thought, but it's still pretty hard to squeeze in an infinite number of compute cycles, even though there are some loopholes that just might be possible. But I think some people like to say that you should live as if you're about to die in 5 years or so, and that's sort of optimal.

S3

Speaker 3

26:47

Maybe it's a good assumption that we should build our civilization as if it's all finite, to be on the safe side.

S2

Speaker 2

26:55

Right, exactly. So you mentioned defining intelligence as the ability to accomplish complex goals. Where would you draw a line, or how would you try to define human-level intelligence and superhuman-level intelligence?

S2

Speaker 2

27:10

Where does consciousness fit into that definition?

S3

Speaker 3

27:13

No, consciousness does not come into this definition. So I think of intelligence as a spectrum, and there are very many different kinds of goals you can have. You can have a goal to be a good chess player, a good Go player, a good car driver, a good investor, a good poet, et cetera.

S3

Speaker 3

27:31

So intelligence, by its very nature, isn't something you can measure as 1 number, some overall goodness. No, no. There are some people who are better at this, some people who are better at that. Right now we have machines that are much better than us at some very narrow tasks, like multiplying large numbers fast, memorizing large databases, playing chess, playing Go, and soon driving cars.

S3

Speaker 3

27:57

But there's still no machine that can match a human child in general intelligence. But artificial general intelligence, AGI, the name of your course, of course, that is by its very definition the quest to build a machine that can do everything as well as we can. That's the old holy grail of AI, going back to its inception in the 60s. If that ever happens, of course, I think it's going to be the biggest transition in the history of life on Earth.

S3

Speaker 3

28:29

But the big impact doesn't necessarily have to wait until machines are better than us at knitting. The really big change doesn't come exactly at the moment they're better than us at everything. The first big change is when they start becoming better than us at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then the really whopping change comes when they become better than us at AI research.

S3

Speaker 3

29:01

Because right now, the timescale of AI research is limited by the human research and development cycle of years typically. How long does it take from 1 release of some software or iPhone or whatever to the next? But once Google can replace 40,000 engineers by 40,000 equivalent pieces of software or whatever, then there's no reason that has to be years. It can be in principle much faster.

S3

Speaker 3

29:31

And the timescale of future progress in AI, and all of science and technology, will be driven by machines, not humans. So it's this simple point which gives rise to this incredibly fun controversy about whether there can be an intelligence explosion, the so-called singularity, as Vernor Vinge called it. The idea was articulated by I.J. Good, obviously, way back in the 50s, but you can see Alan Turing and others thought about it even earlier.
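
[Editor's note: a toy model, not from the conversation, of why machine-speed R&D suggests an explosion: if each generation multiplies capability by a fixed factor, and the next development cycle shrinks in proportion to capability, the total time to reach any capability level converges geometrically. All parameters below are invented.]

```python
def years_until(target, capability=1.0, gain=1.5, first_cycle=2.0):
    """Sum R&D generation times; each cycle takes first_cycle / capability years."""
    t = 0.0
    while capability < target:
        t += first_cycle / capability  # faster researchers -> shorter cycles
        capability *= gain             # each generation improves the next
    return t

# With these made-up numbers, going from 1x to 1000x takes only about three
# times the first cycle: the series converges, which is the "explosion" intuition.
print(round(years_until(1000.0), 2))
```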

S3

Speaker 3

30:07

You asked me what exactly would I define human level at. So the glib answer is to say something which is better than us at all cognitive tasks, better than any human at all cognitive tasks. But the really interesting bar I think goes a little bit lower than that actually. It's when they can, when they're better than us at AI programming and general learning so that they can, if they want to, get better than us at anything by just studying up.

S2

Speaker 2

30:37

So there, better is a key word, and better is toward this kind of spectrum of the complexity of goals it's able to accomplish. And that's certainly a very clear definition of human level. So there's...

S2

Speaker 2

30:53

It's almost like a sea that's rising. You could do more and more and more things. It's a nice geographic visual. It's a really nice way to put it.

S2

Speaker 2

30:59

So there's some peaks that... And there's an ocean level elevating and you solve more and more problems. But just kind of to take a pause, and we took a bunch of questions in a lot of social networks, and a bunch of people asked a sort of a slightly different direction on creativity, on things that perhaps aren't a peak. Human beings are flawed, and perhaps better means having contradiction, being flawed in some way.

S2

Speaker 2

31:30

So let me sort of start easy, first of all. So you have a lot of cool equations. Let me ask, what's your favorite equation, first of all? I know they're all like your children, but which 1 is that?

S3

Speaker 3

31:43

This is the Schrödinger equation. It's the master key of quantum mechanics. So in the micro world, this equation can calculate everything to do with atoms, molecules, and all the way up.
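
[Editor's note: for reference, the equation he's pointing to, the time-dependent Schrödinger equation, is:]

```latex
% The state |psi(t)> evolves under the Hamiltonian operator H,
% which encodes the system's total energy.
i\hbar \frac{\partial}{\partial t} \lvert \psi(t) \rangle = \hat{H} \lvert \psi(t) \rangle
```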

S2

Speaker 2

31:58

Yeah, so okay, so quantum mechanics is certainly a beautiful, mysterious formulation of our world. So I'd like to sort of ask you, just as an example, it perhaps doesn't have the same beauty as physics does, but in mathematics: Andrew Wiles, who proved Fermat's Last Theorem. I just saw this recently and it kind of caught my eye a little bit. This was 358 years after it was conjectured. It's a very simple formulation. Everybody tried to prove it, everybody failed.

S2

Speaker 2

32:33

And so here's this guy who comes along and eventually proves it, and then fails to prove it, and then proves it again in '94.

S2

Speaker 2

32:41

And he said, like, the moment when everything connected into place, in an interview he said: It was so indescribably beautiful. That moment when you finally realize the connecting piece of 2 conjectures.

S2

Speaker 2

32:54

It was so simple and so elegant. I couldn't understand how I'd missed it, and I just stared at it in disbelief for 20 minutes. Then during the day I walked around the department, and I'd keep coming back to my desk looking to see if it was still there. It was still there. I couldn't contain myself, I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much. So that particular moment... And it kind of made me think of, what would it take? And I think we have all been there at small levels. Let me ask, have you had a moment like that in your life, where you just had an idea, it's like, wow, yes?

S3

Speaker 3

33:39

I wouldn't mention myself in the same breath as Andrew Wiles, but I've certainly had a number of aha moments when I realized something very cool about physics that completely made my head explode. In fact, some of my favorite discoveries, I later realized had been discovered earlier by someone who sometimes got quite famous for it.

S3

Speaker 3

34:03

So it's too late for me to even publish it, but that doesn't diminish in any way the emotional experience you have when you realize it.

S2

Speaker 2

34:12

So that wow moment, that was yours. What do you think it takes for an intelligent system, an AGI system, an AI system, to have a moment like that?

S3

Speaker 3

34:25

It's a tricky question, because there are actually 2 parts to it, right? 1 of them is: can it accomplish that proof? Can it prove that you can never write a to the n plus b to the n equals c to the n for all integers, et cetera, when n is bigger than 2?
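
[Editor's note: the statement in question, Fermat's Last Theorem, proved by Andrew Wiles in 1994:]

```latex
% For any integer n > 2, there are no positive integers a, b, c satisfying:
a^{n} + b^{n} = c^{n}
```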

S3

Speaker 3

34:49

That's simply a question about intelligence. Can you build machines that are that intelligent? And I think by the time we get a machine that can independently come up with that level of proof, we're probably quite close to AGI.

S3

Speaker 3

35:03

The second question is a question about consciousness. How likely is it that such a machine would actually have any experience at all, as opposed to just being like a zombie? Would we expect it to have some sort of emotional response to this or anything at all akin to human emotion where when it accomplishes its machine goal, it views it as somehow something very positive and sublime and deeply meaningful. I would certainly hope that if in the future we do create machines that are our peers or even our descendants, I would certainly hope that they do have this sort of sublime appreciation of life.

S3

Speaker 3

35:55

In a way, my absolute worst nightmare would be that at some point in the future, the distant future, maybe our cosmos is teeming with all this post-biological life doing all this seemingly cool stuff, and maybe the last humans, by the time our species eventually fizzles out, will be like, well, that's okay, because we're so proud of our descendants here, and look at all the... My worst nightmare is that we haven't solved the consciousness problem, and we haven't realized that these are all zombies.

S3

Speaker 3

36:32

They're not aware of anything any more than the tape recorder has any kind of experience. So the whole thing has just become a play for empty benches. That would be like the ultimate zombie apocalypse.

S3

Speaker 3

36:45

I would much rather in that case that we have these beings which can really appreciate how amazing it is.

S2

Speaker 2

36:56

And in that picture, what would be the role of creativity? A few people asked about creativity. When you think about intelligence... I mean, certainly the story you told at the beginning of your book involved creating movies and so on.

S2

Speaker 2

37:11

Yeah. Sort of making money. You know, you can make a lot of money in our modern world with music and movies, so if you're an intelligent system, you may want to get good at that. But that's not necessarily what I mean by creativity. Is it important, on that landscape of complex goals where the sea is rising, for there to be something creative? Or am I being very human-centric in thinking creativity is somehow special relative to intelligence?

S3

Speaker 3

37:41

My hunch is that we should think of creativity simply as an aspect of intelligence. And we have to be very careful with human vanity. We have this tendency, very often, as soon as machines can do something, to try to diminish it and say, oh, but that's not like real intelligence, you know, because they're not creative, or this, or that.

S3

Speaker 3

38:08

The other thing is, if we ask ourselves to write down a definition of what we actually mean by being creative, what we mean by what Andrew Wiles did there, for example, don't we often mean that someone takes a very unexpected leap?

S3

Speaker 3

38:25

It's not like taking 573 and multiplying it by 224 by just a step of straightforward, cookbook-like rules, right? You can maybe make a connection between 2 things that people have never thought were connected, or something like that. I think this is an aspect of intelligence, and it's actually 1 of the most important aspects of it.

S3

Speaker 3

38:53

Maybe the reason we humans tend to be better at it than traditional computers is because it's something that comes more naturally if you're a neural network than if you're a traditional logic-gate-based computer machine. We physically have all these connections, and if you activate here, activate here, activate here, ping! My hunch is that if we ever build a machine where you could just give it the task: hey, you know, I just realized I want to travel around the world this month instead; can you teach my AGI course for me? And it's like, okay, I'll do it. And it does everything that you would have done, and improvises and stuff.

S3

Speaker 3

39:39

That would, in my mind, involve a lot of creativity.

S2

Speaker 2

39:43

Yeah, so that's actually a beautiful way to put it. I think we do try to grasp at the definition of intelligence as everything we don't understand how to build. So we as humans try to find things that we have and machines don't have.

S2

Speaker 2

40:01

Maybe creativity is just 1 of the things, 1 of the words we use to describe that. That's a really interesting way to put it.

S3

Speaker 3

40:07

I don't think we need to be that defensive. I don't think anything good comes out of saying, oh, we're somehow special, you know. Contrariwise, there are many examples in history where trying to pretend that we're somehow superior to all other intelligent beings has led to pretty bad results, right?

S3

Speaker 3

40:35

Nazi Germany, they said that they were somehow superior to other people. Today we still do a lot of cruelty to animals by saying that we're so superior somehow and they can't feel pain. Slavery was justified by the same kind of weak arguments. And I don't think if we actually go ahead and build artificial general intelligence that can do things better than us, I don't think we should try to found our self-worth on some sort of bogus claims of superiority in terms of our intelligence.

S3

Speaker 3

41:11

I think we should instead find our calling and the meaning of life from the experiences that we have. I can have very meaningful experiences even if there are other people who are smarter than me. When I go to a faculty meeting here and I'm talking about something and then I suddenly realize, oh boy, he has a Nobel Prize, he has a Nobel Prize, he has a Nobel Prize. I don't have 1.

S3

Speaker 3

41:40

Does that make me enjoy life any less, or enjoy talking to those people less? Of course not. And contrariwise, I feel very honored and privileged to get to interact with other very intelligent beings that are better than me at a lot of stuff. So I don't think there's any reason why we can't have the same approach with intelligent machines.

S2

Speaker 2

42:05

That's a really interesting... So people don't often think about that. They think about when there's going...

S2

Speaker 2

42:10

If there's machines that are more intelligent, you naturally think that that's not going to be a beneficial type of intelligence. You don't realize it could be, you know, like peers with Nobel Prizes that would be just fun to talk with. And they might be clever about certain topics and you can have fun having a few drinks with them. So...

S3

Speaker 3

42:32

Well, also, you know, another example we can all relate to of why it doesn't have to be a terrible thing to be in the presence of people who are even smarter than us all around: when you and I were both 2 years old, I mean, our parents were much more intelligent than us, right? It worked out okay, because their goals were aligned with our goals.

S3

Speaker 3

42:53

And that, I think, is really the number 1 key issue we have to solve: the value alignment problem. Exactly. Because people who see too many Hollywood movies with lousy science-fiction plot lines, they worry about the wrong thing, right? They worry about the machine suddenly turning evil.

S3

Speaker 3

43:16

It's not malice that's the issue; the concern is competence. Right. By definition, intelligence makes you very competent. If you have a more intelligent Go-playing computer playing against a less intelligent 1, and we define intelligence as the ability to accomplish the goal of winning, it's going to be the more intelligent 1 that wins. And if you have a human and then you have an AGI that's more intelligent in all ways and they have different goals, guess who's going to get their way, right? So I was just reading about this particular rhinoceros species that was driven extinct just a few years ago.

S3

Speaker 3

43:59

And a bummer: I was looking at this cute picture of a mommy rhinoceros with its child, you know. And why did we humans drive it to extinction? It wasn't because we were evil rhino haters as a whole; it was just because our goals weren't aligned with those of the rhinoceros, and it didn't work out so well for the rhinoceros, because we were more intelligent. So I think it's just so important that if we ever do build AGI, before we unleash anything, we have to make sure that it learns to understand our goals, it adopts our goals, and it retains those goals.

S2

Speaker 2

44:37

So the cool, interesting problem there is us as human beings trying to formulate our values. So, you know, you could think of the United States Constitution as a way that people sat down, at the time a bunch of white men, which is a good example, I should say, and formulated the goals for this country. A lot of people agree that those goals actually held up pretty well. That's an interesting formulation of values, and it failed miserably in other ways.

S2

Speaker 2

45:09

So for the value alignment problem and the solution to it, we have to be able to put on paper or in a program human values. How difficult do you think that is?

S3

Speaker 3

45:22

Very. But it's so important that we really have to give it our best. And it's difficult for 2 separate reasons. There's the technical value-alignment problem of figuring out just how to make machines understand our goals, adopt them, and retain them.

S3

Speaker 3

45:40

And then there's this separate part of it, the philosophical part: whose values, anyway? And since it's not like we have any great consensus on this planet on values, what mechanism should we create, then, to aggregate and decide, okay, what's a good compromise? That second discussion can't just be left to tech nerds like myself, right? And if we refuse to talk about it, and then AGI gets built, who's going to be actually making the decision about whose values? It's going to be a bunch of dudes in some tech company, right?

S3

Speaker 3

46:08

It's going to be a bunch of dudes in some tech company, right? And are they necessarily so representative of all of humankind that we want to just entrust it to them. Are they even uniquely qualified

S2

Speaker 2

46:22

to

S3

Speaker 3

46:22

speak to future human happiness just because they're good at programming AI? I'd much rather have this be a really inclusive conversation.

S2

Speaker 2

46:30

But do you think it's possible? So you paint a beautiful vision that includes cultural diversity and various perspectives on discussing rights, freedoms, human dignity. But how hard is it to come to that consensus? It's certainly a really important thing that we should all try to do, but do you think it's feasible?

S3

Speaker 3

46:54

I think there's no better way to guarantee failure than to refuse to talk about it or refuse to try. And I also think it's a really bad strategy to say, okay, let's first have a discussion for a long time. And then once we've reached complete consensus, then we'll try to load it into some machine.

S3

Speaker 3

47:13

No, we shouldn't let perfect be the enemy of good. Instead, we should start with the kindergarten ethics that pretty much everybody agrees on and put that into our machines now. We're not even doing that. Anyone who builds a passenger aircraft wants it to never, under any circumstances, fly into a building or a mountain, right? Yet the September 11 hijackers were able to do that. And even more embarrassingly, you know, Andreas Lubitz, this depressed Germanwings pilot, when he flew his passenger jet into the Alps, killing over a hundred people, he just told the autopilot to do it. He told the freaking computer to change the altitude to a hundred meters. And even though it had the GPS maps, everything, the computer was like, okay. So we should take those very basic values, where the problem is not that we don't agree, the problem is just that we've been too lazy to try to put them into our machines, and make sure that from now on, airplanes, which all have computers in them, will just refuse to do something like that.

S3

Speaker 3

48:19

Go into safe mode, maybe lock the cockpit door, and go to the nearest airport. And there's so much other technology in our world now where it's becoming quite timely to put in some sort of very basic values like this. Even in cars, we've had enough vehicle terrorism attacks by now, where people have driven trucks and vans into pedestrians, that it is not at all a crazy idea to just have that hardwired into the car.

S3

Speaker 3

48:48

Because, yeah, there's always going to be people who, for some reason, want to harm others. But most of those people don't have the technical expertise to figure out how to work around something like that. So if the car just won't do it, it helps.

S3

Speaker 3

49:01

So let's start there.

S2

Speaker 2

49:02

So there's a lot of, that's a great point, so not chasing perfect. There's a lot of things that most of the world agrees on.

S3

Speaker 3

49:10

Yeah, let's start there.

S2

Speaker 2

49:11

Let's start there.

S3

Speaker 3

49:13

And then once we start there, we'll also get into the habit of having these kinds of conversations about, okay, what else should we put in here? And have these discussions. It should be a gradual process, then.

S2

Speaker 2

49:23

Great. So, but that also means describing these things and describing it to a machine. So 1 thing, we had a few conversations with Stephen Wolfram. I'm not sure if you're familiar with Stephen.

S3

Speaker 3

49:36

Oh yeah, I know him quite well.

S2

Speaker 2

49:38

So he works with a bunch of things, but, you know, cellular automata, these simple computable things, these computation systems. And he kind of mentioned that we probably already have, within these systems, something that's AGI, meaning we just don't know it because we can't talk to it.

S2

Speaker 2

50:00

So, to try to at least form a question out of this: I think it's an interesting idea that we might have intelligent systems, but we don't know how to describe something to them, and they can't communicate with us. I know you're doing a little bit of work in explainable AI, trying to get AI to explain itself. So what are your thoughts on natural language processing, or some kind of other communication? How does the AI explain something to us?

S2

Speaker 2

50:30

How do we explain something to it, to machines? Or do you think of it differently?

S3

Speaker 3

50:35

So there are 2 separate parts to your question there. 1 of them has to do with communication, which is super interesting, and we'll get to that in a sec. The other is whether we already have AGI, but we just haven't noticed it.

S3

Speaker 3

50:51

There I beg to differ. I don't think there's anything in any cellular automaton, or the internet itself, or whatever, that has artificial general intelligence, in that it can really do everything we humans can do, better. I think when that happens, we will very soon notice, and it'll probably be in a very, very big way. But for the second part, though...

S2

Speaker 2

51:19

Wait, can I answer? Sorry. So, because you have this beautiful way of formulating consciousness as information processing, you can think of intelligence as information processing, and you can think of the entire universe as these particles and these systems roaming around that have this information processing power. You don't think there is something with the power to process information in the way that we human beings do that's out there, that needs to be sort of connected to? It seems a little bit philosophical, perhaps, but there's something compelling to the idea that the power is already there, and that the focus should be more on being able to communicate with it.

S3

Speaker 3

52:07

Well, I agree that in a certain sense the hardware processing power is already out there, because our universe itself, you can think of it as being a computer already, right? It's constantly computing how to evolve the water waves in the river Charles, and how to move the air molecules around. Seth Lloyd, my colleague here, has pointed out that you can even, in a very rigorous way, think of our entire universe as being a quantum computer.

S3

Speaker 3

52:35

It's pretty clear that our universe supports this amazing processing power, because within this physics computer that we live in, we can even build actual laptops and stuff. So clearly the power is there. It's just that most of the compute power that nature has is, in my opinion, kind of wasted on boring stuff, like simulating yet another ocean wave somewhere where no 1 is even looking. So in a sense, what life does, what we are doing when we build computers, is re-channeling all this compute that nature is doing anyway into doing things that are more interesting than just yet another ocean wave, you know. Let's do something cool here.

S3

Speaker 3

53:13

So the raw hardware power is there, for sure. And even just computing what's going to happen for the next 5 seconds in this water bottle, you know, takes a ridiculous amount of compute if you do it on a human-built computer. This water bottle just did it. But that does not mean that this water bottle has AGI, because AGI means it should also be able to, like, have written my book, done this interview.

S3

Speaker 3

53:39

Yes. And I don't think it's just communication problems. I don't think it can do it.

S2

Speaker 2

53:46

Although Buddhists say, when they watch the water, that there is some depth and beauty in nature that they can communicate with.

S3

Speaker 3

53:54

Communication is also very important, though, because, I mean, look, part of my job is being a teacher. And I know some very intelligent professors, even, who just have a bit of a hard time communicating. They come up with all these brilliant ideas, but to communicate with somebody else, you have to also be able to simulate their own mind.

S2

Speaker 2

54:16

Yes. Empathy.

S3

Speaker 3

54:18

Build a model of their mind well enough that you can say things that they will understand. And that's quite difficult. And that's why it's so frustrating today.

S3

Speaker 3

54:28

If you have a computer that makes some cancer diagnosis and you ask it, well, why are you saying I should have this surgery? And if it can only reply, I was trained on 5 terabytes of data and this is my diagnosis, boop, boop, beep, beep. It doesn't really instill a lot of confidence, right? So I think we have a lot of work to do on communication there.

S2

Speaker 2

54:54

So what kind of... I think you're doing a little bit of work in explainable AI. What do you think are the most promising avenues? Is it mostly about sort of the Alexa problem, of natural language processing, of being able to actually use human-interpretable methods of communication? So, being able to talk to a system and have it talk back to you? Or are there some more fundamental problems to be solved?

S3

Speaker 3

55:18

I think it's all of the above. The natural language processing is obviously important, but there are also more nerdy fundamental problems. Like if you take...

S3

Speaker 3

55:30

You play chess?

S2

Speaker 2

55:31

Of course, I'm Russian. I have to.

S3

Speaker 3

55:33

You speak Russian?

S2

Speaker 2

55:34

Yes, I speak Russian.

S3

Speaker 3

55:34

Great, I didn't know.

S2

Speaker 2

55:38

When did you learn Russian?

S3

Speaker 3

55:39

I speak Russian very badly. I'm only an autodidact. I bought a book, Teach Yourself Russian, and read a lot, but it was very difficult. That's why I say it's bad.

S2

Speaker 2

56:00

Ah, the actual games? No.

S3

Speaker 3

56:02

Check it out. Some of them are just mind-blowing. Really beautiful.

S3

Speaker 3

56:09

And if you ask, how did it do that? You go talk to Demis Hassabis and others from DeepMind, and all they'll ultimately be able to give you is big tables of numbers, matrices that define the neural network. And you can stare at these numbers until your face turns blue, and you're not going to understand much about why it made that move. And even if you have natural language processing that can tell you in human language about 0.57 and 0.28, it's still not going to really help.
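
[Editor's note: a minimal illustration of the "big tables of numbers" point: a tiny feed-forward policy network is nothing but matrices and a nonlinearity. The weights below are random placeholders, not a trained system; inspecting them reveals little about why a particular move scores highest.]

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 361))  # 19x19 Go board, flattened -> 64 hidden units
W2 = rng.standard_normal((361, 64))  # hidden units -> one score per move

def policy(board_vec):
    """Score every move. Staring at W1 and W2 says little about 'why'."""
    hidden = np.maximum(0.0, W1 @ board_vec)  # ReLU nonlinearity
    return W2 @ hidden

move_scores = policy(rng.standard_normal(361))
print(int(np.argmax(move_scores)))  # the chosen move, with no human-readable reason
```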

S3

Speaker 3

56:43

So I think there's a whole spectrum of fun challenges that are involved in taking a computation that does intelligent things and transforming it into something equally good, equally intelligent, but that's more understandable. I think that's really valuable, because as we put machines in charge of ever more infrastructure in our world, the power grid, the trading on the stock market, weapon systems and so on, it's absolutely crucial that we can trust these AIs to do what we want. Trust really comes from understanding, in a very fundamental way. And that's why I'm working on this, because I think if we're going to have some hope of ensuring that machines have adopted our goals, and that they're going to retain them, that kind of trust needs to be based on things you can actually understand, preferably even prove theorems on.

S3

Speaker 3

57:44

Even with a self-driving car, right? If someone just tells you it's been trained on tons of data and it never crashed, it's less reassuring than if someone actually has a proof. Maybe it's a computer verified proof, but still it says that under no circumstances is this car just going to swerve into oncoming traffic.

S2

Speaker 2

58:02

And that kind of information helps to build trust and helps build the alignment of goals. At least awareness that your goals, your values are aligned.

S3

Speaker 3

58:12

And I think even in the very short term, just look at, you know, today, right? This absolutely pathetic state of cybersecurity that we have, where 3 billion Yahoo accounts were hacked, and almost every American's credit card, and so on.

S3

Speaker 3

58:32

Why is this happening? It's ultimately happening because we have software that nobody fully understood how it worked. That's why the bugs hadn't been found. And I think AI can be used very effectively for offense, for hacking, but it can also be used for defense.

S3

Speaker 3

58:52

Hopefully automating verifiability and creating systems that are built in different ways so you can actually prove things about them. And it's important.

S2

Speaker 2

59:05

So speaking of software that nobody understands how it works: of course, a bunch of people asked about your paper, about your thoughts on why does deep and cheap learning work so well. That's the paper. But what are your thoughts on deep learning, these kinds of simplified models of our own brains, which have been able to do some successful perception work, pattern recognition work, and now, with AlphaZero and so on, some clever things? What are your thoughts about the promise and limitations of this approach?

S3

Speaker 3

59:35

Great. I think there are a number of very important insights, very important lessons we can already draw from these kinds of successes. 1 of them is, when you look at the human brain, you see it's very complicated, 10 to the 11 neurons, and there are all these different kinds of neurons, and yada yada, and there's been this long debate about whether the fact that we have dozens of different kinds is actually necessary for intelligence.