Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103

4 hours 8 minutes 57 seconds


S1

Speaker 1

00:00

The following is

S2

Speaker 2

00:00

a conversation with Ben Goertzel, one of the most interesting minds in the artificial intelligence community. He's the founder of SingularityNET, designer of the OpenCog AI framework, formerly a director of research at the Machine Intelligence Research Institute, and Chief Scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including in his organizing and contributing to the Conference on Artificial General Intelligence, the 2020 version of which is actually happening this week, Wednesday, Thursday, and Friday.

S2

Speaker 2

00:36

It's virtual and free. I encourage you to check out the talks, including by Joscha Bach from episode 101 of this podcast. Quick summary of the ads: two sponsors, the Jordan Harbinger Show and Masterclass. Please consider supporting this podcast by going to jordanharbinger.com slash Lex and signing up at masterclass.com slash Lex.

S2

Speaker 2

01:00

Click the links, buy all the stuff. It's the best way to support this podcast and the journey I'm on in my research and startup. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter at Lex Friedman, spelled without the E, just F-R-I-D-M-A-N.

S2

Speaker 2

01:25

As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. This episode is supported by the Jordan Harbinger Show. Go to jordanharbinger.com slash Lex, it's how he knows I sent you. On that page, there's links to subscribe to it on Apple Podcasts, Spotify, and everywhere else.

S2

Speaker 2

01:44

I've been binging on his podcast. Jordan is great. He gets the best out of his guests, dives deep, calls them out when it's needed, and makes the whole thing fun to listen to. He's interviewed Kobe Bryant, Mark Cuban, Neil deGrasse Tyson, Garry Kasparov, and many more. His conversation with Kobe is a reminder how much focus and hard work is required for greatness in sport, business, and life.

S2

Speaker 2

02:11

I highly recommend the episode if you want to be inspired. Again, go to jordanharbinger.com slash Lex. It's how Jordan knows I sent you. This show is sponsored by Masterclass.

S2

Speaker 2

02:23

Sign up at masterclass.com slash Lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For 180 bucks a year, you get an all-access pass to watch courses from, to list some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of the greatest city-building game ever, SimCity and The Sims, on game design.

S2

Speaker 2

02:52

Carlos Santana on guitar. Garry Kasparov, the greatest chess player ever, on chess. Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money.

S2

Speaker 2

03:08

Once again, sign up at masterclass.com slash Lex to get a discount and to support this podcast. And now, here's my conversation with Ben Goertzel. What books, authors, ideas had

S1

Speaker 1

03:23

a lot of impact on you in your life in

S3

Speaker 3

03:25

the early days? You know, what got me into AI and science fiction and such in the first place wasn't a book, but the original Star Trek TV show, which my dad watched with me like in its first run. It would have been like 1968, 69 or something.

S3

Speaker 3

03:43

And that was incredible because every show they visited a different alien civilization with a different culture and weird mechanisms. But that got me into science fiction, and there wasn't that much science fiction to watch on TV at that stage. So that got me into reading the whole literature of science fiction, you know, from the beginning of the previous century until that time. And I mean, there were so many science fiction writers who were inspirational to me.

S3

Speaker 3

04:12

I'd say if I had to pick two, it would have been Stanislaw Lem, the Polish writer. Yeah, Solaris, and then he had a bunch of more obscure writings on superhuman AIs that were engineered. Solaris was sort of a superhuman, naturally occurring intelligence. Then Philip K.

S3

Speaker 3

04:32

Dick, who, you know, ultimately my fandom for Philip K. Dick is one of the things that brought me together with David Hanson, my collaborator on robotics projects. So, you know, Stanislaw Lem was very much an intellectual, right? So he had a very broad view of intelligence going beyond the human and into what I would call, you know, open-ended superintelligence.

S3

Speaker 3

04:57

The Solaris superintelligent ocean was intelligent, in some ways more generally intelligent than people, but in a complex and confusing way, so that human beings could never quite connect to it, but it was still palpably very, very smart. And then the Golem 4 supercomputer in one of Lem's books. This was engineered by people, but eventually it became very intelligent in a different direction than humans and decided that humans were kind of trivial and not that interesting. So it put some impenetrable shield around itself, shut itself off from humanity, and then issued some philosophical screed about the pathetic and hopeless nature of humanity and all human thought, and then disappeared.

S3

Speaker 3

05:48

Now, Philip K. Dick, he was a bit different. He was human-focused, right? His main thing was human compassion and the human heart and soul are going to be the constant that will keep us going through whatever aliens we discover or telepathy machines or super AIs or whatever it might be.

S3

Speaker 3

06:09

So he didn't believe in reality, like the reality that we see may be a simulation or a dream or something else we can't even comprehend, but he believed in love and compassion as something persistent through the various simulated realities. So those two science fiction writers had a huge impact on me. Then, a little older than that, I got into Dostoevsky and Friedrich Nietzsche and Rimbaud and a bunch of more literary-type writing.

S1

Speaker 1

06:37

We talk about some of those things. So on the Solaris side, Stanislaw Lem, this kind of idea of there being intelligences out there that are different than our own. Do you think there are intelligences maybe all around us that we're not able to even detect?

S1

Speaker 1

06:56

So this kind of idea of, maybe you can comment also on Stephen Wolfram, thinking that there's computations all around us, and we're just not smart enough to detect their intelligence, or appreciate their intelligence.

S3

Speaker 3

07:10

Yeah, so my friend Hugo de Garis, who I've been talking to about these things for many decades, since the early 90s, he had an idea he called SIPI, the Search for Intra-Particulate Intelligence. So the concept there was, as AIs get smarter and smarter and smarter, assuming the laws of physics as we know them now are still what these superintelligences perceive to hold and are bound by, as they get smarter and smarter, they're gonna shrink themselves littler and littler, because special relativity limits how fast they can communicate between two spatially distant points.

S3

Speaker 3

07:49

So they're gonna get smaller and smaller, but then ultimately, what does that mean? The minds of the super, super, super intelligences, they're gonna be packed into the interaction of elementary particles or quarks, or the partons inside quarks, or whatever it is. So what we perceive as random fluctuations on the quantum or subquantum level may actually be the thoughts of the micro, micro, micro miniaturized superintelligences, because there's no way we can tell random apart from structure with an algorithmic information content more complex than our brains, right? We can't tell the difference.
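
To make the "can't tell random from structured" point precise, one standard formalism is Kolmogorov complexity (assumed here; it isn't spelled out in the conversation). Relative to a universal machine $U$, the complexity of a string $x$ is the length of its shortest generating program:

$$K_U(x) \;=\; \min\{\,\ell(p) \;:\; U(p) = x\,\}, \qquad x \text{ is effectively random when } K_U(x) \gtrsim |x|.$$

An observer can only detect structure it can model, so a signal whose shortest generating program is longer than anything our brains or instruments can represent is, for us, indistinguishable from noise.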

S3

Speaker 3

08:24

So what we think is random could be the thought processes of some really tiny super minds, and if so, there's not a damn thing we can do about it except try to upgrade our intelligences and expand our minds so that we can perceive more

S1

Speaker 1

08:40

of what's around us. But if those random fluctuations, like even if we go to quantum mechanics, if that's actually superintelligent systems, aren't we then part of the soup of super intelligence? Aren't we just like a finger of the entirety of the body of the super intelligent system?

S3

Speaker 3

09:01

It could be, I mean a finger is a strange metaphor. I mean, we. Well,

S1

Speaker 1

09:07

a finger is dumb

S3

Speaker 3

09:08

is what

S1

Speaker 1

09:09

I mean. But a finger

S3

Speaker 3

09:11

is also useful and is controlled with intent by the brain, whereas we may be much less than that, right? I mean, yeah, we may be just some random epiphenomenon that they don't care about too much. Like, think about the shape of the crowd emanating from a sports stadium or something, right?

S3

Speaker 3

09:28

There's some emergent shape to the crowd. It's there. You could take a picture of it. It's kind of cool.

S3

Speaker 3

09:33

It's irrelevant to the main point of the sports event or where the people are going or what's on the minds of the people making that shape in the crowd, right? So we may just be some semi-arbitrary higher-level pattern popping out of a lower-level hyper-intelligent self-organization. And I mean, so be it, right? I mean, that's one thing that's...

S3

Speaker 3

09:57

Still a fun ride. Yeah, I mean, the older I've gotten, the more respect I've gained for our fundamental ignorance. I mean, mine and everybody else's. I mean, I look at my two dogs, two beautiful little toy poodles, and they watch me sitting at the computer typing.

S3

Speaker 3

10:14

They just think I'm sitting there wiggling my fingers to exercise them, maybe, or guarding the monitor on the desk. They have no idea that I'm communicating with other people halfway around the world, let alone creating complex algorithms running in RAM on some computer server in St. Petersburg or something, although they're right there in the room with me. So what things are there right around us that we're just too stupid or close-minded to comprehend? Probably quite a lot.

S1

Speaker 1

10:42

Your very brain could also be communicating across multiple dimensions with other beings, and you're too unintelligent to understand the kind of communication mechanism they're going through.

S3

Speaker 3

10:56

There have been various TV shows and science fiction novels positing that cats, dolphins, mice, and whatnot are actually superintelligences here to observe us. I would guess, as one or another of the quantum physics founders said, those theories are not crazy enough to be true.

S3

Speaker 3

11:15

The reality's probably crazier than that.

S1

Speaker 1

11:17

Beautiful point. So on the human side, with Philip K. Dick and in general, where do you fall on this idea that love and just the basic spirit of human nature persists throughout these multiple realities? Are you on the side, like, the thing that inspires you about artificial intelligence, is it the human side of somehow persisting through all of the different systems we engineer, or does AI inspire you to create something that's greater than human, that's beyond human, that's almost non-human?

S3

Speaker 3

11:59

I would say my motivation to create AGI comes from both of those directions, actually. So when I first became passionate about AGI, when I was, it would have been two or three years old, after watching robots on Star Trek, I mean, then it was really a combination of intellectual curiosity, like can a machine really think, how would you do that? And yeah, just ambition to create something much better than all the clearly limited and fundamentally defective humans I saw around me.

S3

Speaker 3

12:31

Then as I got older and got more enmeshed in the human world and got married, had children, saw my parents begin to age, I started to realize, well, not only will AGI let you go far beyond the limitations of the human, but it could also stop us from dying and suffering and feeling pain and tormenting ourselves mentally. So you can see AGI has amazing capability to do good for humans, as humans, alongside its capability to go far, far beyond the human level. So, I mean, both aspects are there, which makes it even more exciting and important.

S1

Speaker 1

13:13

So you mentioned Dostoevsky and Nietzsche. What did you pick up from those guys? I mean...

S3

Speaker 3

13:18

That would probably go beyond the scope of a brief interview, certainly. But both of those are amazing thinkers who one will necessarily have a complex relationship with. So, I mean, Dostoevsky, on the minus side, he's kind of a religious fanatic, and he sort of helped squash the Russian nihilist movement, which was very interesting, because what nihilism meant originally in that period of the mid-to-late 1800s in Russia was not taking anything fully 100% for granted.

S3

Speaker 3

13:52

It was really more like what we'd call Bayesianism now, where you don't want to adopt anything as a dogmatic certitude and always leave your mind open. And how Dostoevsky parodied nihilism was a bit different. He parodied it as people who believe absolutely nothing, so they must assign an equal probability weight to every proposition, which doesn't really work. So on the one hand, I didn't really agree with Dostoevsky on his sort of religious point of view.

S3

Speaker 3

14:26

On the other hand, if you look at his understanding of human nature and sort of the human mind and heart and soul, it's really unparalleled. He had an amazing view of how human beings, you know, construct a world for themselves based on their own understanding and their own mental predisposition. And I think if you look at The Brothers Karamazov in particular, the Russian literary theorist Mikhail Bakhtin wrote about this as a polyphonic mode of fiction, which means it's not third person, but it's not first person from any one person really.

S3

Speaker 3

15:05

There are many different characters in the novel, and each of them is sort of telling part of the story from their own point of view. So the reality of the whole story is an intersection, like synergetically, of the many different characters' world views. And that really, it's a beautiful metaphor and even a reflection, I think, of how all of us socially create our reality. Like, each of us sees the world in a certain way. Each of us, in a sense, is making the world as we see it, based on our own minds and understanding, but it's polyphony, like in music, where multiple instruments are coming together to create the sound.

S3

Speaker 3

15:44

The ultimate reality that's created comes out of each of our subjective understandings intersecting with each other, and that was one of

S1

Speaker 1

15:52

the many beautiful things in Dostoevsky. So maybe a little bit to mention, you have a connection to Russia and the Soviet culture. I mean, I'm not sure exactly what the nature of the connection is, but at least the spirit of your thinking

S3

Speaker 3

16:07

is in there. Well, my ancestry is three quarters Eastern European Jewish. So, I mean, three of my great-grandparents emigrated to New York from Lithuania and border regions of Poland, which were in and out of Poland, around the time of World War I.

S3

Speaker 3

16:28

And they were socialists and communists as well as Jews, mostly Menshevik, not Bolshevik, and they sort of, they fled at just the right time to the US for their own personal reasons. And then almost all, or maybe all, of my extended family that remained in Eastern Europe was killed either by Hitler's or Stalin's minions at some point. So the branch of the family that emigrated to the US was pretty much the only one that survived. So how much of

S1

Speaker 1

16:57

the spirit of the people is in your blood still? Like, when you look in the mirror, do you see, what do you see?

S3

Speaker 3

17:04

Meat. I see a bag of meat that I want to transcend by uploading into some sort of superior reality. But very, I mean, yeah, very clearly, I mean, I'm not religious in a traditional sense, but clearly the Eastern European Jewish tradition was what I was raised in. I mean, my grandfather, Leo Zewail, was a physical chemist who worked with Linus Pauling and a bunch of the other early greats in quantum mechanics.

S3

Speaker 3

17:38

I mean, he was into x-ray diffraction. He was on the material science side, an experimentalist rather than a theorist. His sister was also a physicist. And my father's father, Victor Goertzel, had a PhD in psychology and the unenviable job of giving psychotherapy to the Japanese in internment camps in the US in World War II, like to counsel them why they shouldn't kill themselves, even though they'd had all their stuff taken away and been imprisoned for no good reason.

S3

Speaker 3

18:10

So I mean, there's a lot of Eastern European Jewishness in my background. One of my great uncles, Mickey Salkind, was, I guess, conductor of the San Francisco Orchestra, so there's a bunch of music in there also. And clearly, this culture was all about learning and understanding the world, and also not quite taking yourself too seriously while you do it, right? There's a lot of Yiddish humor in there.

S3

Speaker 3

18:42

So I do appreciate that culture, although the whole idea that the Jews are the chosen people of God never resonated with me too much. The graph

S1

Speaker 1

18:53

of the Goertzel family, I mean just the people I've encountered just doing some research and just knowing your work through the decades, it's kind of fascinating. Just the number of PhDs.

S3

Speaker 3

19:07

It's kind of fascinating. My dad is a sociology professor who recently retired from Rutgers University, but clearly that gave me a head start in life. I mean, my grandfather gave me all those quantum mechanics books when I was like seven or eight years old.

S3

Speaker 3

19:24

I remember going through them, and it was all the old quantum mechanics, like Rutherford atoms and stuff. Then I got to the part about wave functions, which I didn't understand, although I was a very bright kid. And I realized he didn't quite understand it either, but at least, like, he pointed me to some professor he knew at UPenn nearby who understood these things, right? So that's an unusual opportunity for a kid to have, right?

S3

Speaker 3

19:49

My dad, he was programming Fortran when I was 10 or 11 years old on like HP 3000 mainframes at Rutgers University. So I got to do linear regression in Fortran on punch cards when I was in middle school, because he was doing, I guess, analysis of demographic and sociology data. So yes, certainly that gave me a head start and a push towards science beyond what would have been the case in many, many different situations.
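
For the curious, the kind of fit he describes amounts to ordinary least squares; here is a minimal sketch in modern Python rather than 1970s Fortran, with made-up illustrative data:

```python
# Minimal ordinary least-squares linear regression: fit y ~ a + b*x
# using the closed-form slope/intercept (the same computation a
# 1970s Fortran punch-card program would have done).

def linear_regression(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope b = cov(x, y) / var(x); intercept a = mean(y) - b * mean(x)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    b = cov_xy / var_x
    a = mean_y - b * mean_x
    return a, b

# Made-up data for illustration (roughly y = 2x with noise).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = linear_regression(xs, ys)
print(f"intercept={a:.3f}, slope={b:.3f}")  # approximately 0 and 2
```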

S1

Speaker 1

20:19

When did you first fall in love with AI? Is it the programming side of Fortran? Is it maybe the sociology, psychology that you picked up from your dad?

S1

Speaker 1

20:28

Or is it the quantitatives?

S3

Speaker 3

20:28

I fell in love with AI when I was probably three years old, when I saw a robot on Star Trek. It was turning around in a circle going, error, error, error, error, because Spock and Kirk had tricked it into a mechanical breakdown by presenting it with a logical paradox.

S3

Speaker 3

20:42

And I was just like, well, this makes no sense. This AI is very, very smart. It's been traveling all around the universe, but these people could trick it with a simple logical paradox. Like what, if the human brain can get beyond that paradox, why can't this AI?

S3

Speaker 3

20:59

So I felt the screenwriters of Star Trek had misunderstood the nature of intelligence. I complained to my dad about it, and he wasn't gonna say anything one way or the other. But before I was born, when my dad was at Antioch College in the middle of the US, he led a protest movement called SLAM, Student League Against Mortality. They were protesting against death, wandering across the campus.

S3

Speaker 3

21:31

So he was into some futuristic things even back then, but whether AI could confront logical paradoxes or not, he didn't know. But when I, 10 years after that or something, discovered Douglas Hofstadter's book, Gödel, Escher, Bach, that was sort of to the same point of AI and paradox and logic, right? Because he was over and over with Gödel's incompleteness theorem, and can an AI really fully model itself reflexively, or does that lead you into some paradox?

S3

Speaker 3

22:02

Can the human mind truly model itself reflexively, or does that lead you into some paradox? So I think that book, Gödel, Escher, Bach, which I think I read when it first came out, I would've been 12 years old or something. I remember it was like a 16-hour day.

S3

Speaker 3

22:17

I read it cover to cover and then re-read it. I re-read it after that, because there were a lot of weird things with little formal systems in there that were hard for me at the time. But that was the first book I read that gave me a feeling for AI as like a practical academic or engineering discipline that people were working in. Because before I read Gödel, Escher, Bach, I was into AI from the point of view of a science fiction fan.

S3

Speaker 3

22:43

And I had the idea, well, it may be a long time before we can achieve immortality in superhuman AGI, so I should figure out how to build a spacecraft traveling close to the speed of light, go far away, then come back to the Earth in a million years when technology is more advanced and we can build these things. Reading Gödel, Escher, Bach, well, it didn't all ring true to me. A lot of it did, but I could see there are smart people right now at various universities around me who are actually trying to work on building what I would now call AGI, although Hofstadter didn't call it that. So really, it was when I read that book, which would have been probably middle school, that then I started to think, well, this is something that I could practically work on.

S1

Speaker 1

23:28

Yeah, as opposed to flying away and waiting it out, you can actually be one of the people that actually builds the system. Yeah, exactly,

S3

Speaker 3

23:35

and if you think about, I mean, I was interested in what we'd now call nanotechnology and in the human immortality and time travel, all the same cool things as every other science fiction loving kid. But AI seemed like, if Hofstadter was right, you just figure out the right program, sit there and type it. Like you don't need to spin stars into weird configurations or get government approval to cut people up and fiddle with their DNA or something, right?

S3

Speaker 3

24:04

It's just programming. And then of course, that can achieve anything else. There's another book from back then, which was by Feinberg, Gerald Feinberg, who was a physicist at Princeton. And that was The Prometheus Project.

S3

Speaker 3

24:24

And this book was written in the late 1960s, though I encountered it in the mid-70s. But what this book said is in the next few decades, humanity is gonna create superhuman thinking machines, molecular nanotechnology, and human immortality. And then the challenge we'll have is what to do with it. Do we use it to expand human consciousness in a positive direction?

S3

Speaker 3

24:45

Or do we use it just to further vapid consumerism? And what he proposed was that the UN should do a survey on this. And the UN should send people out to every little village in remotest Africa or South America and explain to everyone what technology was going to bring the next few decades and the choice that we had about how to use it, and let everyone on the whole planet vote about whether we should develop super AI, nanotechnology, and immortality for expanded consciousness or for rampant consumerism. And needless to say, that didn't quite happen.

S3

Speaker 3

25:22

I think this guy died in the mid-80s, so we didn't even see his ideas start to become more mainstream. But it's interesting, many of the themes I'm engaged with now, from AGI and immortality, even to trying to democratize technology, as I've been pushing forward with SingularityNET and my work in the blockchain world. Many of these themes were there in Feinberg's book in the late 60s even. And of course, Valentin Turchin, a Russian writer and a great Russian physicist who I got to know when we both lived in New York in the late 90s and early aughts, I mean, he had a book in the late 60s in Russia, which was The Phenomenon of Science, which laid out all these same things as well.

S3

Speaker 3

26:10

And Val died in, I don't remember, 2004 or 5 or something, of Parkinsonism. So yeah, it's easy for people to lose track now of the fact that the futurist and singularitarian advanced technology ideas that are now almost mainstream and are on TV all the time, I mean, these are not that new, right? They're sort of new in the history of the human species.

S3

Speaker 3

26:37

But I mean, these were all around in fairly mature form in the middle of the last century, were written about quite articulately by fairly mainstream people who were professors at top universities. It's just until the enabling technologies got to a certain point, then you couldn't make it real. So, and even in the 70s, I was sort of seeing that and living through it, right? From Star Trek to Douglas Hofstadter, things were getting very, very practical from the late 60s to the late 70s.

S3

Speaker 3

27:11

And the first computer I bought, you could only program with hexadecimal machine code and you had to solder it together. And then like a few years later, there were punch cards. And a few years later, you could get like an Atari 400 and Commodore VIC-20, and you could type on the keyboard and program in higher-level languages alongside the assembly language. So these ideas have been building up a while, and I guess my generation got to feel them build up, which is different than people coming into the field now, for whom these things have just been part of the ambiance of culture for their whole career, or even their whole life.

S1

Speaker 1

27:54

Well, it's fascinating to think about there being all of these ideas kind of swimming, almost with a noise all around the world, all the different generations, and then some kind of nonlinear thing happens where they percolate up and capture the imagination of the mainstream. And that seems to be what's happening with AI now.

S3

Speaker 3

28:14

I mean, Nietzsche, who you mentioned, had the idea of the Superman, right? But he didn't understand enough about technology to think you could physically engineer a Superman by piecing together molecules in a certain way. He was a bit vague about how the Superman would appear, but he was quite deep in thinking about what the state of consciousness and the mode of cognition of a Superman would be.

S3

Speaker 3

28:42

He was a very astute analyst of how the human mind constructs the illusion of a self, how it constructs the illusion of free will, how it constructs values like good and evil out of its own desire to maintain and advance its own organism. He understood a lot about how human minds work. Then he understood a lot about how post-human minds would work. I mean, the superman was supposed to be a mind that would basically have complete root access to its own brain and consciousness and be able to architect its own value system and inspect and fine-tune all of its own biases.

S3

Speaker 3

29:24

So that's a lot of powerful thinking there, which then fed in and sort of seeded all of postmodern continental philosophy and all sorts of things have been very valuable in development of culture and indirectly even of technology. But of course, without the technology there, it was all some quite abstract thinking. So Now we're at a time in history when a lot of these ideas can be made real, which is amazing and scary, right?

S1

Speaker 1

29:54

It's kind of interesting to think, what do you think Nietzsche would, if he was born a century later, or transported through time, what do you think he would say about AI? I mean.

S3

Speaker 3

30:02

Well, those are quite different, if he's born a century later or transported through time. Well, he'd

S1

Speaker 1

30:07

be on like TikTok and Instagram and he would never write the great works he's written. So let's transport him through time.

S3

Speaker 3

30:13

Maybe Also sprach Zarathustra would be a music video, right? I mean, who knows?

S1

Speaker 1

30:19

Yeah, but if he was transported through time, do you think, that'd be interesting actually to go back. You just made me realize that it's possible to go back and read Nietzsche with an eye of, is there some thinking about artificial beings? I'm sure he had inklings, I mean with Frankenstein before him, I'm sure he had inklings of artificial beings somewhere in the text. It'd be interesting to try to read his work to see if Superman was actually an AGI system, like if he had inklings of that kind of thinking.

S1

Speaker 1

30:57

He didn't. He didn't.

S3

Speaker 3

31:00

No, I would say not. I mean, he had a lot of inklings of modern cognitive science, which are very interesting. If you look in like the third part of the collection that's been titled The Will to Power, I mean, in book 3 there, there's very deep analysis of thinking processes.

S3

Speaker 3

31:20

But he wasn't so much of a physical tinkerer type guy, right, he was very abstract.

S1

Speaker 1

31:29

Do you think, what do you think about the will to power? What do you think drives humans? Is it...

S3

Speaker 3

31:37

Oh, an unholy mix of things. I don't think there's one pure, simple, and elegant objective function driving humans by any means.

S1

Speaker 1

31:50

What do you think, if we look at, I know it's hard to look at humans in an aggregate, but do you think overall humans are good? Or do we have both good and evil within us that, depending on the circumstances, depending on whatever, can percolate to the top?

S3

Speaker 3

32:08

Good and evil are very ambiguous, complicated, and in some ways silly concepts. But we can dig into your question from a couple of directions. So I think if you look at evolution, humanity is shaped both by individual selection and what biologists would call group selection, like tribe-level selection, right?

S3

Speaker 3

32:32

So individual selection has driven us in a selfish-DNA sort of way, so that each of us does, to a certain approximation, what will help us propagate our DNA to future generations. I mean, that's why I've got four kids so far, and probably that's not the last one. On the other hand. I like the ambition.

S3

Speaker 3

32:56

Tribal, like group selection, means humans in a way will do what will advance the persistence of the DNA of their whole tribe or their social group. And in biology, you have both of these, right? And you can see, say, in an ant colony or a beehive, there's a lot of group selection in the evolution of those social animals. On the other hand, say a big cat or some very solitary animal, it's a lot more biased toward individual selection.

S3

Speaker 3

33:26

Humans are an interesting balance. And I think this reflects itself in what we would view as selfishness versus altruism to some extent. So we just have both of those objective functions contributing to the makeup of our brains. And then as Nietzsche analyzed in his own way, and others have analyzed in different ways.

S3

Speaker 3

33:49

I mean, we abstract this as, well, we have both good and evil within us, right? Because a lot of what we view as evil is really just selfishness. A lot of what we view as good is altruism, which means doing what's good for the tribe. And on that level, we have both of those just baked into us, and that's how it is.

S3

Speaker 3

34:13

Of course, there are psychopaths and sociopaths and people who get gratified by the suffering of others, and that's a different thing.

S1

Speaker 1

34:25

Yeah, those are exceptions, but on the whole.

S3

Speaker 3

34:27

Yeah, but I think at core, we're not purely selfish, we're not purely altruistic, we are a mix and that's the nature of it. And we also have a complex constellation of values that are just very specific to our evolutionary history. Like we love waterways and mountains, and the ideal place to put a house is in a mountain overlooking the water, right?

S3

Speaker 3

34:56

And we care a lot about our kids, and we care a little less about our cousins, and even less about our fifth cousins. I mean, there are many particularities to human values, which whether they're good or evil depends on your perspective. Say, I spent a lot of time in Ethiopia, in Addis Ababa, where we have one of our AI development offices for my SingularityNET project. And when I walk through the streets in Addis, there's people lying by the side of the road, like just living there by the side of the road, dying probably of curable diseases without enough food or medicine.

S3

Speaker 3

35:37

And when I walk by them, I feel terrible, I give them money. When I come back home to the developed world, they're not on my mind that much. I do donate some, but I mean, I also spend some of the money I have enjoying myself in frivolous ways rather than donating it to those people who are right now starving, dying, and suffering on the roadside. So does that make me evil?

S3

Speaker 3

36:02

I mean, it makes me somewhat selfish and somewhat altruistic, and we each balance that in our own way, right? So whether that will be true of all possible AGIs is a subtler question. That's how humans are.

S1

Speaker 1

36:22

So you have a sense, you kind of mentioned that there's a selfish, I'm not gonna bring up the whole Ayn Rand idea of selfishness being the core virtue, that's a whole interesting kind of tangent that I think we'll

S3

Speaker 3

36:34

just distract ourselves on. I have to make 1 amusing comment. Sure.

S3

Speaker 3

36:39

Or a comment that has amused me anyway. So the, yeah, I have extraordinary negative respect for Ayn Rand.

S1

Speaker 1

36:48

Negative? What's a negative respect?

S3

Speaker 3

36:51

But when I worked with a company called Genescient, which was evolving flies to have extraordinarily long lives in Southern California, we had flies that were evolved by artificial selection to have five times the lifespan of normal fruit flies. But the population of super long-lived flies was physically sitting in a spare room at an Ayn Rand elementary school in Southern California.

S3

Speaker 3

37:17

So that was just like, well, if I saw this in a movie, I wouldn't believe it.

S1

Speaker 1

37:23

Well, yeah, the universe has a sense of humor in that kind of way. That fits in, humor fits in somehow into this whole absurd existence. But you mentioned the balance between selfishness and altruism as kind of being innate.

S1

Speaker 1

37:36

Do you think it's possible that's kind of an emergent phenomenon, those peculiarities of our value system, how much of it is innate, how much of it is something we collectively, kind of like a Dostoevsky novel, bring to life together as a civilization?

S3

Speaker 3

37:54

I mean, the answer to nature versus nurture is usually both, and of course it's nature versus nurture versus self-organization, as you mentioned. So clearly, there are evolutionary roots to individual and group selection leading to a mix of selfishness and altruism. On the other hand, different cultures manifest that in different ways.

S3

Speaker 3

38:19

Well, we all have basically the same biology. And if you look at sort of pre-civilized cultures, you have tribes like the Yanomamo in Venezuela, whose culture is focused on killing other tribes. And you have other Stone Age tribes that are mostly peaceable and have big taboos against violence. So you can certainly have a big difference in how culture manifests these innate biological characteristics.

S3

Speaker 3

38:50

But still, there's probably limits that are given by our biology. I used to argue this with my great-grandparents who were Marxists, actually, because they believed in the withering away of the state. They believed that as you move from capitalism to socialism to communism, people would just become more social-minded so that a state would be unnecessary and everyone would give everyone else what they needed. Now, setting aside that that's not what the various Marxist experiments on the planet seem to be heading toward in practice, just as a theoretical point, I was very dubious that human nature could go there.

S3

Speaker 3

39:37

Like at that time when my great-grandparents were alive, I was just, you know, a cynical teenager. I think humans are just jerks. The state is not going to wither away. If you don't have some structure keeping people from screwing each other over, they're going to do it.

S3

Speaker 3

39:52

So now I actually don't quite see things that way. I mean, I think my feeling now subjectively is the culture aspect is more significant than I thought it was when I was a teenager. And I think you could have a human society that was dialed dramatically further toward self-awareness, other awareness, compassion, and sharing than our current society. And of course, greater material abundance helps, but to some extent, material abundance is a subjective perception also, because many Stone Age cultures perceived themselves as living in great material abundance, that they had all the food and water they wanted, they lived in a beautiful place, they had sex lives, they had children, I mean, they had abundance without any factories, right?

S3

Speaker 3

40:43

So I think humanity probably would be capable of a fundamentally more positive and joy-filled mode of social existence than what we have now. Clearly Marx didn't quite have the right idea about how to get there. I mean, he missed a number of key aspects of human society and its evolution. And if we look at where we are in society now, how to get there is a quite different question, because there are very powerful forces pushing people in different directions than a positive, joyous, compassionate existence, right?

S1

Speaker 1

41:26

So if we were to try to, you know, Elon Musk dreams of colonizing Mars at the moment, so maybe we'll have a chance to start a new civilization with a new governmental system. And certainly there's quite a bit of chaos.

S1

Speaker 1

41:41

We're sitting now, I don't know what the date is, but this is June. There's quite a bit of chaos in all different forms going on in the United States and all over the world. So there's a hunger for new types of governments, new types of leadership, new types of systems. And So what are the forces at play and how do we move forward?

S3

Speaker 3

42:04

Yeah, I mean, colonizing Mars, first of all, it's a super cool thing to do. We should be doing it.

S1

Speaker 1

42:09

So you love the idea?

S3

Speaker 3

42:11

Yeah, I mean, it's more important than making chocolatier chocolates and sexier lingerie and many of the things that we spend a lot more resources on as a species, right? So I mean, we certainly should do it. I think the possible futures in which a Mars colony makes a critical difference for humanity are very few.

S3

Speaker 3

42:40

I mean, I think, I mean, assuming we make a Mars colony and people go live there in a couple decades, I mean, their supplies are gonna come from Earth, the money to make the colony came from Earth, and whatever powers are supplying the goods there from Earth are going to, in effect, be in control of that Mars colony. Of course, There are outlier situations where Earth gets nuked into oblivion and somehow Mars has been made self-sustaining by that point. And then Mars is what allows humanity to persist. But I think that those are very, very, very unlikely.

S1

Speaker 1

43:19

You don't think it could be a first step on a long journey?

S3

Speaker 3

43:23

Of course it's a first step on a long journey, which is awesome. I'm guessing the colonization of the rest of the physical universe will probably be done by AGIs that are better designed to live in space than by the meat machines that we are. But I mean, who knows?

S3

Speaker 3

43:42

We may cryopreserve ourselves in some superior way to what we know now, and shoot ourselves out to Alpha Centauri and beyond. I mean, that's all cool, it's very interesting, and it's much more valuable than most things that humanity is spending its resources on. On the other hand, with AGI, we can get to a singularity before the Mars colony becomes self-sustaining, for sure, possibly before it's even operational. And so-

S1

Speaker 1

44:09

So your intuition is that AGI, if we really invest resources in it, is something we can get to faster than a legitimate, full, self-sustaining colonization of Mars?

S3

Speaker 3

44:19

Yeah, and it's very clear that we will, to me, because there's so much economic value in getting from narrow AI toward AGI, whereas the Mars colony, there's less economic value until you get quite far out into the future. So I think that's very interesting. I just think it's somewhat off to the side.

S3

Speaker 3

44:44

I mean, just as I think, say, art and music are very, very interesting, and I wanna see resources go into amazing art and music being created, and I'd rather see that than a lot of the garbage that society spends their money on. On the other hand, I don't think Mars colonization or inventing amazing new genres of music is one of the things that is most likely to make a critical difference in the evolution of human or non-human life in this part of the universe over the next decade.

S1

Speaker 1

45:19

Do you think AGI is really?

S3

Speaker 3

45:21

AGI is by far the most important thing that's on the horizon, and then technologies that have direct ability to enable AGI or to accelerate AGI are also very important. For example, say quantum computing. I don't think that's critical to achieve AGI, but certainly you could see how the right quantum computing architecture could massively accelerate AGI.

S3

Speaker 3

45:49

Similarly, other types of nanotechnology, right? Now, the quest to cure aging and end disease, while not in the big picture as important as AGI, of course it's important to all of us as individual humans, and if someone made a super longevity pill and distributed it tomorrow, I mean, that would be huge and a much larger impact than a Mars colony is gonna have for quite some time.

S1

Speaker 1

46:20

But perhaps not as much as an AGI system.

S3

Speaker 3

46:23

No, because if you can make a benevolent AGI, then all the other problems are solved. I mean, then the AGI can be, once it's as generally intelligent as humans, it can rapidly become massively more generally intelligent than humans, and then that AGI should be able to solve science and engineering problems much better than human beings, as long as it is in fact motivated to do so. That's why I said a benevolent AGI.

S3

Speaker 3

46:52

There could be other kinds.

S1

Speaker 1

46:53

Maybe it's good to step back a little bit. I mean, we've been using the term AGI. People often cite you as the creator, or at least the popularizer of the term AGI, artificial general intelligence.

S1

Speaker 1

47:05

Can you tell the origin story of the term?

S3

Speaker 3

47:09

Sure, sure. So yeah, I would say I launched the term AGI upon the world, for what it's worth, without ever fully being in love with the term. What happened is I was editing a book, and this process started around 2001 or 2002. I think the book came out in 2005 finally. I was editing a book which I provisionally was titling Real AI, and I mean, the goal was to gather together fairly serious academic-ish papers on the topic of making thinking machines that could really think in the sense like people can, or even more broadly than people can. So then I was reaching out to other folks that I'd encountered here or there who were interested in that, which included some other folks who I knew from the transhumanist and singularitarian world, like Peter Voss, who has a company, AGI Incorporated, still in California, and included Shane Legg, who had worked for me at my company, WebMind, in New York in the late 90s, who by now has become rich and famous.

S3

Speaker 3

48:20

He was one of the co-founders of Google DeepMind. But at that time, Shane was, I think he may have just started doing his PhD with Marcus Hutter, who at that time hadn't yet published his book, Universal AI, which sort of gives a mathematical foundation for artificial general intelligence. So I reached out to Shane and Marcus and Peter Voss and Pei Wang, who was another former employee of mine who had been Douglas Hofstadter's PhD student, who had his own approach to AGI, and a bunch of Russian folks. I reached out to these guys and they contributed papers for the book. But that was my provisional title, but I never loved it, because in the end, I was doing some, what we would now call narrow AI, as well, like applying machine learning to genomics data or chat data for sentiment analysis.

S3

Speaker 3

49:17

And I mean, that work is real. In a sense, it's really AI, it's just a different kind of AI. Ray Kurzweil wrote about narrow AI versus strong AI. But that seemed weird to me because, first of all, narrow and strong are not antonyms.

S3

Speaker 3

49:37

Right? Right. That's right. I mean, but secondly, strong AI was used in the cognitive science literature to mean the hypothesis that digital computer AIs could have true consciousness like human beings.

S3

Speaker 3

49:49

So there was already a meaning to strong AI, which was complexly different but related, right? So we were tossing around on an email list what title it should be. And so we talked about narrow AI, broad AI, wide AI, general AI, and I think it was either Shane Legg or Peter Voss on the private email discussion we had who said, well, why don't we go with AGI, artificial general intelligence. Pei Wang wanted to do GAI, general artificial intelligence, because in Chinese it goes in that order.

S3

Speaker 3

50:27

But we figured GAI wouldn't work in US culture at that time. So we went with AGI, and we used it for the title of that book. And part of Peter and Shane's reasoning was you have the g factor in psychology, which is IQ, general intelligence, right? So you have a meaning of GI, general intelligence, in psychology, so then you're looking like artificial GI.

S3

Speaker 3

50:56

So then we used that for the title of the book. And so I think maybe both Shane and Peter think they invented the term, but then later, after the book was published, this guy Mark Gubrud came up to me and he's like, well, I published an essay with the term AGI in like 1997 or something. And so I'm just waiting for some Russian to come out and say they published that in 1953. I mean, that term is not dramatically innovative or anything. It's one of these obvious-in-hindsight things, which is also annoying in a way because, you know, Joscha Bach, who you interviewed, is a close friend of mine.

S3

Speaker 3

51:40

He likes the term synthetic intelligence, which I like much better, but it hasn't actually caught on. Because, I mean, artificial is a bit off to me because artifice is like a tool or something, but not all AGIs are gonna be tools. I mean, they may be now, but we're aiming toward making them agents rather than tools. And in a way, I don't like the distinction between artificial and natural, because I mean, we're part of nature also, and machines are part of nature.

S3

Speaker 3

52:11

I mean, you can look at evolved versus engineered, but that's a different distinction. Then it should be engineered general intelligence, right? And then general, well, if you look at Marcus Hutter's book, Universal AI, what he argues there is, within the domain of computation theory, which is limited but interesting, so if you assume computable environments and computable reward functions, then he articulates what would be a truly general intelligence, a system called AIXI, which is quite beautiful. AIXI.
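
For reference, a sketch of the AIXI decision rule along the lines of Hutter's formulation (notation condensed; treat this as a summary under stated assumptions, not a quotation of the book): at cycle $k$, with horizon $m$, the agent chooses

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\,r_k + \cdots + r_m\,\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine and each candidate environment program $q$ of length $\ell(q)$ is weighted by the Solomonoff-style prior $2^{-\ell(q)}$. The mixture over all programs is what makes AIXI fully general, and also incomputable, which is the "infinite computing power" caveat that comes up below.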

S3

Speaker 3

52:44

AIXI, and that's the middle name of my latest child, actually.

S1

Speaker 1

52:49

What's the first name?

S3

Speaker 3

52:50

First name is Qorxi, Q-O-R-X-I, which my wife came up with, but that's an acronym for quantum organized rational expanding intelligence. And his middle name is Exiphones, actually, which refers to the formal principle underlying AIXI.

S1

Speaker 1

53:08

But in any case- You're giving Elon Musk's new child a run for his money.

S3

Speaker 3

53:12

Well, I did it first. He copied me with his new freakish name. But now if I have another baby, I'm gonna have to outdo him.

S3

Speaker 3

53:20

It's become an arms race of weird, geeky baby names. We'll see what the babies think about it. But I mean, my oldest son, Zarathustra, loves his name, and my daughter, Scheherazade, loves her name. So, so far, basically, if you give your kids weird names, they live

S1

Speaker 1

53:37

up to it.

S3

Speaker 3

53:37

Well, you're obliged to make the kids weird enough that they like the names, right? It directs their upbringing in a certain way. But, yeah, anyway, I mean, what Marcus showed in that book is that a truly general intelligence is theoretically possible, but would take infinite computing power.

S3

Speaker 3

53:53

So then the artificial is a little off, the general is not really achievable within physics as we know it, and I mean, physics as we know it may be limited, but that's what we have to work with now. Intelligence.

S1

Speaker 1

54:05

Infinitely general, you mean, like from an information-processing perspective, yeah.

S3

Speaker 3

54:10

Yeah, intelligence is not very well defined either. I mean, what does it mean? I mean, in AI now, it's fashionable to look at it as maximizing an expected reward over the future, but that sort of definition is pathological in various ways.
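
The fashionable definition he is gesturing at is, roughly, the reinforcement-learning objective, written here in one standard form (an assumption; the formula is not spelled out in the conversation):

$$V^{\pi} \;=\; \mathbb{E}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t}\, r_t \;\middle|\; \pi\right], \qquad 0 \le \gamma < 1,$$

with intelligence then measured as a policy $\pi$'s ability to achieve high $V^{\pi}$ across a wide class of environments, essentially the Legg-Hutter formalization. The pathologies alluded to include things like wireheading, where an agent maximizes the reward signal itself rather than whatever the signal was meant to measure.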

S3

Speaker 3

54:28

And my friend David Weinbaum, AKA Weaver, he had a beautiful PhD thesis on open-ended intelligence, trying to conceive intelligence in a- Without a reward.

S1

Speaker 1

54:38

Without objective function.

S3

Speaker 3

54:38

Yeah, he's just looking at it differently. He's looking at complex self-organizing systems and looking at an intelligent system as being 1 that revises and grows and improves itself in conjunction with its environment without necessarily there being 1 objective function it's trying to maximize. Although over certain intervals of time, it may act as if it's optimizing a certain objective function.

S3

Speaker 3

55:01

Very much like Solaris from Stanislaw Lem's novels, right? So yeah, the point is, artificial, general, and intelligence, they're all bad terms. On the other hand, everyone knows what AI is, and AGI seems immediately comprehensible to people with a technical background.

S3

Speaker 3

55:17

So I think that the term has served a sociological function. Now it's out there everywhere, which baffles me.

S1

Speaker 1

55:24

It's like KFC, I mean, that's it. We're stuck with AGI probably for a very long time until AGI systems take over and rename themselves.

S3

Speaker 3

55:33

Yeah, and then we'll be biological. We're stuck with GPUs too, which mostly have nothing to do with graphics anymore.

S1

Speaker 1

55:40

I wonder what the AGI system will call us humans.

S3

Speaker 3

55:44

Grandpa. Yeah. Yeah. GPs.

S3

Speaker 3

55:47

Yeah. Grandpa processing unit, yeah.

S1

Speaker 1

55:50

Biological grandpa processing units. Yeah. Okay, so maybe also just a comment on AGI representing, before even the term existed, representing a kind of community.

S1

Speaker 1

56:04

Now you've talked about this in the past, sort of AI has come in waves, but there's always been this community of people who dream about creating general human level superintelligence systems. Can you maybe give your sense of the history of this community as it exists today, as it existed before this deep learning revolution, all throughout the winters and the summers of AI?

S3

Speaker 3

56:29

Sure. First I would say, as a side point, the winters and summers of AI are greatly exaggerated by Americans. And if you look at the publication record of the artificial intelligence community since, say, the 1950s, you would find a pretty steady growth and advance of ideas and papers. And what's thought of as an AI winter or summer was sort of how much money the US military was pumping into AI, which was meaningful.

S3

Speaker 3

57:04

On the other hand, there was AI going on in Germany, UK, and Japan, and Russia, all over the place, while US military got more and less enthused about AI. So, I mean.

S1

Speaker 1

57:17

Just for people who don't know, the US military happened to be the main source of funding for AI research. So another way to phrase that is it's the up and down of funding for artificial intelligence research.

S3

Speaker 3

57:30

And I would say the correlation between funding and intellectual advance was not 100%, right? Because, I mean, in Russia, as an example, or in Germany, there was less dollar funding than in the US, but many foundational ideas were laid out, but it was more theory than implementation. And US really excelled at sort of breaking through from theoretical papers to working implementations, which did go up and down somewhat with US military funding, but still, I mean, you can look in the 1980s, Dietrich Doerner in Germany had self-driving cars on the Autobahn, right?

S3

Speaker 3

58:11

And I mean, it was a little early with regard to the car industry, so it didn't catch on the way it has now. But I mean, that whole advancement of self-driving car technology in Germany was pretty much independent of AI military summers and winters in the US. So there's been more going on in AI globally than not only most people on the planet realize, but than most new AI PhDs realize, because they've come up within a certain subfield of AI and haven't had to look so much beyond that. But I would say, when I got my PhD in 1989 in mathematics, I was interested in AI already.

S1

Speaker 1

58:55

In Philadelphia, by the way.

S3

Speaker 3

58:56

Yeah, I started at NYU, then I transferred to Philadelphia, to Temple University, good old North Philly.

S1

Speaker 1

59:03

North Philly.

S3

Speaker 3

59:04

Yeah. The pearl of the US. You never stopped at a red light then, because you were afraid if you stopped at a red light, someone would carjack you. So you drove through every red light.

S3

Speaker 3

59:15

Yeah. Every day, driving or bicycling to Temple from my house was like a new adventure. But yeah, the reason I didn't do a PhD in AI was that what people were doing in the academic AI field then was just astoundingly boring and seemed wrong-headed to me.

S3

Speaker 3

59:34

It was really like rule-based expert systems and production systems. And actually, I loved mathematical logic. I had nothing against logic as the cognitive engine for an AI. But the idea that you could type in the knowledge that an AI would need to think seemed just completely stupid and wrong-headed to me. I mean, you can use logic if you want, but somehow the system has got to be.