2 hours 24 minutes 51 seconds
Speaker 1
02:00:00
So we're always modifying our bodies. So I think it's hard to imagine exactly what it will be like in the future.
Speaker 2
02:00:07
But on the Turing test side, do you think... So forget about love for a second. Let's talk about just like the Alexa Prize.
Speaker 2
02:00:16
Actually, I was invited to be, what is it, the interviewer for the Alexa Prize or whatever; that's in two days. Their idea is that success looks like a person wanting to talk to an AI system for a prolonged period of time, like twenty minutes. How far away are we? And why is it difficult to build an AI system with which you'd want to have a beer and talk for an hour or two?
Speaker 2
02:00:47
Not to check the weather or to check music, but just to talk as friends.
Speaker 1
02:00:53
Yeah. Well, you know, we saw Weizenbaum back in the sixties with his program ELIZA, being shocked at how much people would talk to ELIZA. I remember in the seventies typing stuff to ELIZA to see what it would come back with. You know, I think right now, and this is a thing that Amazon's been trying to improve with Alexa.
Speaker 1
02:01:20
There is no continuity of topic. You can't refer to what we talked about yesterday. It's not the same as talking to a person where there seems to be an ongoing existence, which changes.
Speaker 2
02:01:35
We share moments together and they last in our memory together.
Speaker 1
02:01:38
Yeah, but there's none of that. And there's no sort of intention of these systems that they have any goal in life, even if it's to be happy. They don't even have a semblance of that.
Speaker 1
02:01:53
Now, I'm not saying this can't be done. I'm just saying I think this is why we don't feel that way about them. Although that's sort of a minimal requirement if you want the sort of interaction you're talking about.
Speaker 1
02:02:07
Whether it's gonna be sufficient, I don't know, we haven't seen it yet. We don't know
Speaker 2
02:02:12
what it feels like. I tend to think it's not as difficult as solving intelligence, for example, and I think it's achievable in the near term. But on the Turing test, why don't you think the Turing test is a good test of intelligence?
Speaker 1
02:02:33
Oh, because, you know, again, if you read the paper, Turing wasn't saying this is a good test. He was using a rhetorical device to argue that if you can't tell the difference between a computer and a person, you must say that the computer is thinking, because you can't tell the difference; you can't say something different.
Speaker 1
02:02:59
What it has become is this sort of weird game of fooling people. So back at the AI lab in the late eighties, we had this thing that still goes on, called the AI Olympics. And one of the events we had one year was the original imitation game, as Turing talked about, because he starts by asking, can you tell whether it's a man or a woman? So we did that at the lab.
Speaker 1
02:03:28
We had, you know, you'd go and type, and the thing would come back, and you had to tell whether it was a man or a woman. And one man came up with a question he could ask which was always a dead giveaway of whether the other person was really a man or a woman. He would ask them, did you have green plastic toy soldiers as a kid? Yeah.
Speaker 1
02:04:01
What'd you do with them? And a woman trying to be a man would say, oh, I lined them up, we had wars, we had battles.
Speaker 1
02:04:08
And a man would just announce, I stomped on them, I burned them. So, you know, that's what the Turing test with computers has become: what's the trick question?
Speaker 1
02:04:23
That's why
Speaker 2
02:04:24
it's sort of
Speaker 1
02:04:26
devolved into this.
Speaker 2
02:04:29
Nevertheless, conversation not formulated as a test is a fascinatingly challenging dance. That's a really hard problem. To me, conversation, when not posed as a test, is a more intuitive illustration of how far away we are from solving intelligence than, say, computer vision.
Speaker 2
02:04:48
It's hard. Computer vision is harder for me to pull apart, but with language, with conversation, you could
Speaker 1
02:04:55
see. Because language is so human.
Speaker 2
02:04:57
It's so human. We can so clearly see it. Shit, you mentioned something I was going to go off on.
Speaker 2
02:05:07
Okay. I mean, I have to ask you, because you were the head of CSAIL, the AI lab, for a long time. To me, when I came to MIT, you were like one of the greats at MIT. So what was that time like?
Speaker 2
02:05:26
And plus, you're, I don't know, friends with, well, you knew Minsky and all the folks there, all the legendary AI people, of which you're one. So what was that time like? What are the memories that stand out to you from your time at MIT, at the AI lab, from the dreams the AI lab represented to the actual revolutionary work?
Speaker 1
02:05:53
Let me first tell you a disappointment in myself. As I've been researching this book, so many of the players were active in the fifties and sixties. I knew many of them when they were older, and I didn't ask them all the questions I now wish I had asked.
Speaker 1
02:06:11
I'd sit with them at our Thursday lunches, which we had as a faculty lunch, and I didn't ask them so many questions that now I wish I had.
Speaker 2
02:06:19
Can I ask you that question? Because you wrote that. You wrote that you were fortunate to know and rub shoulders with many of the greats, those who founded AI, robotics, and computer science, and the World Wide Web.
Speaker 2
02:06:31
And you wrote that your big regret nowadays is that often I have questions for those who have passed on. Yeah. And I didn't think to ask them any of these questions,
Speaker 1
02:06:40
right.
Speaker 2
02:06:41
Even as I saw them and said hello to them on a daily basis. So maybe also another question I want to ask, if you could talk to them today, what question would you ask? What questions would you ask?
Speaker 1
02:06:54
Oh, well, Licklider. I would ask him, you know, he had the vision of humans and computers working together, and he really founded that at DARPA, and he gave the money to MIT, which started Project MAC in 1963. And I would have talked to him about what the successes were, what the failures were, what he saw as progress, et cetera.
Speaker 2
02:07:21
I would
Speaker 1
02:07:21
have asked him more questions about that because now I could use it in my book, but I think it's lost. It's lost forever. A lot of the motivations are lost.
Speaker 1
02:07:36
I should have asked Marvin why he and Seymour Papert came down so hard on neural networks in 1969 in their book Perceptrons, because Marvin's PhD thesis was on neural networks.
Speaker 2
02:07:50
How do you make sense of that?
Speaker 1
02:07:51
That book destroyed the field.
Speaker 2
02:07:53
He probably, do you think he knew the effect that book would have?
Speaker 1
02:08:00
All the theorems are negative theorems. Yeah. Yeah.
Speaker 1
02:08:05
So yeah.
Speaker 2
02:08:07
That's just the way of, that's the way of life. Yeah. But still, it's kind of tragic that he was both the proponent and the destroyer of neural networks.
Speaker 2
02:08:17
Yeah. Are there other memories that stand out from the robotics and AI work at MIT?
Speaker 1
02:08:28
Well, yeah, but you gotta be more specific.
Speaker 2
02:08:31
Well, I mean, it's such a magical place. To me, it's a little bit heartbreaking that, you know, with Google and Facebook, DeepMind and so on, so much of the talent doesn't necessarily stay for prolonged periods of time at these universities.
Speaker 1
02:08:50
Oh yeah. I mean, some of the companies are more guilty than others of paying fabulous salaries to some of the highest producers. And then just, you never hear from them again.
Speaker 1
02:09:02
They're not allowed to give public talks. They're sort of locked away. It's sort of like collecting, you know, Hollywood stars or something, and they're not allowed to make movies anymore.
Speaker 1
02:09:14
I own them.
Speaker 2
02:09:15
Yeah, that's tragic. Because I mean, there's an openness to the university setting where you do research to both in the space of ideas and space like publication, all those kinds of things.
Speaker 1
02:09:25
Yeah, you know, there's the publication and all that. And often, although these places say they publish, there's pressure. But I think, for instance, net net, Google buying those eight or nine robotics companies was bad for the field, because it locked those people away.
Speaker 1
02:09:50
They didn't have to make the company succeed anymore. It locked them away for years, and then it all sort of frittered away. Yeah. So...
Speaker 2
02:10:02
Do you have hope for MIT? For MIT?
Speaker 1
02:10:07
Yeah, why shouldn't I?
Speaker 2
02:10:08
Well, I could be harsh and say that... I'm not sure I would say MIT is leading the world in AI. Or even Stanford.
Speaker 2
02:10:18
Or Berkeley. I would say... I would say, DeepMind, Google AI, Facebook AI...
Speaker 1
02:10:26
I would take a slightly different approach, a different answer. I'll come back to Facebook in a minute, but I think those other places are following a dream of one of the founders, and I'm not sure that the dream is well founded, and I'm not sure that it's going to have the impact that he believes it will.
Speaker 2
02:10:55
You're talking about Facebook and Google and so on?
Speaker 1
02:10:57
I'm talking about Google. Google. But the
Speaker 2
02:10:59
thing is, those research labs aren't... There's the big dream. And I'm usually a fan of no matter what the dream is, a big dream is a unifier.
Speaker 2
02:11:09
Because what happens is you have a lot of bright minds working together on a dream, and what results is a lot of adjacent ideas. I mean, this is how so much progress is made.
Speaker 1
02:11:20
Yeah, so I'm not saying the universities are leading, but I don't think those companies are leading in general either, because, you know, we saw this incredible spike in attendees at NeurIPS. And as I said in my January 1st review of 2020, 2020 will not be remembered as a watershed year for machine learning or AI.
Speaker 1
02:11:49
You know, nothing surprising happened, anyway, unlike when deep learning hit ImageNet. That was a shake-up. And there are a lot more people writing papers, but the papers are fundamentally boring and uninteresting. Incremental work.
Speaker 2
02:12:14
Yeah. Are there particular memories you have of Minsky or somebody else at MIT that stand out? Funny stories. I mean, unfortunately, he's another one who's passed away.
Speaker 2
02:12:26
You've known some of the biggest minds in AI.
Speaker 1
02:12:29
Yeah. And they did amazing things, and sometimes they were grumpy.
Speaker 2
02:12:36
Well, he was interesting because he was very grumpy. But that was his... I remember him saying in an interview that the key to success, or to keep being productive, is to hate everything you've ever done in the past.
Speaker 1
02:12:52
Maybe that explains the Perceptrons book. There it was, he told you exactly.
Speaker 2
02:12:58
But, meaning like, maybe that's the way to not take yourself too seriously; just always be moving forward. That was the idea.
Speaker 2
02:13:09
I mean, that crankiness, that's the scary part. So let me
Speaker 1
02:13:15
tell you, you know, what the real joy memories are about: having access to technology before anyone else has seen it. So I got to Stanford in 1977, and we had terminals that could show live video on them, a digital sound system. We had a Xerox graphics printer.
Speaker 1
02:13:45
We could print; it wasn't like a typewriter ball hitting characters, it could print arbitrary things, only in one bit, black or white, but arbitrary pictures. This was science-fiction sort of stuff. At MIT, the LISP machines, you know, they were the first personal computers, and they cost a hundred thousand dollars each.
Speaker 1
02:14:12
And, you know, if I got there early enough in the day, I got one for the day. I couldn't stand up; I had to keep working.
Speaker 1
02:14:18
Yeah. Yeah.
Speaker 2
02:14:21
Yeah. Yeah. Yeah. Yeah.
Speaker 2
02:14:21
So having that direct glimpse into the future.
Speaker 1
02:14:25
Yeah. And I've had email every day since 1977. And the host field was only eight bits, you know, so there weren't that many places, but I could send email to other people at a few places. So it was pretty exciting to be in that world, so different from what the rest of the world knew.
Speaker 2
02:14:50
Let me ask you, I'll probably edit this out, but just in case you have a story. I'm hanging out with Don Knuth for a while tomorrow. Did you ever get a chance, it's such a different world than yours.
Speaker 2
02:15:03
He's very much about theoretical computer science, the puzzles of computer science and mathematics, and you're so much about the magic of robotics, the practice of it. You mentioned him earlier, you know, about computation. Did your worlds cross?
Speaker 1
02:15:19
They did in a way. You know, I know him now, we talk. But let me tell you my Donald Knuth story. So, you know, besides analysis of algorithms, he's well known for writing TeX, which LaTeX is built on, the academic publishing system. He did that at the AI lab, and he would work overnight at the AI lab.
Speaker 1
02:15:45
And one night, the mainframe computer went down, and a guy named Robert Poole was there. He did his PhD at the Media Lab at MIT, and he was an engineer. And so he and I tracked down what the problem was: one of these big refrigerator-size or washing-machine-size disk drives had failed.
Speaker 1
02:16:13
And that's what brought the whole system down. So we got the panels pulled off, and we were pulling circuit cards out, and Donald Knuth, who's a really tall guy, walks in, and he's looking down at us: when will it be fixed? You know, he wanted to get back to writing his TeX system. And so we figured out it was a particular chip, a 7400-series chip, which was socketed.
Speaker 1
02:16:38
We popped it out, put a replacement in, put it back, and smoke comes out, because we put it in backwards, because we were so nervous with Knuth standing over us. Anyway, we eventually got it fixed and got the mainframe running again.
Speaker 2
02:16:53
So that was your little, when was that again?
Speaker 1
02:16:56
Well, it must have been before October '79, because we moved out of that building then. Probably sometime in '78, or early '79.
Speaker 2
02:17:05
Yeah, all those figures are just fascinating. All the people who passed through MIT, it's really fascinating. Let me ask you to put on your big wise man hat.
Speaker 2
02:17:19
Is there advice you can give to young people today, whether in high school or college, who are thinking about their career, thinking about life, how to live a life they're proud of, a successful life?
Speaker 1
02:17:36
Yeah, so many people ask me for advice, and I talk to a lot of people all the time. And there is no one way. You know, there's a lot of pressure to produce papers that will be acceptable and be published.
Speaker 1
02:18:00
Maybe I come from an age where I could be a rebel against that and still succeed. Maybe it's harder today. But I think it's important not to get too caught up with what everyone else is doing. Well, it depends on what you want in life.
Speaker 1
02:18:23
If you want to have real impact, you have to be ready to fail a lot of times, so you have to make a lot of unsafe decisions. And the only way to make that work is to keep doing it for a long time, and then one of them will work out.
Speaker 1
02:18:40
And that will make something successful,
Speaker 2
02:18:43
or not. Or you just may, you know, end up having a lousy career. I mean, it's certainly possible. Taking the risk is the thing.
Speaker 1
02:18:53
Yeah, but there's no way to make all safe decisions and actually really contribute.
Speaker 2
02:19:06
Do you think about your death, about your mortality?
Speaker 1
02:19:12
I gotta say when COVID hit, I did, because in the early days, we didn't know how bad it was going to be. That made me work on my book harder for a while. But then I'd started this company, and now I'm doing more than full-time at the company, so the book's on hold.
Speaker 1
02:19:28
But I do want to finish this book.
Speaker 2
02:19:30
When you think about it, are you afraid of it?
Speaker 1
02:19:35
I'm afraid of dribbling. Yeah. Of losing it.
Speaker 2
02:19:42
The details of, okay. Yeah. Yeah.
Speaker 2
02:19:45
But the fact that the ride ends?
Speaker 1
02:19:49
I've known that for a long time. So it's
Speaker 2
02:19:53
yeah, but there's knowing and knowing it's such
Speaker 1
02:19:55
a Yeah, and it
Speaker 2
02:19:58
really sucks.
Speaker 1
02:19:58
It feels a lot closer. So in my blog with my predictions, my sort of pushback against that was, I said I'm going to review these every year for 32 years. That puts me into my mid-nineties.
Speaker 2
02:20:17
Every time you write the blog posts, you're getting closer and closer to your own prediction. That's true. Of your death.
Speaker 1
02:20:23
Yeah. What do you
Speaker 2
02:20:25
hope your legacy is? You're one of the greatest roboticists and AI researchers of all time.
Speaker 1
02:20:34
What I hope is that I actually finish writing this book, and that there's one person who reads it and sees something about changing the way they're thinking, and that leads to the next big thing.
Speaker 2
02:20:56
And then they'll be on a podcast a hundred years from now, saying, I once read that book, and that changed everything. What do you think is the meaning of life?
Speaker 2
02:21:08
This whole thing, the existence, all the hurried things we do on this planet, what do you think is the meaning of it all?
Speaker 1
02:21:16
Well, I think we're all really bad at it.
Speaker 2
02:21:19
Life or finding meaning or both?
Speaker 1
02:21:21
Yeah, we get caught up in... it's easier to do the stuff that's immediate and not do the stuff that's not immediate. So the
Speaker 2
02:21:31
big picture, we're bad at. Yeah. Do you have a sense of what that big picture is?
Speaker 2
02:21:36
Like, do you ever look up at the stars and ask why the hell we're here?
Speaker 1
02:21:43
You know, my atheism tells me it's just random, but I want to understand in what way it's random, and that's what I talk about in this book: how order comes from disorder.
Speaker 2
02:22:00
But it kind of sprung up. Like, most of the whole thing is random, but there's this little pocket of complexity we call Earth. Why the hell does that happen?
Speaker 1
02:22:10
And what we don't know is how common those pockets of complexity are, or how often they arise, because they may not last forever.
Speaker 2
02:22:22
Which is more exciting slash sad to you: if we're alone, or if there's an infinite number
Speaker 1
02:22:30
of... Oh, I think it's impossible for me to believe that we're alone. That would just be too horrible, too cruel.
Speaker 2
02:22:41
That could be the sad thing: it could be a graveyard of intelligent civilizations.
Speaker 1
02:22:46
Oh, everywhere. Yeah, that may be the most likely outcome
Speaker 2
02:22:50
and for us too. Yeah, exactly. And all of this will be forgotten, including all the robots you build. Everything forgotten.
Speaker 1
02:23:01
Well, on average, everyone has been forgotten in history, right? Yeah.
Speaker 1
02:23:07
Most people are not remembered beyond a generation or two.
Speaker 2
02:23:12
I mean, yeah. Well, not just on average; basically, very close to 100% of the people who have ever lived are forgotten.
Speaker 1
02:23:18
Yeah, I mean, in the long arc of time. I don't know anyone alive who remembers my great-grandparents because we didn't meet them.
Speaker 2
02:23:28
Still, this life is pretty fun somehow. Yeah. Even with the immense absurdity and, at times, meaninglessness of it all.
Speaker 2
02:23:39
It's pretty fun. And for me, one of the most fun things is robots. And I've looked up to your work, I've looked up to you, for a long time.
Speaker 2
02:23:47
Rod, it's an honor that you would spend your valuable time with me today talking. It was an amazing conversation. Thank you so much for being here.
Speaker 1
02:23:56
No, thanks for talking with me. I've enjoyed it.
Speaker 3
02:24:00
Thanks for listening to this conversation with Rodney Brooks. To support this podcast, please check out our sponsors in the description. And now, let me leave you with the three laws of robotics from Isaac Asimov.
Speaker 3
02:24:12
One: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Two: a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. And three: a robot must protect its own existence as long as such protection does not conflict with the first or second law. Thank you for listening.
Speaker 3
02:24:39
I hope to see you.