Speaker 2
00:00
The following is a conversation with Charles Isbell, Dean of the College of Computing at Georgia Tech, a researcher and educator in the field of artificial intelligence, and someone who thinks deeply about what exactly the field of computing is and how we teach it. He also has a fascinatingly varied set of interests, including music, books, movies, sports, and history, which make him especially fun to talk with.
Speaker 2
00:28
When I first saw him speak, his charisma immediately took over the room. And I had a stupid excited smile on my face. And I knew I had to eventually talk to him on this podcast. Quick mention of each sponsor, followed by some thoughts related to the episode.
Speaker 2
00:44
First is Neuro, the maker of functional sugar-free gum and mints that I use to give my brain a quick caffeine boost. Second is Decoding Digital, a podcast on tech and entrepreneurship that I listen to and enjoy. Third is Masterclass, online courses that I watch from some of the most amazing humans in history. And finally, Cash App, the app I use to send money to friends for food and drinks.
Speaker 2
01:10
Please check out the sponsors in the description to get a discount and to support this podcast. As a side note, let me say that I'm trying to make it so that the conversations with Charles, Eric Weinstein, and Dan Carlin will be published before Americans vote for president on November 3rd. There's nothing explicitly political in these conversations, but they do touch on something in human nature that I hope can bring context to our difficult time, and maybe for a moment, allow us to empathize with people we disagree with. With Eric, we talk about the nature of evil.
Speaker 2
01:45
With Charles, besides AI and music, we talk a bit about race in America and how we can bring more love and empathy to our online communication. And with Dan Carlin, well, we talk about Alexander the Great, Genghis Khan, Hitler, Stalin, and all the complicated parts of human history in between, with a hopeful eye toward a brighter future for our humble little civilization here on Earth. The conversation with Dan will hopefully be posted tomorrow, on Monday, November 2nd. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or connect with me on Twitter at Lex Fridman.
Speaker 2
02:31
And now, here's my conversation with Charles Isbell. You've mentioned that you love movies and TV shows.
Speaker 1
02:41
Let's ask an easy question, but you have to be definitively, objectively conclusive. What are your top 3 movies of all time?
Speaker 3
02:49
So you're asking me to be definitive and to be conclusive. That's a little hard, I'm gonna tell you why. It's very simple.
Speaker 3
02:55
It's because movies is too broad of a category. I gotta pick sub-genres. But I will tell you that of those genres, I'll pick 1 or 2 from each of the genres. I'll get us to 3, so I'm not gonna cheat.
Speaker 3
03:06
So my favorite comedy of all time, which is probably my favorite movie of all time, is His Girl Friday, which is probably a movie that you've never heard of, but it's based on a play called The Front Page from, I don't know, the early 1900s. And the movie is a fantastic film.
Speaker 1
03:25
What's the story? Is it an independent film?
Speaker 3
03:28
No, no, no. What are we talking about? This is 1 of the movies that would have been very popular, it's a screwball comedy.
Speaker 3
03:33
You ever see Moonlighting, the TV show? You know what I'm talking about? So you've seen these shows where there's a man and a woman and they clearly are in love with 1 another and they're constantly fighting and always talking over each other. Banter, banter, banter, banter, banter.
Speaker 3
03:45
This was the movie that started all that, as far as I'm concerned. It's very much of its time. So it's, I don't know, it must have come out sometime between 1934 and 1939, I'm not sure exactly when the movie itself came out. It's black and white.
Speaker 3
03:59
It's just a fantastic film, it's hilarious. It's mostly conversation. Oh, not entirely, but mostly, mostly just a lot of back and forth. There's a story there, someone's on death row, and they're newspaper men, including her. They're all newspaper men.
Speaker 3
04:16
They were divorced, the editor, the publisher I guess, and the reporter, they were divorced, but they clearly, he's thinking, trying to get back together, and there's this whole other thing that's going on, but none of that matters, the plot doesn't matter. What matters is the role.
Speaker 1
04:30
It's just a little play in conversation.
Speaker 3
04:31
It's fantastic, And I just love everything about the conversation. Because at the end of the day, sort of narrative and conversation are the sort of things that drive me. And so I really like that movie for that reason.
Speaker 3
04:41
Similarly, I'm now gonna cheat and I'm gonna give you 2 movies as 1. And they're Crouching Tiger, Hidden Dragon and John Wick. Both relatively modern. John Wick, of course, is 1, 2, or 3.
Speaker 3
04:52
I love them all for different reasons, and they get increasingly more ridiculous. Kind of like loving Alien and Aliens, despite the fact they're 2 completely different movies. But the reason I put Crouching Tiger, Hidden Dragon and John Wick together is because I actually think they're the same movie, or what I like about them is the same, which is both of them create a world that you're coming into the middle of, and they don't explain it to you.
Speaker 3
05:17
But the story is done so well that you pick it up. So anyone who's seen John Wick, you know, you have these little coins and they're handed out and there are these rules, and apparently every single person in New York City is an assassin. There's like 2 people who come through who aren't, but otherwise they are. But there's this complicated world and everyone knows each other.
Speaker 3
05:34
They don't sit down and explain it to you, but you figure it out. Crouching Tiger Hidden Dragon's a lot like that. You get the feeling that this is chapter 9 of a 10-part story and you've missed the first 8 chapters and they're not gonna explain it to you, but there's this sort of rich world behind you.
Speaker 1
05:46
So I love it. You get pulled in anyway, like immediately.
Speaker 3
05:47
You get pulled in anyway. So it's just excellent storytelling in both cases and very, very different.
Speaker 1
05:51
And also you like the outfit, I assume. The John Wick outfit.
Speaker 3
05:54
Oh yeah, of course, of course. Yes, I think John Wick outfit. And so that's number 2.
Speaker 3
05:59
And then.
Speaker 1
05:59
But sorry to pause on that, martial arts? You have a long list of hobbies, like it scrolls off the page, but I didn't see martial arts as 1 of them.
Speaker 3
06:07
I do not do martial arts, but I certainly
Speaker 1
06:09
watch martial arts.
Speaker 3
06:10
Oh, I appreciate it very much. Oh, we could talk about every Jackie Chan movie ever made, and I would be on board with that.
Speaker 1
06:15
Like Rush Hour 2, like that kind of, the comedy of it, cop.
Speaker 3
06:19
Yes, yes. By the way, my favorite Jackie Chan movie would be Drunken Master 2. Known in the States usually as Legend of the Drunken Master.
Speaker 3
06:29
Actually, Drunken Master, the first 1, is the first kung fu movie I ever saw, but I did not know that.
Speaker 1
06:35
The first Jackie Chan movie?
Speaker 3
06:36
No, first 1 ever that I saw and remember. But I had no idea that that's what it was. I didn't know that was Jackie Chan.
Speaker 3
06:41
That was like his first major movie. I was a kid, It was done in the 70s. I only later rediscovered that that was actually.
Speaker 1
06:49
And he creates his own martial art by, was he actually drinking or was he play drinking?
Speaker 3
06:58
You mean as an actor or? No.
Speaker 1
07:00
I'm sure as an actor. He was in the 70s or whatever.
Speaker 3
07:04
He was definitely drinking. And in the end, he drinks industrial grade alcohol.
Speaker 1
07:09
Ah, yeah.
Speaker 3
07:10
Yeah, and has 1 of the most fantastic fights ever in that subgenre. Anyway, that's my favorite of his movies. But I'll tell you, the last movie is actually a movie called Nothing But a Man, which is a 1960s film that starred Ivan Dixon, who you'll know from Hogan's Heroes, and Abbey Lincoln.
Speaker 3
07:31
It's just a really small little drama. It's a beautiful story. But my favorite scenes, I'm cheating here, 1 of my favorite movies just for the ending is The Godfather. I think the last scene of that is just fantastic.
Speaker 3
07:45
It's the whole movie all summarized in just 8, 9 seconds. Godfather part 1? Part 1. How does it end?
Speaker 3
07:51
I don't think you need to worry about spoilers if you haven't seen The Godfather. Spoiler alert, it ends with the wife coming to Michael and he says, just this once I'll let you ask me my business. And she asks him if he did this terrible thing. And he looks her in the eye and he lies and he says, no.
Speaker 3
08:09
And she says, thank you. And she walks out the door, and you see her going out of the door, and all these people are coming in, and they're kissing Michael's hand and calling him Godfather. And then the camera switches perspective, so instead of looking at him, you're looking at her, and the door closes in her face, and that's the end of the movie. And that's the whole movie right there.
Speaker 1
08:34
Do you see parallels between that and your position as dean at Georgia Tech, Colonel? Just kidding, trick question.
Speaker 3
08:42
Sometimes, certainly if the door gets closed on me every once in a while.
Speaker 1
08:45
Okay, that was a rhetorical question. You've also mentioned that you, I think, enjoy all kinds of experiments, including on yourself. But I saw a video where you said you did an experiment where you tracked all kinds of information about yourself and a few others, sort of wiring up your home.
Speaker 1
09:05
And this little idea that you mentioned in that video, which is kind of interesting, is that you thought that 2 days' worth of data is enough to capture the majority of the behavior of a human being. First, can you describe what the heck you did to collect all the data, because it's fascinating, just the little details of how you collected that data, and also what your intuition behind the 2 days is.
Speaker 3
09:30
So, first off, it has to be the right 2 days. But I was thinking of a very specific experiment. There's actually a suite of them that I've been a part of, and other people have done this, of course.
Speaker 3
09:38
I just sort of dabbled in that part of the world. But to be very clear, the specific thing that I was talking about had to do with recording all the IR, the infrared, going on in my house. This was a long time ago, so everything's being controlled by pressing buttons on remote controls, as opposed to speaking to Alexa or Siri or someone like that.
Speaker 3
09:57
And I was just trying to figure out if you could get enough data on people to figure out what they were gonna do with their TVs or their lights. My house was completely wired up at the time. Like, you know, I'm about to watch a movie, or I'm about to turn on the TV or whatever, and just see what I could predict from it. It was kind of surprising, though it shouldn't have been.
Speaker 3
10:16
But that's all very easy to do, by the way, just capturing all the little stuff. I mean, it's a bunch of computer systems. It's really easy to capture, if you know what you're looking for. At Georgia Tech, long before I got there, we had this thing called the Aware Home, where everything was wired up, and you saw, you captured everything that was going on.
Speaker 3
10:29
Nothing even difficult, not with video or anything like that, just the way that the system was just capturing everything. So it turns out that, and I did this with myself and then I had students and they worked with many other people, and it turns out at the end of the day, people do the same things over and over and over again. So it has to be the right 2 days, like a weekend. But it turns out not only can you predict what someone's going to do next, at the level of what button they're gonna press next on a remote control, but you can do it with something really, really simple.
Speaker 3
11:01
Like you don't even need a hidden Markov model, it's just a Markov chain, simply: I press this, this is my prediction of the next thing. And it turns out you get 93% accuracy just by doing something very simple and stupid, just counting statistics. But what was actually more interesting is that you could use that information. This comes up again and again in my work.
Speaker 3
11:18
If you try to represent people or objects by the things they do, the things you can measure about them that have to do with action in the world, so a distribution over actions, and you try to represent them by the distribution of actions that are done on them, then you do a pretty good job of sort of understanding how people are, and they cluster remarkably well. In fact, irritatingly so. And so by clustering people this way, you can, maybe, you know, I got the 93% accuracy of what's the next button you're gonna press, but I can get 99% accuracy, or somewhere thereabouts, on the collections of things you might press. And it turns out the things that you might press are all related to each other in exactly the way that you would expect.
Speaker 3
12:01
So for example, all the numbers on a keypad, it turns out, all have the same behavior with respect to you as a human being. And so you would naturally cluster them together and you discover that numbers are all related to 1 another in some way and all these other things. And then, and here's the part that I think's important. I mean, you can see this in all kinds of things.
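As a rough illustration of the counting-based predictor Charles describes here, the following is a minimal Python sketch: a first-order model that tallies, for each button, which button tends to follow it, and predicts the most frequent successor. The button names and the event log are invented for illustration; this is not the actual Aware Home code.

```python
# Minimal sketch of a counting-based next-button predictor (first-order
# successor statistics). All button names and the event stream are made up.
from collections import defaultdict, Counter

def train_successor_counts(events):
    """events: ordered list of button presses, e.g. ['power', 'guide', '4']."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(events, events[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, last_button):
    """Return the most frequently observed successor of last_button, if any."""
    if last_button not in counts:
        return None
    return counts[last_button].most_common(1)[0][0]

def accuracy(counts, events):
    """Fraction of presses correctly predicted from the immediately preceding press."""
    correct = total = 0
    for prev, nxt in zip(events, events[1:]):
        total += 1
        if predict_next(counts, prev) == nxt:
            correct += 1
    return correct / total if total else 0.0

# Hypothetical usage: train on a logged weekend, then check held-out presses.
weekend_log = ['power', 'guide', '4', '7', 'ok', 'volume_up', 'volume_up', 'power']
model = train_successor_counts(weekend_log)
print(predict_next(model, 'volume_up'))   # most frequent successor in this toy log
print(accuracy(model, weekend_log))       # training accuracy on the toy log
```

The same counts can also be turned around to describe each button by the distribution of presses around it, which is the clustering idea mentioned above (all the keypad digits end up looking alike).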
Speaker 3
12:22
Every individual is different, but any given individual is remarkably predictable, because you keep doing the same things over and over again. And the 2 things that I've learned in the long time that I've been thinking about this are that people are easily predictable, and that people hate it when you tell them that they're easily predictable.
Speaker 1
12:39
But they are, and there you go. What about, let me play devil's advocate and, philosophically speaking, is it possible to say that what defines humans is the outliers? So even though some large percentage of our behaviors, whatever signal we measure, is the same and would cluster nicely, maybe it's the special moments when we break out of the routine that are the definitive things, and the way we break out of that routine for each 1 of us might be different?
Speaker 3
13:09
It's possible. I would say it a little differently, I think. I would say 2 things.
Speaker 3
13:13
1 is, I'm gonna disagree with the premise, I think, but that's fine. I think the way I would put it is, there are people who are very different from lots of other people, but they're not 0%, they're closer to 10%, right? So in fact, even if you do this kind of clustering of people, there'll turn out to be this small number of people who all behave like each other, even if they individually behave very differently from everyone else.
Speaker 3
13:37
So I think that's kind of important. But what you're really asking, I think, and I think this is really a question is, you know, what do you do when you're faced with the situation you've never seen before? What do you do when you're faced with an extraordinary situation maybe you've seen others do and you're actually forced to do something and you react to that very differently and that is the thing that makes you human. I would agree with that, at least at a philosophical level, that it's the times when you are faced with something difficult, a decision that you have to make, where the answer isn't easy, even if you know what the right answer is, that's sort of what defines you as the individual, and I think what defines people broadly.
Speaker 3
14:11
It's the hard problem, it's not the easy problem. It's the thing that's gonna hurt you. It's not even that it's difficult, it's just that you know that the outcome is going to be highly suboptimal for you. And I do think that that's a reasonable place to start for the question of what makes us human.
Speaker 1
14:34
So before we explore the different ideas underlying interactive artificial intelligence, which you are working on, let me just go along this thread and skip to kind of our world of social media, which is something that, at least on the artificial intelligence side, you think about. There's a popular narrative, I don't know if it's true, that we have these silos in social media and we have these clusterings, as you're kind of mentioning. And the idea, along that narrative, is that we wanna break each other out of those silos so we can be empathetic to other people.
Speaker 1
15:10
If you're a Democrat, you're empathetic to the Republican. If you're a Republican, you're empathetic to a Democrat. Those are just 2 silly bins that we seem to be very excited about, but there's other binnings that we can think about. Is there, from an artificial intelligence perspective, because you're just saying we cluster along the data, but then interactive artificial intelligence is referring to throwing agents into that mix, AI systems in that mix, helping us, interacting with us humans and maybe getting us out of those silos.
Speaker 1
15:43
Is that something that you think is possible? Do you see a hopeful possibility for artificial intelligence systems in these large networks of people to get us outside of our habits in at least the idea space to where we can sort of be empathetic to other people's lived experiences, other people's points of view, you know, all that kind of stuff.
Speaker 3
16:11
Yes, and I actually don't think it's that hard. Well, it's not hard in this sense. So imagine that you can, now let's make life simple for a minute.
Speaker 3
16:20
Let's assume that you can do a kind of partial ordering over ideas or clusterings of behavior. It doesn't even matter what I mean here. So long as there's some way that this is a cluster, this is a cluster, there's some edge between them, right? And this is kind of, they don't quite touch even, or maybe they come very close.
Speaker 3
16:36
If you can imagine that conceptually, then the way you get from here to there is not by jumping from here to there; the way you get from here to there is you find the edge and you move slowly together, right? And I think that machines are actually very good at that sort of thing, once we can kind of define the problem either in terms of behavior or ideas or words or whatever. So it's easy in the sense that if you already have the network and you know the relationships, you know the edges and sort of the strengths on them, and you kind of have some semantic meaning for them, the machine doesn't have to, you do as the designer. Then yeah, I think you can kind of move people along and sort of expand them.
Speaker 3
17:06
But it's harder than that. And the reason it's harder than that, or sort of coming up with the network structure itself is hard, is because I'm going to tell you a story that someone else told me. And I don't, I may get some of the details a little bit wrong, but it's roughly, it roughly goes like this. You take 2 sets of people from the same backgrounds, and you want them to solve a problem.
Speaker 3
17:27
So you separate them up, which we do all the time, right, oh, you know, we're gonna break out in the, we're gonna break out groups, you're gonna go over there and you're gonna talk about this, you're gonna go over there and you're gonna talk about this. And then you have them sort of in this big room, but far apart from 1 another, and you have them sort of interact with 1 another. When they come back to talk about what they learned, you wanna merge what they've done together, it can be extremely hard because they don't, they basically don't speak the same language anymore. Like when you create these problems and you dive into them, you create your own language.
Speaker 3
17:53
So the example this 1 person gave me, which I found kind of interesting because we were in the middle of that at the time, was they're sitting over there and they're talking about these rooms that you can see, but you're seeing them from different vantage points, depending upon which side of the room you're on. They can see a clock very easily, and so they start referring to the room as the 1 with the clock. This group over here, looking at the same room, they can't see the clock, it's not in their line of sight or whatever, so they end up referring to it some other way. When they get back together and they're talking about things, they're referring to the same room and they don't even realize they're referring to the same room.
Speaker 3
18:28
In fact, this group doesn't even see that there's a clock there and this group doesn't see whatever it is. The clock on the wall is the thing that stuck with me. So if you create these different silos, the problem isn't that the ideologies disagree. It's that you're using the same words and they mean radically different things.
Speaker 3
18:42
The hard part is just getting them to agree on the, well, maybe we'd say the axioms in our world, but just getting them to agree on some basic definitions. Because right now, they're just completely talking past each other. Getting them to meet, getting them to interact, that may not be that difficult. Getting them to see where their language is leading them to talk past 1 another, that's the hard part.
Speaker 1
19:07
It's a really interesting question to me. It could be on the layer of language, but it feels like there's multiple layers to this. Like it could be worldview, it could be, I mean, it all boils down to empathy, being able to put yourself in the shoes of the other person, to learn the language, to learn visually how they see the world, to learn the, I mean, I experience this now with trolls, the degree of humor in that world. For example, I talk about love a lot. I'm very lucky to have this amazing community of loving people, but whenever I encounter trolls, they always roll their eyes at the idea of love because it's so, quote unquote, cringe.
Speaker 1
19:48
So they show love by derision, I would say. And I think about, on the human level, that's a whole nother discussion, that's psychology, that's sociology, so on. But I wonder if AI systems can help somehow and bridge the gap of what is this person's life like? Encourage me to just ask that question, to put myself in their shoes, to experience the agitations, the fears, the hopes they have, to experience, even just to think about what was their upbringing like, like having a single parent home or a shitty education or all those kinds of things, just to put myself in that mind space.
Speaker 1
20:38
It feels like that's really important for us to bring those clusters together, to find that similar language, but it's unclear how AI can help that because it seems like AI systems need to understand both parties first.
Speaker 3
20:52
So, the word understand there is doing a lot of work, right? Yes. So, do you have to understand it, or do you just simply have to note that there is something similar, a point to touch, right?
Speaker 3
21:03
So, you know, you use the word empathy, and I like that word for a lot of reasons. I think you're right in the way that you're using and the way that you're describing it, but let's separate it from sympathy, right? So, you know, Sympathy is feeling sort of for someone. Empathy is kind of understanding where they're coming from and how they feel, right?
Speaker 3
21:20
And for most people, those things go hand in hand. For some people, some are very good at empathy and very bad at sympathy. Some people cannot experience, well, my observation would be, I'm not a psychologist, my observation would be that some people seem incapable of feeling sympathy unless they feel empathy first. You can understand someone, understand where they're coming from and still think, no, I can't support that.
Speaker 3
21:43
It doesn't mean that the only way, because if that isn't the case, then what it requires is that the only way you can understand someone means you must agree with everything that they do. Which isn't right, right? And if the only way I can feel for someone is to completely understand them and make them like me in some way, well then we're lost, right? Because we're not all exactly like each other.
Speaker 3
22:09
I don't have to understand everything that you've gone through. It helps, clearly. But they're separable ideas, right? Even though they get clearly tangled up in 1 another.
Speaker 3
22:16
So what I think AI could help you do, actually, is if, and I'm being quite fanciful, as it were, but if you think of these as kind of, I understand how you interact, the words that you use, the actions you take, I have some way of doing this, let's not worry about what that is, but I can see you as a kind of distribution of experiences and actions taken upon you, things you've done and so on. And I can do this with someone else, and I can find the places where there's some kind of commonality, a mapping as it were, even if it's not total. If I think of it as a distribution, right, then I can take the cosine of the angle between you two, and if it's 0, you've got nothing in common. If it's 1, you're completely the same person.
Speaker 3
22:56
Well, you're probably not 1. You're almost certainly not 0. If I can find the place where there's overlap, then I might be able to introduce you on that basis, or connect you in that way, and make it easier for you to take that step of empathy. It's not impossible to do, although I wonder if it requires that everyone involved is at least interested in asking the question.
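As a rough illustration of the "cosine of the angle between two people" idea, here is a minimal Python sketch that represents each person as a distribution over actions and computes the cosine similarity between them, with 0 meaning nothing in common and 1 meaning identical behavior. The action names and counts are invented for illustration.

```python
# Minimal sketch: people as distributions over actions, compared by cosine.
import math

def cosine_similarity(a, b):
    """a, b: dicts mapping action -> count (or probability mass)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical people: the overlap (music, AI) is where an introduction could start.
person_1 = {'posts_about_music': 12, 'posts_about_ai': 30, 'posts_about_sports': 5}
person_2 = {'posts_about_music': 8, 'posts_about_cooking': 20, 'posts_about_ai': 3}

print(cosine_similarity(person_1, person_2))  # strictly between 0 and 1 here
```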
Speaker 3
23:21
So maybe the hard part is just getting them interested in asking the question. In fact, maybe if you can get them to ask the question, how are we more alike than we are different, they'll solve it themselves. Maybe that's the problem that AI should be working on, not telling you how you're similar or different, but just getting you to decide that it's worthwhile asking the question. So it feels like an economist's answer, actually.
Speaker 1
23:38
Well, people, okay, first of all, people like it when I disagree. So let me disagree slightly, which is: I think everything you said is brilliant, but I tend to believe, philosophically speaking, that people are interested underneath it all. And I would say that AI, the possibility that an AI system would show the commonality, is incredible.
Speaker 1
24:01
That's a really good starting point. I would say if, on social media, I could discover the common things, deep or shallow, between me and a person there's tension with, I think that my basic human nature would take over from there. And I think I'd enjoy that commonality, and there's something sticky about that that my mind will linger on, and that person in my mind will become warmer and warmer, and I'll start to feel more and more compassion towards them. I think for the majority of the population that's true, but that might be, that's a hypothesis.
Speaker 3
24:39
Yeah, I mean it's an empirical question, right? You'd have to figure it out. I mean I want to believe you're right, and so I'm gonna say that I think you're right.
Speaker 3
24:46
Of course, some people come to those things for the purpose of trolling, right? And it doesn't matter, they're playing a different game. But I don't know, my experience is it requires 2 things. It requires, In fact, maybe this is really at the end what you're saying, and I do agree with this for sure.
Speaker 3
25:03
So, it's hard to hold onto that kind of anger or to hold onto just the desire to humiliate someone for that long. It's just difficult to do. It takes a toll on you. But more importantly, we know this, both from people having done studies on it, but also from our own experiences, that it is much easier to be dismissive of a person if they're not in front of you, if they're not real.
Speaker 3
25:30
So much of the history of the world is about making people other, right? So if you're on social media, if you're on the web, if you're doing whatever on the internet, being forced to deal with someone as a person, some equivalent to being in the same room, makes a huge difference, because then you're, 1, you're forced to deal with their humanity because it's in front of you. The other is, of course, that they might punch you in the face if you go too far, so you know, both of those things kind of work together, I think, to the right end. So I think bringing people together is really a kind of substitute for forcing them to see the humanity in another person and to not be able to treat them as bits.
Speaker 3
26:07
It's hard to troll someone when you're looking them in the eye. This is very difficult to do.
Speaker 1
26:12
Agreed. Your broad set of research interests fall under interactive AI, as I mentioned, which is a fascinating set of ideas, and you have some concrete things that you're particularly interested in, but maybe could you talk about how you think about the field of interactive artificial intelligence?
Speaker 3
26:31
Sure, so let me say up front that if you look at, certainly my early work, but even if you look at most of it, I'm a machine learning guy. Right, I do machine learning. First paper I ever published was in NIPS.
Speaker 3
26:43
Back then it was NIPS, now it's NeurIPS. It's a long story there. Anyway, that's another thing. But so, I'm a machine learning guy, right?
Speaker 3
26:49
I believe in data, I believe in statistics, and all those kinds of things.
Speaker 3
26:52
And the reason I'm bringing that up is even though I'm a newfangled statistical machine learning guy and have been for a very long time, the problem I really care about is AI, right? I care about artificial intelligence. I care about building some kind of intelligent artifact, however that gets expressed, that would be at least as intelligent as humans and as interesting as humans, perhaps in their own way.
Speaker 1
27:16
So that's the deep underlying love and dream is the bigger AI. Yes, the AI. Whatever the heck that is.
Speaker 3
27:22
Yeah, the machine learning in some ways is a means to the end. It is not the end. And I don't understand how 1 could be intelligent without learning, So therefore I gotta figure out how to do that, right?
Speaker 3
27:32
So that's important. But machine learning, by the way, is also a tool. I said statistical because that's what most people think of themselves, machine learning people. That's how they think.
Speaker 3
27:39
I think Pat Langley might disagree, or at least 1980s Pat Langley might disagree with what it takes to do machine learning. But I care about the AI problem, which is why it's interactive AI, not just interactive ML. I think it's important to understand that, that there's a long-term goal here, which I will probably never live to see, but I would love to have been a part of, which is building something truly intelligent outside of ourselves.
Speaker 1
28:02
Can we take a tiny tangent? Or am I interrupting? Which is, is there something you can say concrete about the mysterious gap between the subset ML and the bigger AI?
Speaker 1
28:15
What's missing? What do you think? I mean, obviously it's totally unknown, not totally, but in part unknown at this time, but is it something like with Pat Langley's, is it knowledge, like expert system reasoning type of kind of thing?
Speaker 3
28:30
So AI is bigger than ML, but ML is bigger than AI. This is kind of the real problem here, is that they're really overlapping things that are really interested in slightly different problems. I tend to think of ML, and there are many people out there are gonna be very upset at me about this, but I tend to think of ML being much more concerned with the engineering of solving a problem.
Speaker 3
28:46
And AI about the sort of more philosophical goal of true intelligence, and that's the thing that motivates me, even if I end up finding myself living in this kind of engineering-ish space. I've now made Michael Jordan upset. But you know, To me, they just feel very different. You're just measuring them differently.
Speaker 3
29:04
Your sort of goals of where you're trying to be are somewhat different. But to me, AI is about trying to build that intelligent thing. And typically, but not always, for the purpose of understanding ourselves a little bit better. Machine learning is, I think, trying to solve the problem, whatever that problem is.
Speaker 3
29:20
Now, that's my take. Others, of course, would disagree.
Speaker 1
29:23
So on that note, so with the interactive AI, do you tend to, in your mind, visualize AI as a singular system, or is it as a collective, huge amount of systems interacting with each other? Like is the social interaction of us humans and of AI systems fundamental to intelligence?
Speaker 3
29:41
I think, well, it's certainly fundamental to our kind of intelligence, right? And I actually think it matters quite a bit. So the reason the interactive AI part matters to me is because I don't, this is gonna sound simple, but I don't care whether a tree makes a sound when it falls and there's no 1 around because I don't think it matters, right?
Speaker 3
30:03
If there's no observer in some sense. And I think what's interesting about the way that we're intelligent is we're intelligent with other people, right? Or other things anyway. And we go out of our way to make other things intelligent.
Speaker 3
30:15
We're hardwired to find intention even where there is no intention. That's why we anthropomorphize everything, I think anyway. I think the interactive AI part is, being intelligent in and of myself, in isolation, is a meaningless act in some sense.
Speaker 3
30:30
The correct answer is you have to be intelligent in the way that you interact with others. That's also efficient because it allows you to learn faster because you can import from past history. It also allows you to be efficient in the transmission of that. So we ask ourselves about me, am I intelligent?
Speaker 3
30:45
Clearly, I think so, but I'm also intelligent as a part of a larger species and group of people, and we're trying to move the species forward as well. And so I think that notion of being intelligent with others is kind of the key thing, because otherwise you come and you go, and then It doesn't matter. And so that's why I care about that aspect of it. And it has lots of other implications.
Speaker 3
31:07
1 is not just building something intelligent with others, but understanding that you can't always communicate with those others. They have been in a room where there's a clock on the wall that you haven't seen, which means you have to spend an enormous amount of time communicating with 1 another constantly in order to figure out what each other wants, right? So, I mean, this is why people project, right? You project your own intentions and your own reasons for doing things onto others as a way of understanding them so that you know how to behave.
Speaker 3
31:32
But by the way, you, completely predictable person, I don't know how you're predictable, I don't know you well enough, but you probably eat the same 5 things over and over again or whatever it is that you do, right? I know I do. If I'm going to a new Chinese restaurant, I will get General Gao's chicken because that's the thing that's easy to get. I will get hot and sour soup.
Speaker 3
31:49
People do the things that they do, but other people get the chicken and broccoli. I can push this analogy way too far. The chicken and broccoli.
Speaker 1
31:56
I don't know what's wrong with those people.
Speaker 3
31:57
I don't know what's wrong with them either.
Speaker 1
32:00
That's not good.
Speaker 3
32:00
We have all had our trauma. So they get their chicken and broccoli and their egg drop soup or whatever. We got to communicate and it's gonna change, right?
Speaker 3
32:08
So it's not, interactive AI is not just about learning to solve a problem or a task. It's about having to adapt that over time, over a very long period of time, and interacting with other people, who will themselves change. This is what we mean about things like adaptable models, right, that you have to have a model, and that model's gonna change. And by the way, it's not just the case that you're different from that person, but you're different from the person you were 15 minutes ago, or certainly 15 years ago, and I have to assume that you're at least gonna drift, hopefully not too many discontinuities, but you're gonna drift over time, and I have to have some mechanism for adapting to that, as you an individual over time and across individuals over time.
Speaker 1
32:46
On the topic of adaptive modeling, you talk about lifelong learning, which is, I think, a topic that's understudied, or maybe it's because nobody knows what to do with it. But if you look at Alexa, or most of our artificial intelligence systems that are primarily machine learning based systems or dialogue systems, all those kinds of things, they know very little about you in the lifelong learning sense, the sense in which we learn as humans, where we learn a lot about each other, not in the quantity of facts, but in the temporally rich set of information that seems to pick up the crumbs along the way, and that somehow seems to capture a person pretty well. Do you have any ideas how to do lifelong learning?
Speaker 1
33:41
Because it seems like most of the machine learning community does not.
Speaker 3
33:45
No, well by the way, not only does the machine learning community not spend a lot of time on lifelong learning, I don't think they spend a lot of time on learning period in the sense that they tend to be very task-focused. Everybody is overfitting to whatever problem it is they happen to have. They're over-engineering their solutions to the task.
Speaker 3
34:01
Even the people, and I think these people do, are trying to solve a hard problem of transfer learning, right? I'm going to learn on 1 task, then learn the other task. You still end up creating the task. It's like looking for your keys where the light is, because that's where the light is, right?
Speaker 3
34:12
It's not because the keys have to be there. I mean, 1 could argue that we tend to do this in general. We tend to kind of do it as a group. We tend to hill climb and get stuck in local optima.
Speaker 3
34:23
And I think we do this in the small as well. I think it's very hard to do. Because, so, Look, here's the hard thing about AI, right? The hard thing about AI is it keeps changing on us, right?
Speaker 3
34:34
You know, what is AI? AI is the art and science of making computers act the way they do in the movies, right? That's what it is, right? And, but beyond that, it's-
Speaker 1
34:43
And they keep coming out with new movies. Yes, and they just, right, exactly.
Speaker 3
34:47
We are driven by this kind of need to capture the sort of ineffable quality of who we are, which means that the moment you understand something, it's no longer AI, right? Well, like, we understand this, that's just, you take the derivative and you divide by 2 and then you average it out over time in the window, so therefore that's no longer AI.
Speaker 3
35:03
So the problem is unsolvable because it keeps kind of going away. This creates a kind of illusion, which I don't think is an entire illusion, that either there's very simple task-based things you can do very well and over-engineer, or there's all of AI, and there's like nothing in the middle. Like it's very hard to get from here to there, and it's very hard to see how to get from here to there.
Speaker 3
35:21
And I don't think that we've done a very good job of it because we get stuck trying to solve the small problem that's in front of it, myself included. I'm not gonna pretend that I'm better at this than anyone else. And of course, all the incentives in academia and in industry are set to make that very hard because you have to get the next paper out, you have to get the next product out, you have to solve this problem, and it's very sort of naturally incremental. And none of the incentives are set up to allow you to take a huge risk unless you're already so well established you can take that big risk.
Speaker 3
35:53
And if you're that well established that you can take that big risk, then you've probably spent much of your career taking these little risks, relatively speaking, And so you have got a lifetime of experience telling you not to take that particular big risk, right? So the whole system's set up to make progress very slow. That's fine, it's just the way it is. But it does make this gap seem really big, which is my long way of saying, I don't have a great answer to it, except that stop doing n equals 1.
Speaker 3
36:17
At least try to get n equals 2 and maybe n equals 7, so that you can say, I'm gonna, or maybe T is a better variable here, I'm not just gonna solve this problem, I'm gonna solve this problem and another problem. I'm not gonna learn just on you, I'm gonna keep living out there in the world and just see what happens, and we'll learn something as designers, and our machine learning algorithms and our AI algorithms can learn as well. But unless you're willing to build a system which you're gonna have live for months at a time in an environment that is messy and chaotic and that you cannot control, then you're never going to make progress in that direction.
Speaker 3
36:48
So I guess my answer to you is yes. My idea is that you should, it's not no, it's yes. You should be deploying these things and making them live for a month at a time, and be okay with the fact that it's gonna take you 5 years to do this. Not rerunning the same experiment over and over again and refining the machine so it's slightly better at whatever, but actually having it out there, living in the chaos of the world, and seeing what its learning algorithm, say, can learn, what data structures it can build, and how it can go from there.
Speaker 3
37:17
Without that, you're gonna be stuck ultimately.
Speaker 1
37:19
What do you think about the possibility of n equals 1 growing, it's probably a crude approximation, but growing like if you look at language models like GPT-3,
Speaker 1
37:31
If you just make it big enough, it'll swallow the world. Meaning, it'll solve all your T-to-infinity problems by just growing in size. Taking the small, over-engineered solution and just pumping it full of steroids in terms of compute, in terms of size of training data, and the Yann LeCun-style self-supervised, or OpenAI-style self-supervised, just throw all of YouTube at it, and it will learn how to reason, how to paint, how to create music, how to love, all of that by watching YouTube videos.
Speaker 3
38:06
I mean, I can't think of a more terrifying world to live in than a world that is based on YouTube videos. But yeah, I think the answer, I just kind of don't think that'll quite, well, it won't work that easily. You will get somewhere and you will learn something, which means it's probably worth it, but you won't get there.
Speaker 3
38:23
You won't solve the problem. You know, here's the thing. We build these things and we say we want them to learn, But what actually happens, and let's say they do learn, I mean, certainly every paper I've gotten published that things learn, I don't know about anyone else, but they actually change us, right? We react to it differently, right?
Speaker 3
38:41
So we keep redefining what it means to be successful, both in the negative, in that case, but also in the positive, in that, oh, well, this is an accomplishment. I'll give you an example, which is like the 1 you just described with GPT-3. Let's get completely out of machine learning. Well, not completely, but mostly out of machine learning.
Speaker 3
38:57
Think about Google. People were trying to solve information retrieval, the ad hoc information retrieval problem forever. I mean, first major book I ever read about it was what, 71, I think was when it came out? Anyway, it's, you know, we'll treat everything as a vector and we'll do these vector space models and whatever and that was all great.
Speaker 3
39:16
And we made very little progress. I mean, we made some progress. And then Google comes and makes the ad hoc problem seem pretty easy. I mean, it's not.
Speaker 3
39:25
There's lots of computers and databases involved, but, you know, and there's some brilliant algorithmic stuff behind it too, and some systems building. But the problem changed, right? If you've got a world that's that connected so that you have, you know, there are 10 million answers quite literally to the question that you're asking, then the problem wasn't give me the things that are relevant. The problem is don't give me anything that's irrelevant, at least in the first page, because nothing else matters.
Speaker 3
39:56
So Google is not solving the information retrieval problem, at least not on this web page. Google is minimizing false positives, which is not the same thing as getting an answer. It turns out it's good enough for what it is we wanna use Google for, but it also changes what the problem was we thought we were trying to solve in the first place. You thought you were trying to find an answer, but you're not, or you're trying to find the answer, but it turns out you're just trying to find an answer.
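As a rough illustration of the distinction being drawn here, below is a minimal Python sketch of precision at k (don't let anything irrelevant onto the first page) versus recall (find all the relevant documents). The ranked list and relevance labels are invented for illustration; this is not how Google actually scores results.

```python
# Toy contrast between "minimize false positives up top" and "find everything".

def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k results that are actually relevant."""
    top_k = ranked_ids[:k]
    return sum(1 for doc in top_k if doc in relevant_ids) / k

def recall(ranked_ids, relevant_ids):
    """Fraction of all relevant documents that appear anywhere in the ranking."""
    if not relevant_ids:
        return 0.0
    return sum(1 for doc in ranked_ids if doc in relevant_ids) / len(relevant_ids)

# Hypothetical ranking and relevance judgments.
ranking = ['d7', 'd2', 'd9', 'd4', 'd1', 'd8', 'd3']
relevant = {'d2', 'd7', 'd5', 'd6'}          # d5 and d6 never show up at all

print(precision_at_k(ranking, relevant, 3))  # 2/3: the first page looks clean enough
print(recall(ranking, relevant))             # 0.5: half the relevant docs are missing
```

A system can score well on the first number and poorly on the second, which is the sense in which the problem quietly changed.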
Speaker 3
40:20
Now, yes, it is true it was also very good at finding you exactly that webpage. Of course, you trained yourself to figure out what the keywords were to get you that webpage. But in the end, By having that much data, you've just changed the problem into something else. You haven't actually learned what you set out to learn.
Speaker 3
40:35
Now, the counter to that would be, maybe we're not doing that either. We just think we are. Because, you know, we're in our own heads. Maybe we're learning the wrong problem in the first place.
Speaker 3
40:44
But I don't think that matters. I think the point is, is that Google has not solved information retrieval. Google has done amazing service. I have nothing bad to say about what they've done.
Speaker 3
40:52
Lord knows my entire life is better because Google exists, if only for Google Maps. I don't think I've ever found this, but.
Speaker 1
40:59
Where is this? Like 95, I
Speaker 3
41:00
see 110 and I see, but where did 95 go? So I'm very grateful for Google, but they just have to make certain the first 5 things are right, and it doesn't matter if everything after that is wrong.
Speaker 3
41:12
Look, we're going off on a totally different topic here, but think about the way we hire faculty. It's exactly the same thing.
Speaker 1
41:21
Now you're getting controversial.
Speaker 3
41:22
I'm not getting controversial. It's exactly the same problem, right? It's minimizing false positives.
Speaker 3
41:30
We say things like we want to find the best person to be an assistant professor at MIT in the new College of Computing, which I will point out was founded 30 years after the College of Computing I'm a part of. Both of my alma mater, both
Speaker 1
41:46
of my teachers.
Speaker 3
41:47
I'm just saying, I appreciate all that they did and all that they're doing. Anyway, so we're gonna try to hire the best professor. That's what we say, the best person for this job.
Speaker 3
41:59
But that's not what we do at all, right? Do you know what percentage of faculty in the top 4 earn their PhDs from the top 4? Say in 2017, which is the most recent year for which I have data. Maybe a large percentage.
Speaker 3
42:14
About 60%. 60% of the faculty in the top 4 earn their PhDs from the top 4. This is computer science, for which there is no top 5.
Speaker 3
42:21
There's only a top 4, right? Because they're all tied for 1.
Speaker 1
42:23
For people who don't know, by the way, that would be MIT, Stanford, Berkeley, CMU. Yep.
Speaker 3
42:29
Georgia Tech. Number 8.
Speaker 1
42:31
Number 8, you're keeping track.
Speaker 3
42:34
Oh yes, it's a large part of my job. Number 5 is Illinois, number 6 is a tie with UW and Cornell, and Princeton and Georgia Tech are tied for 8, and UT Austin is number 10. Michigan's number 11, by the way. So if you look at the top 10, you know what percentage of faculty in the top 10 earn their PhDs from the top 10?
Speaker 1
42:52
65?
Speaker 3
42:54
Roughly, 65%. If you look at the top 55 ranked departments, 50% of the faculty earn their PhDs from the top 10. There's no universe in which all the best faculty, even just for R1 universities, the majority of them come from 10 places. There's just no way that's true, especially when you consider how small some of those universities are in terms of the number of PhDs they produce.
Speaker 3
43:20
Now, that's not a negative. I mean, it is a negative. It also has a habit of entrenching certain historical inequities and accidents. But What it tells you is, well, ask yourself the question.
Speaker 3
43:34
Why is it like that? Well, because it's easier. If we go all the way back to the 1980s, you know, there was a saying that, you know, nobody ever lost his job buying a computer from IBM. And it was true.
Speaker 3
43:46
And nobody ever lost their job hiring a PhD from MIT, right? If the person turned out to be terrible, well, you know, they came from MIT, what did you expect me to know? However, that same person coming from, pick whichever is your least favorite place that produces PhDs in, say, computer science, well, you took a risk, right? So all the incentives, particularly because you're only gonna hire 1 this year, well, now we're hiring 10, but you know, you're only gonna hire 1 or 2 or 3 this year, and by the way, when they come in, you're stuck with them for at least 7 years in most places, because that's before you know whether they're getting tenure or not.
Speaker 3
44:18
And if they get tenure, you're stuck with them for a good 30 years unless they decide to leave. That means the pressure to get this right is very high. So what are you gonna do? You're gonna minimize false positives.
Speaker 3
44:27
You don't care about saying no inappropriately. You only care about saying yes inappropriately. So all the pressure drives you into that particular direction. Google, not to put too fine a point on it, was in exactly the same situation with their search.
Speaker 3
44:41
It turns out you just don't want to give people the wrong page in the first 3 or 4 pages. And if there's 10 million right answers and 100 bazillion wrong answers, just make certain the wrong answers don't get up there, and who cares if the right answer was actually on the 13th page. A right answer, a satisficing answer, is number 1, 2, 3, or 4, so who cares?
Speaker 1
45:00
Or an answer that will make you discover something beautiful, profound, to your question.
Speaker 3
45:05
Well, that's a different problem, right?
Speaker 1
45:06
But isn't that the problem? Can we linger on this topic without sort of walking with grace? How do we get, for hiring faculty, how do we get that 13th page with a truly special person?
Speaker 1
45:25
I mean, it depends on the department. Computer science probably has those kinds of people. Like you have the Russian guy, Grigori Perelman, just these awkward, strange minds that don't know how to play the little game of etiquette that faculty have all somehow agreed on, converged on over the decades, how to play with each other. And who also, on top of that, is not from the top 4, top whatever number of schools, and maybe actually just says FU every once in a while to the traditions of old within the computer science community.
Speaker 1
46:05
Maybe talks trash about how machine learning is a total waste of time, and that's there on their resume. So how do you allow the system to give those folks a chance?
Speaker 3
46:19
Well, you have to be willing to take a certain kind of, without taking a particular position on any particular person, you'd have to take, you have to be willing to take risk, right? A small amount of risk. I mean, if we were treating this as a, well, as a machine learning problem, right?
Speaker 3
46:31
As a search problem, which is what it is. It's a search problem. If we were treating it that way, you would say, oh, well, the main thing is you want, you know, you've got a prior, you want some data, cause I'm Bayesian. If you don't want to do it that way, we'll just inject some randomness in and it'll be okay.
Speaker 3
46:44
The problem is that feels very, very hard to do with people. All the incentives are wrong there. But it turns out, and let's say that's the right answer, let's just grant, for the sake of argument, that injecting randomness into the system at that level, for who you hire, is just not worth doing because the price is too high or the cost is too high. If we had infinite resources, sure, but we don't.
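As a rough illustration of the "inject some randomness" idea, here is a minimal Python sketch in the style of an epsilon-greedy selection rule: most of the time pick the candidate your scoring model likes best, but occasionally take a uniform draw from the wider pool. The candidate pool and scoring function are entirely hypothetical, not a claim about how any committee works.

```python
# Epsilon-greedy sketch: exploit a score most of the time, explore occasionally.
import random

def select_candidate(pool, score, epsilon=0.1, rng=random.Random(0)):
    """pool: list of candidates; score: candidate -> estimated quality."""
    if rng.random() < epsilon:
        return rng.choice(pool)      # exploration: uniform noise in the system
    return max(pool, key=score)      # exploitation: minimize false positives

# Hypothetical usage: candidates carry a 'pedigree' score a committee would
# normally maximize; epsilon occasionally overrides it.
pool = [{'name': 'A', 'pedigree': 0.9},
        {'name': 'B', 'pedigree': 0.6},
        {'name': 'C', 'pedigree': 0.4}]
pick = select_candidate(pool, score=lambda c: c['pedigree'], epsilon=0.2)
print(pick['name'])
```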
Speaker 3
47:05
And also you've got to teach people. So, you know, you're ruining other people's lives if you get it too wrong. But we've taken that principle, even if I grant it, and pushed it all the way back, right? So we could have a better pool than we have of people we look at and give an opportunity to.
Speaker 3
47:25
If we do that, then we have a better chance of finding that. Of course, that just pushes the problem back another level. But let me tell you something else. You know, I did a sort of study, I call it a study, I called up 8 of my friends and asked them for all of their data for graduate admissions, but then someone else followed up and did an actual study.
Speaker 3
47:41
And it turns out that I can tell you how everybody gets into grad school, more or less. More or less. You basically admit everyone from places ranked higher than you. You admit most people from places ranked around you, and you admit almost no one from places ranked below you, with the exception of the small liberal arts colleges that aren't ranked at all, like Harvey Mudd, because they don't have PhD programs, so they aren't ranked.
Speaker 3
48:00
This is all CS. Which means the decision of whether, you know, you become a professor at Cornell was determined when you were 17, by what you knew about where to go to undergrad and what to do. So if we can push these things back a little bit and just make the pool a little bit bigger, at least you raise the probability that you will be able to see someone interesting and take the risk. The other answer to that question, by the way, which you could argue is the same thing, is you either adjust the pool so the probabilities go up, which is a way of injecting a little bit of uniform noise into the system, as it were, or you change your loss function.
Speaker 3
48:39
You just let yourself be measured by something other than whatever it is that we're measuring ourselves by now. I mean, US News and World Report, every time they change their formula for determining rankings, they move entire universities to behave differently, because rankings matter.
Speaker 1
48:58
Can you talk trash about those rankings for a second? No, I'm joking about talking trash. I actually, it's so funny how, from my perspective, from a very shallow perspective, how dogmatic, like how much I trust those rankings.
Speaker 1
49:11
They're almost ingrained in my head. I mean, at MIT, everybody kind of, it's a propagated, mutually agreed upon, like idea that those rankings matter. And I don't think anyone knows what they're, like most people don't know what they're based on. And what are they exactly based on and what are the flaws in that?
Speaker 3
49:34
Well, so it depends on which rankings you're talking about. Do you wanna talk about computer science or you wanna talk about universities?
Speaker 1
49:40
Computer science. US News, isn't that the main 1?
Speaker 3
49:43
Yeah, it's US News. The only 1 that matters is US News. Nothing else matters.
Speaker 3
49:46
Sorry, csrankings.org, but nothing else matters but US News. So US News has a formula that it uses for many things, but not for computer science, because computer science is considered a science, which is absurd. So the rankings for computer science are 100% reputation. So 2 people at each department, it's not really a department, but whatever, at each department basically rank everybody.
Speaker 3
50:13
Slightly more complicated than that, but whatever, they rank everyone. And then those things are put together and somehow that ranks them all.
Speaker 1
50:19
So that means, how do you improve reputation? How do you move up and down the space of reputation?
Speaker 3
50:25
Yes, that's exactly the question. Twitter? It can help.
Speaker 3
50:30
I can tell you how Georgia Tech did it, or at least how I think Georgia Tech did it, because Georgia Tech is actually the case to look at. Not just because I'm at Georgia Tech, but because Georgia Tech is the only computing unit that was not in the top 20 that has made it into the top 10. It's also the only one in the last 2 decades, I think, that moved up within the top 10,
Speaker 3
50:50
as opposed to having someone else move down. So we used to be number 10, and then we became number 9 because UT Austin went down slightly, and then we were tied for ninth, because that's how rankings work. And we moved from 9 to 8 because our raw score moved up a point.
Speaker 3
51:06
So there's something about Georgia Tech, computer science, or computing anyway. I think it's because we have shown leadership at every crisis point, right? So we created a college, the first public university to do it, the second university to do it overall, after CMU, which is number 1. I also think it's no accident that CMU is the largest and we're, depending upon how you count and depending on exactly where MIT ends up with its final College of Computing, second or third largest.
Speaker 3
51:31
I don't think that's an accident. We've been doing this for a long time. But in the 2000s, when there was a crisis about undergraduate education, Georgia Tech took a big risk and succeeded at rethinking undergrad education in computing. I think we created these schools at a time when most public universities were, in a way, afraid to do it.
Speaker 3
51:50
We did the online master's. And that mattered because people were trying to figure out what to do with MOOCs and so on. I think it's about being observed by your peers and having an impact. I mean, that is what reputation is, right?
Speaker 3
52:04
So the way you move up in the reputation rankings is by doing something that makes people turn and look at you and say, that's good, they're better than I thought. Beyond that, it's just inertia. And there's a huge hysteresis in the system, right? Like, I mean, this may be apocryphal, but there was a major or a department that MIT was ranked number 1 in and they didn't even have it, right?
Speaker 3
52:27
It's just about what people think. I don't know if that's true, but someone said that to me anyway. But it's a thing, right? It's all about reputation. Of course MIT is great, because MIT is great.
Speaker 3
52:36
It's always been great. By the way, because MIT is great, the best students come, which keeps it being great. I mean, it's just a positive feedback loop. It's not surprising.
Speaker 3
52:45
I don't think it's wrong.
Speaker 1
52:46
Yeah, but it's almost like a narrative, like it doesn't actually have to be backed by reality. And, you know, not to say anything bad about MIT, but it does feel like we're playing in the space of narratives, not the space of something grounded. And one of the surprising things, when I showed up at MIT, with all the students I've worked with and all the research I've done, is that they're the same people I've met in other places.
Speaker 3
53:18
I mean, what MIT has going for it, well, MIT has many things going for it, but one of the things MIT has going for it is
Speaker 1
53:22
Nice logo.
Speaker 3
53:23
Is a nice logo, it's a lot better than it was when I was here. Nice colors too. Terrible, terrible name for a mascot. But the thing that MIT has going for it is it really does get the best students.
Speaker 3
53:36
It just doesn't get all of the best students. There are many more best students out there, right? And the best students wanna be here because it's the best place to be, or one of the best places to be, and it's just a sort of positive feedback. But you said something earlier which I think is worth examining for a moment, right?
Speaker 3
53:52
You said, I forget the word you used, but you said we're living in the space of narrative as opposed to something objective. Narrative is objective. I mean, one could argue that the only thing that we do as humans is narrative. We just build stories to explain why we do this.
Speaker 3
54:07
Someone once said to me, but wait, there's nothing objective about that. No, it's a completely objective measure. It's an objective measure of the opinions of everybody else. Now, is that physics?
Speaker 3
54:19
I don't know. Tell me something you think is actually objective and measurable in a way that makes sense. Like cameras. Do you know that, I mean, you're getting me off on something here, but do you know that cameras, which are just capturing reflected light and putting it on film, did not work for dark-skinned people until like the 1970s? You know why?
Speaker 3
54:43
Because you were building cameras for the people who were gonna buy cameras, who all, at least in the United States and Western Europe, were relatively light-skinned. Turns out, they took terrible pictures of people who look like me. That got fixed with better film and whole new processes. Do you know why?
Speaker 3
55:00
Because furniture manufacturers wanted to be able to take pictures of mahogany furniture, right? Because candy manufacturers wanted to be able to take pictures of chocolate. Now, the reason I bring that up is because you might think that cameras are objective. They're objective, they're just capturing light.
Speaker 3
55:19
No, they're made, they're doing the things that they're doing, based upon decisions by real human beings to privilege, if I may use that word, some physics over others. Because it's an engineering problem, there are trade-offs, right? So I can either worry about this part of the spectrum or that part of the spectrum. This costs more, that costs less, this costs the same, but I have more people paying money over here, right?
Speaker 3
55:40
And it turns out that if a giant conglomerate demands that you do something different and it's gonna involve all kinds of money for you, suddenly the trade-offs change, right? And so there you go. I actually don't know how I ended up there. Oh, it's because of this notion of objectiveness, right?
Speaker 3
55:54
So even the objective isn't objective, because at the end you've gotta tell a story, you've gotta make decisions, you've gotta make trade-offs, and what else is engineering other than that? So I think that the rankings capture something. They just don't necessarily capture what people assume they capture.
Speaker 1
56:11
Just to linger on this idea, why are there not more people who just play with whatever that narrative is, have fun with it, excite the world, whether it's in the Carl Sagan style, that calm, sexy voice explaining the stars and all the romantic stuff, or the Elon Musk style, dare I even say Donald Trump, where you're trolling and shaking up the system and just saying controversial things? I talked to Lisa Feldman Barrett, a neuroscientist who just enjoys playing with controversy, who finds the counterintuitive ideas in her particular science and throws them out there and sees how they play in the public discourse. Why don't we see more of that?
Speaker 1
57:00
And why doesn't academia attract an Elon Musk type?
Speaker 3
57:03
Well, tenure is a powerful thing that allows you to do whatever you want. But getting tenure typically requires you to be relatively narrow, right? Because people are judging you.
Speaker 3
57:14
Well, I think the answer is we have told ourselves a story, a narrative, that what you just described is vulgar. It's certainly unscientific, right? And it is easy to convince yourself that in some ways you're the mathematician, right? The fewer there are in your major, the more that proves your purity, right?
Speaker 3
57:41
So once you tell yourself that story, then it is beneath you to do that kind of thing, right? I think that's wrong. And by the way, everyone doesn't have to do this. Not everyone is good at it, and not everyone, even if they would be good at it, would enjoy it.
Speaker 3
57:56
So it's fine. But I do think you need some diversity in the way that people choose to relate to the world as academics, because I think the great universities are the ones that engage with the rest of the world. A great university is a home for public intellectuals.
Speaker 3
58:15
And in 2020, being a public intellectual probably means being on Twitter. Whereas of course, that wasn't true 20 years ago, because Twitter wasn't around 20 years ago. And if it was, it wasn't around in a meaningful way. I don't actually know how long Twitter's been around.
Speaker 3
58:28
As I get older, I find that my notion of time has gotten worse and worse. Like, Google really has been around that long? Anyway, the point is that I think we sometimes forget that a part of our job is to impact the people who aren't in the world that we're in, and that
Speaker 3
58:46
that's the point of being at a great place and being a great person, frankly.
Speaker 1
58:50
There's an interesting force in terms of public intellectuals. You know, forget Twitter, we could look at just online courses that are in some part public-facing. There is a kind of force that pulls you back. Let me just call it out, because I don't give a damn at this point.
Speaker 1
59:09
There's a little bit of, all of us have this, but certainly faculty have this, which is jealousy. It's jealousy of whoever's popular, of being a good communicator, of exciting the world with their science. And of course, when you excite the world with the science, it's not peer-reviewed or clean; it all sounds like bullshit. It's like a TED Talk.
Speaker 1
59:35
And people roll their eyes, and they hate that a TED Talk gets millions of views or something like that. And then everybody pulls each other back. There's this force that just kind of, it's hard to stand out unless you win a Nobel Prize or whatever. It's only when you get senior enough that you just stop giving a damn.
Speaker 1
59:54
But just like you said, even when you get tenure, that was always the surprising thing to me.