Speaker 1
00:00
The following is a conversation with Matt Botvinick, Director of Neuroscience Research at DeepMind. He's a brilliant, cross-disciplinary mind navigating effortlessly between cognitive psychology, computational neuroscience, and artificial intelligence. Quick summary of the ads. 2 sponsors, The Jordan Harbinger Show and Magic Spoon Cereal.
Speaker 1
00:23
Please consider supporting the podcast by going to jordanharbinger.com slash Lex and also going to magicspoon.com slash Lex and using code Lex at checkout after you buy all of their cereal. Click the links, buy the stuff, it's the best way to support this podcast and the journey I'm on. If you enjoy this podcast, subscribe on YouTube, review it with 5 stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman, spelled, surprisingly, without the E, just F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation.
Speaker 1
01:07
This episode is supported by the Jordan Harbinger Show. Go to jordanharbinger.com slash Lex. It's how he knows I sent you. On that page, subscribe to his podcast on Apple Podcasts, Spotify, and you know where to look.
Speaker 1
01:24
I've been binging on his podcast. Jordan is a great interviewer and even a better human being. I recently listened to his conversation with Jack Barsky, former sleeper agent for the KGB in the 80s and author of Deep Undercover, which is a memoir that paints yet another interesting perspective on the Cold War era. I've been reading a lot about the Stalin and then Gorbachev and Putin eras of Russia, but this conversation made me realize that I need to do a deep dive into the Cold War era to get a complete picture of Russia's recent history.
Speaker 1
01:57
Again, go to jordanharbinger.com slash Lex, subscribe to his podcast. That's how he knows I sent you. It's awesome, you won't regret it. This episode is also supported by Magic Spoon. Low carb, keto friendly, super amazingly delicious cereal.
Speaker 1
02:15
I've been on a keto or very low carb diet for a long time now. It helps with my mental performance. It helps with my physical performance, even during this crazy push up, pull up challenge I'm doing, including the running. It just feels great.
Speaker 1
02:30
I used to love cereal. Obviously, I can't have it now because most cereals have a crazy amount of sugar, which is terrible for you. So I quit it years ago. But Magic Spoon, amazingly, somehow, is a totally different thing.
Speaker 1
02:45
0 sugar, 11 grams of protein, and only 3 net grams of carbs. It tastes delicious. It has a lot of flavors, 2 new ones, including peanut butter, but if you know what's good for you, you'll go with cocoa. My favorite flavor and the flavor of champions.
Speaker 1
03:04
Click the magicspoon.com slash Lex link in the description and use code Lex at checkout for free shipping and to let them know I sent you. They've agreed to sponsor this podcast for a long time. They're an amazing sponsor and an even better cereal, I highly recommend it. It's delicious, it's good for you, you won't regret it.
Speaker 1
03:24
And now, here's my conversation with Matt Botvinick.
Speaker 2
03:29
How much of the human brain do you think we understand?
Speaker 3
03:33
I think we're at a weird moment in the history of neuroscience in the sense that I feel like we understand a lot about the brain at a very high level, but at a very coarse level.
Speaker 2
03:52
When you say high level, what are you thinking? Are you thinking functional? Are you thinking structurally?
Speaker 3
03:55
So in other words, what is the brain for? You know, what kinds of computation does the brain do?
Speaker 3
04:04
You know, what kinds of behaviors would we have to explain if we were gonna look down at the mechanistic level? And at that level, I feel like we understand much, much more about the brain than we did when I was in high school. But it's almost like we're seeing it through a fog. It's only at a very coarse level.
Speaker 3
04:26
We don't really understand what the neuronal mechanisms are that underlie these computations. We've gotten better at saying, you know, what are the functions that the brain is computing that we would have to understand, you know, if we were going to get down to the neuronal level. And at the other end of the spectrum, we, you know, in the last few years, incredible progress has been made in terms of technologies that allow us to see, actually literally see in some cases, what's going on at the single unit level, even the dendritic level. And then there's this yawning gap in between.
Speaker 2
05:05
Well, that's interesting. So at the high level, so that's almost a cognitive science level? Yeah, yeah.
Speaker 2
05:09
And then at the neuronal level, that's neurobiology and neuroscience, just studying single neurons, the synaptic connections, and all the dopamine, all the kinds of neurotransmitters.
Speaker 3
05:19
1 blanket statement I should probably make is that as I've gotten older, I have become more and more reluctant to make a distinction between psychology and neuroscience. To me, the point of neuroscience is to study what the brain is for.
Speaker 3
05:42
If you're a nephrologist and you want to learn about the kidney, you start by saying, what is this thing for? Well, it seems to be for taking blood on 1 side that has metabolites in it that shouldn't be there, sucking them out of the blood while leaving the good stuff behind, and then excreting that in the form of urine. That's what the kidney is for. It's like obvious.
Speaker 3
06:10
So the rest of the work is deciding how it does that. And this, it seems to me, is the right approach to take to the brain: you say, well, what is the brain for? The brain, as far as I can tell, is for producing behavior. It's for going from perceptual inputs to behavioral outputs, and the behavioral outputs should be adaptive.
Speaker 3
06:31
So that's what psychology is about. It's about understanding the structure of that function. And then the rest of neuroscience is about figuring out how those operations are actually carried out at a mechanistic level.
Speaker 2
06:44
That's really interesting. But unlike the kidney, the brain, the gap between the electrical signal and behavior... So you truly see neuroscience as the science that touches behavior, how the brain generates behavior, or how the brain converts raw visual information into understanding. Like, you basically see cognitive science, psychology, and neuroscience as all 1 science.
Speaker 3
07:15
Yeah. It's a personal statement.
Speaker 2
07:19
I don't mean to. Is that a hopeful or a realistic statement? So certainly you will be correct in your feeling in some number of years, but that number of years could be 200, 300 years from now.
Speaker 3
07:31
Oh, well, there's a...
Speaker 2
07:33
Is that aspirational, or is that a pragmatic engineering feeling that you have?
Speaker 3
07:41
It's both, in the sense that this is what I hope and expect will bear fruit over the coming decades. But it's also pragmatic in the sense that I'm not sure what we're doing in either psychology or neuroscience if that's not the framing. I don't know what it means to understand the brain if part of the enterprise is not about understanding the behavior that's being produced.
Speaker 2
08:19
I mean, yeah, but I would compare it to maybe astronomers looking at the movement of the planets and the stars without any interest in the underlying physics, right? And I would argue that at least in the early days, there's some value to just tracing the movement of the planets and the stars without thinking about the physics too much, because it's such a big leap to start thinking about the physics before you even understand the basic structural elements of it. Oh, I agree with that, I agree.
Speaker 2
08:50
But you're saying in the end the goal should be Yeah. To deeply understand.
Speaker 3
08:54
Well, right, and I think, so I thought about this a lot when I was in grad school, because a lot of what I studied in grad school was psychology. And I found myself a little bit confused about what it meant to, it seems like what we were talking about a lot of the time were virtual causal mechanisms. Like, oh, well, you know, attentional selection then selects some object in the environment, and that is then passed on to the motor, you know, information about that is passed on to the motor system. But these are virtual mechanisms.
Speaker 3
09:29
These are, you know, they're metaphors. There's no reduction going on in that conversation to some physical mechanism that, you know, which is really what it would take to fully understand, you know, how behavior is arising. But the causal mechanisms are definitely neurons interacting. I'm willing to say that at this point in history.
Speaker 3
09:53
So in psychology, at least for me personally, there was this strange insecurity about trafficking in these metaphors, which were supposed to explain the function of the mind. If you can't ground them in physical mechanisms, then what is the explanatory validity of these explanations? And I managed to soothe my own nerves by thinking about the history of genetics research. So I'm very far from being an expert on the history of this field, but I know enough to say that Mendelian genetics preceded Watson and Crick.
Speaker 3
10:42
And so there was a significant period of time during which people were productively investigating the structure of inheritance using what was essentially a metaphor, the notion of a gene. Oh, genes do this and genes do that, but where are the genes? They're sort of an explanatory thing that we made up. And we ascribed to them these causal properties.
Speaker 3
11:08
Oh, there's a dominant, there's a recessive, and then they recombine. And then later, there was a kind of blank there that was filled in with a physical mechanism. That connection was made. But it was worth having that metaphor, because that gave us a good sense of what kind of causal mechanism we were looking for.
Speaker 3
11:34
Right?
Speaker 2
11:40
And the fundamental metaphor of cognition, you said, is the interaction of neurons. Is that, what is the metaphor?
Speaker 3
11:42
No, no, the metaphor, the metaphors we use in cognitive psychology are things like attention, the way that memory works. I retrieve something from memory. A memory retrieval occurs.
Speaker 3
12:01
What is that? That's not a physical mechanism that I can examine in its own right, but it's still worth having that metaphorical level.
Speaker 2
12:14
Yeah, I misunderstood, actually. So the higher-level abstractions are the metaphors that are most useful. But what about, so how does that connect to the idea that that arises from interaction of neurons?
Speaker 2
12:34
Is the interaction of neurons also not a metaphor to you? Or is it literally, that's no longer a metaphor, that's already the lowest level of abstraction that could actually be directly studied?
Speaker 3
12:50
Well, I'm hesitating because I think what I wanna say could end up being controversial. So what I wanna say is, yes, the interactions of neurons, that's not metaphorical. That's a physical fact.
Speaker 3
13:05
That's where the causal interactions actually occur. Now, I suppose you could say, well, even that is metaphorical relative to the quantum events that underlie, I don't wanna go down that rabbit hole.
Speaker 2
13:17
It's always turtles on top of turtles.
Speaker 3
13:19
Yeah, there's turtles all the way down. There is a reduction that you can do. You can say these psychological phenomena can be explained through a very different kind of causal mechanism, which has to do with neurotransmitter release.
Speaker 3
13:31
And so what we're really trying to do in neuroscience writ large, you know, as I say, which for me includes psychology, is to take these psychological phenomena and map them onto neural events. I think remaining forever at the level of description that is natural for psychology, for me personally, would be disappointing. I want to understand how mental activity arises from neural activity. But the converse is also true.
Speaker 3
14:13
Studying neural activity without any sense of what you're trying to explain, to me feels like at best groping around at random.
Speaker 2
14:27
Now, you've kind of talked about this bridging of the gap between psychology and neuroscience, but do you think it's possible? Like, I fell in love with psychology and psychiatry in general with Freud when I was really young, and I hoped to understand the mind. And for me, understanding the mind, at least at a young age before I discovered AI and even neuroscience, was psychology.
Speaker 2
14:59
Like you kind of mentioned, to you it's appealing to try to understand the mechanisms at the lowest level, but do you think that's needed, that's required, to understand how the mind works?
Speaker 3
15:11
That's an important part of the whole picture. But I would be the last person on earth to suggest that that reality renders psychology in its own right unproductive. I trained as a psychologist. I am fond of saying that I have learned much more from psychology than I have from neuroscience.
Speaker 3
15:38
To me, psychology is a hugely important discipline. And 1 thing that warms my heart is that ways of investigating behavior that have been native to cognitive psychology since its dawn in the 60s are starting to become interesting to AI researchers for a variety of reasons. And that's been exciting for me to see.
Speaker 2
16:11
Can you maybe talk a little bit about what you see as beautiful aspects of psychology, and maybe limiting aspects of psychology, maybe just how it started off as a science, as a field? To me, when I understood what psychology is, analytical psychology, like the way it's actually carried out, it was really disappointing to see 2 aspects.
Speaker 2
16:36
1 is how small the N is, how small the number of subjects is in the studies. And 2, it was disappointing to see how controlled it all was, how much it was in the lab, how it wasn't studying humans in the wild. There was no mechanism for studying humans in the wild. So that's where I became a little bit disillusioned with psychology, and then the modern world of the internet is so exciting to me: the Twitter data or YouTube data, data of human behavior on the internet, becomes exciting because the N grows and the "in the wild" grows.
Speaker 2
17:11
But that's just my narrow sense. Do you have an optimistic or pessimistic, cynical view of psychology? How do you see the field broadly?
Speaker 3
17:21
When I was in graduate school, it was early enough that there was still a thrill in seeing that there were ways of doing experimental science that provided insight into the structure of the mind. 1 thing that impressed me most when I was at that stage in my education was neuropsychology: looking at, analyzing the behavior of populations who had brain damage of different kinds and trying to understand what the specific deficits were that arose from a lesion in a particular part of the brain. And the kind of experimentation that was done, and that's still being done, to get answers in that context was so creative and it was so deliberate.
Speaker 3
18:19
It was good science. An experiment answered 1 question but raised another, and somebody would do an experiment that answered that question, and you really felt like you were narrowing in on some kind of approximate understanding of what this part of the brain was for.
Speaker 2
18:34
Do you have an example from memory of what kind of aspects of the mind could be studied in this kind of way?
Speaker 3
18:41
Oh, sure. I mean, the very detailed neuropsychological studies of language function, looking at production and reception and the relationship between visual function, reading and auditory and semantic. And there were these, and still are, these beautiful models that came out of that kind of research that really made you feel like you understood something that you hadn't understood before about how, you know, language processing is organized in the brain.
Speaker 3
19:15
But having said all that, you know, I think you are, I mean, I agree with you that the cost of doing highly controlled experiments is that you, by construction, miss out on the richness and complexity of the real world. So I was drawn into science by what in those days was called connectionism, which is of course what we now call deep learning. And at that point in history, neural networks were primarily being used in order to model human cognition. They weren't yet really useful for industrial applications.
Speaker 2
20:00
So you always found neural networks in biological form beautiful.
Speaker 3
20:04
Oh, neural networks were very concretely the thing that drew me into science. I was handed, are you familiar with the PDP books from the 80s? I went to medical school before I went into science.
Speaker 3
20:19
Really? Interesting. Wow. I also did a graduate degree in art history, so I kind of explored.
Speaker 2
20:26
Well, art history I understand. That's just a curious, creative mind, but medical school, the dream of what, if we take that slight tangent? Did you want to be a surgeon?
Speaker 3
20:39
I actually was quite interested in surgery. I was interested in surgery and psychiatry, and I thought I must be the only person on the planet who was torn between those 2 fields. And I said exactly that to my advisor in medical school, who turned out, I found out later, to be a famous psychoanalyst.
Speaker 3
21:02
And he said to me, no, no, it's actually not so uncommon to be interested in surgery and psychiatry. And he conjectured that the reason that people develop these 2 interests is that both fields are about going beneath the surface and kind of getting into the kind of secret. I mean, maybe you understand this as someone who was interested in psychoanalysis in the United States. There's sort of a, you know, there's a cliche phrase that people use now on, you know, like in NPR, the secret life of blankety blank, right?
Speaker 3
21:31
And that was part of the thrill of surgery, was seeing the secret activity that's inside everybody's abdomen and thorax.
Speaker 2
21:40
That's a very poetic way to connect 2 disciplines that are, practically speaking, very different from each other.
Speaker 3
21:48
That's for sure, that's for sure, yes.
Speaker 2
21:48
So how do we get onto medical school?
Speaker 3
21:52
So I was in medical school and I was doing a psychiatry rotation and my kind of advisor in that rotation asked me what I was interested in. And I said, well, maybe psychiatry. He said, why?
Speaker 3
22:09
And I said, well, I've always been interested in how the brain works. I'm pretty sure that nobody's doing scientific research that addresses my interests, which are, I didn't have a word for it then, but I would have said about cognition. And he said, well, you know, I'm not sure that's true. You might be interested in these books.
Speaker 3
22:29
And he pulled down the PDP books from his shelf and they were still shrink-wrapped. He hadn't read them, but he handed them to me. He said, feel free to borrow these. And that was, you know, I went back to my dorm room and I just, you know, read them cover to cover.
Speaker 3
22:43
What's PDP? Parallel distributed processing, which was 1 of the original names for deep learning.
Speaker 2
22:50
And so, I apologize for the romanticized question, but what idea in the space of neuroscience, in the space of the human brain, is to you the most beautiful, mysterious, surprising?
Speaker 3
23:04
What had always fascinated me, even when I was a pretty young kid, I think, was the paradox that lies in the fact that the brain is so mysterious and seems so distant, but at the same time, it's responsible for the full transparency of everyday life. The brain is literally what makes everything obvious and familiar. And there's always 1 in the room with you.
Speaker 3
23:48
When I taught at Princeton, I used to teach a cognitive neuroscience course. And the very last thing I would say to the students was: when people think of scientific inspiration, the metaphor is often, well, look to the stars. The stars will inspire you to wonder at the universe and think about your place in it and how things work. I'm all for looking at the stars, but I've always been much more inspired, and my sense of wonder comes not from the distant, mysterious stars, but from the extremely intimately close brain.
Speaker 3
24:34
There's something just endlessly fascinating to me about that.
Speaker 2
24:38
Like you said, the 1 that's close and yet distant in terms of our understanding of it. Do you, are you also captivated by the fact that this very conversation is happening because 2 brains are communicating?
Speaker 2
24:57
So, I guess what I mean is the subjective nature of the experience. If we can take a small tangent into the mystical side of it, into consciousness: when you're saying you're captivated by the idea of the brain, are you talking specifically about the mechanism of cognition? Or are you also, like at least for me, it's almost paralyzing, the beauty and the mystery of the fact that it creates the entirety of the experience, not just the reasoning capability, but the experience.
Speaker 3
25:32
Well, I definitely resonate with that latter thought. And I often find discussions of artificial intelligence to be disappointingly narrow. Speaking as someone who has always had an interest in art.
Speaker 2
25:55
Right, I was just gonna go there because it sounds like somebody who has an interest in art.
Speaker 3
26:00
Yeah, I mean, there are many layers to full bore human experience. And in some ways, it's not enough to say, oh, well, don't worry, we're talking about cognition, but we'll add emotion. There's an incredible scope to what humans go through in every moment.
Speaker 3
26:30
And yes, so that's part of what fascinates me, is that our brains are producing that. But at the same time, it's so mysterious to us how. Our brains are literally in our heads producing this experience, and yet it's so mysterious to us.
Speaker 3
26:55
And so, and the scientific challenge of getting at the actual explanation for that is so overwhelming. That's just, I don't know. Certain people have fixations on particular questions and that's always, that's just always been mine.
Speaker 2
27:11
Yeah, I would say the poetry of that is fascinating. And I'm really interested in natural language as well. And when you look at the artificial intelligence community, it always saddens me how much of the magic of language is lost when you try to create a benchmark for the community to gather around.
Speaker 2
27:33
There's something, we talk about experience, the music, the language, the wit, something that makes a rich experience, something that would be required to pass the spirit of the Turing test, that is lost in these benchmarks. And I wonder how to get it back in, because it's very difficult. The moment you try to do real, good, rigorous science, you lose some of that magic. When you try to study cognition in a rigorous scientific way, it feels like you're losing some of the magic.
Speaker 2
28:05
Seeing cognition in a mechanistic way, the way AI does at this stage in our history. Okay.
Speaker 3
28:10
I agree with you, but at the same time, 1 thing that I found really exciting about that first wave of deep learning models in cognition was the fact that the people who were building these models were focused on the richness and complexity of human cognition. So an early debate in cognitive science, which I sort of witnessed as a grad student, was about something that sounds very dry, which is the formation of the past tense. But there were these 2 camps.
Speaker 3
28:49
1 said, well, the mind encodes certain rules, and it also has a list of exceptions, because of course the rule is add -ed, but that's not always what you do, so you have to have a list of exceptions. And then there were the connectionists, who evolved into the deep learning people, who said, well, if you look carefully at the data, if you actually look at corpora, like language corpora, it turns out to be very rich, because yes, there are most verbs where you just tack on -ed, and then there are exceptions, but the exceptions aren't just random. There are certain clues to which verbs should be exceptional. And then there are exceptions to the exceptions.
Speaker 3
29:44
And there was a word that was kind of deployed in order to capture this, which was quasi-regular. In other words, there are rules, but it's messy, and there's structure even among the exceptions. And it would be, yeah, you could try to write down the structure in some sort of closed form, but really the right way to understand how the brain is handling all this, and by the way, producing all of this, is to build a deep neural network and train it on this data and see how it ends up representing all of this richness. So the way that deep learning was deployed in cognitive psychology was, that was the spirit of it.
Speaker 3
30:25
It was about that richness. And that's something that I always found very compelling. Still do.
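To make the quasi-regularity point concrete, here is a minimal illustrative sketch in Python. It is not the original PDP past-tense model; the verb list, the orthographic encoding, and the network size are all hypothetical choices. A single small network is trained on regular and irregular past-tense classes at once, and novel verbs tend to be pulled toward whichever family they resemble.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy data: class 0 = "add -ed" regulars, class 1 = vowel-change irregulars.
verbs  = ["walk", "talk", "jump", "play", "call",
          "sing", "ring", "swim", "begin", "drink"]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

def encode(verb, max_len=6):
    # Crude orthographic code: one block of 26 letter slots per position.
    vec = np.zeros(max_len * 26)
    for i, ch in enumerate(verb[:max_len]):
        vec[i * 26 + (ord(ch) - ord("a"))] = 1.0
    return vec

X = np.stack([encode(v) for v in verbs])
y = np.array(labels)

# One set of weights absorbs both the rule and the exceptions; there is no
# explicit rule list and no explicit exception list.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

# Novel verbs tend to be classified with whichever family they resemble,
# which is the quasi-regular behavior described above.
print(net.predict([encode("work"), encode("blink")]))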
Speaker 2
30:33
Is there something especially interesting and profound to you in terms of our current deep learning neural network, artificial neural network approaches, and whatever we do understand about the biological neural networks in our brain? There are quite a few differences. Are some of them to you either interesting or perhaps profound in terms of the gap we might want to try to close in trying to create a human-level intelligence?
Speaker 3
31:07
What I would say here is something that a lot of people are saying, which is that 1 seeming limitation of the systems that we're building now is that they lack the kind of flexibility, the readiness to sort of turn on a dime when the context calls for it, that is so characteristic of human behavior.
Speaker 2
31:32
So is that connected, for you, to the, like, which aspect of the neural networks in our brain is that connected to? Is that closer to the cognitive science level of, now again, see, like my natural inclination is to separate into 3 disciplines of neuroscience, cognitive science, and psychology. And you've already kind of shut that down by saying you're trying not to see them as separate.
Speaker 2
31:58
But just to look at those layers, I guess: is there something about the lowest layer, the way the neurons interact, that is profound to you in terms of its difference from artificial neural networks? Or are all the key differences at a higher level of abstraction?
Speaker 3
32:20
1 thing I often think about is that, you know, if you take an introductory computer science course and they are introducing you to the notion of Turing machines, 1 way of articulating what the significance of a Turing machine is, is that it's a machine emulator. It can emulate any other machine. And that to me, that way of looking at a Turing machine really sticks with me.
Speaker 3
32:57
I think of humans as maybe sharing in some of that character. We're capacity limited, we're not Turing machines obviously, but we have the ability to adapt behaviors that are very much unlike anything we've done before, but there's some basic mechanism that's implemented in our brain that allows us to run software.
Speaker 2
33:22
But just on that point, you mentioned the Turing machine, but nevertheless, fundamentally, our brains are just computational devices in your view? Is that what you're getting at? It was a little bit unclear, this line you drew.
Speaker 2
33:36
Is there any magic in there, or is it just basic computation?
Speaker 3
33:41
I'm happy to think of it as just basic computation, but mind you, I won't be satisfied until somebody explains to me what the basic computations are that are leading to the full richness of human cognition. It's not gonna be enough for me to understand what the computations are that allow people to do arithmetic or play chess. I want the whole thing.
Speaker 2
34:04
And a small tangent, because you kind of mentioned coronavirus: there's group behavior.
Speaker 3
34:12
Oh, sure.
Speaker 2
34:12
Is there something interesting to your search of understanding the human mind where behavior of large groups, or just behavior of groups, is interesting? Seeing that as a collective mind, as a collective intelligence, perhaps seeing the groups of people as a single intelligent organism, especially looking at the reinforcement learning work you've done recently.
Speaker 3
34:34
Well, yeah, I mean, I have the honor of working with a lot of incredibly smart people, and I wouldn't want to take any credit for leading the way on the multi-agent work that's come out of my group or DeepMind lately, but I do find it fascinating. And I mean, I think there, you know, I think it can't be debated, you know?
Speaker 3
35:01
Human behavior arises within communities. That just seems to me self-evident.
Speaker 2
35:08
But to me, it is self-evident, but that seems to be a profound aspect of something that created... It was like, if you look at 2001: A Space Odyssey, when the monkeys touched the, like, that's the magical moment. I think Yuval Harari argues that the ability of large numbers of humans to hold an idea, to converge towards an idea together, like you said, shaking hands versus bumping elbows, to somehow converge without even being in a room all together, just kind of this distributed convergence towards an idea over a particular period of time, seems to be fundamental to just every aspect of our cognition, of our intelligence. Because humans, and we'll talk about reward, but it seems like we don't really have a clear objective function under which we operate, yet we all kind of converge towards 1 somehow. And that to me has always been a mystery that I think is somehow productive for also understanding AI systems.
Speaker 2
36:13
But I guess that's the next step. The first step is try to understand the mind.
Speaker 3
36:18
Well, I don't know. I mean, I think there's something to the argument that that kind of bottom, like strictly bottom-up approach is wrong-headed. In other words, there are basic phenomena, basic aspects of human intelligence that can only be understood in the context of groups.
Speaker 3
36:43
I'm perfectly open to that. I've never been particularly convinced by the notion that we should consider intelligence to inhere at the level of communities. I don't know why, I'm just sort of stuck on the notion that the basic unit that we want to understand is individual humans. And if we have to understand that in the context of other humans, fine.
Speaker 3
37:08
But for me, intelligence is just, I stubbornly define it as something that is an aspect of an individual human. That's just my, I don't know, that's my own take.
Speaker 2
37:19
I'm with you, but that could be the reductionist dream of a scientist, because you can understand a single human. It also is very possible that intelligence can only arise when there are multiple intelligences.
Speaker 2
37:32
When there's multiple sort of, it's a sad thing, if that's true, because it's very difficult to study. But if it's just 1 human, that 1 human will not be, Homo sapiens would not become that intelligent. That's a possibility.
Speaker 3
37:49
I'm with you. 1 thing I will say along these lines is that I think a serious effort to understand human intelligence, and maybe to build a human-like intelligence, needs to pay just as much attention to the structure of the environment as to the structure of the cognizing system, whether it's a brain or an AI system. That's 1 thing I took away, actually, from my early studies with the pioneers of neural network research, people like Jay McClelland and John Cohen. You know, the structure of cognition is really only partly a function of the architecture of the brain and the learning algorithms that it implements.
Speaker 3
38:46
What really shapes it is the interaction of those things with the structure of the world in which those things are embedded, right?
Speaker 2
38:56
And that's especially important for, that's made most clear in reinforcement learning, where with a simulated environment you can only learn as much as you can simulate. And that's what DeepMind made very clear with the other aspect of the environment, which is the self-play mechanism, the competitive behavior with the other agent, where the other agent becomes the environment, essentially. And that's, I mean, 1 of the most exciting ideas in AI is the self-play mechanism that's able to learn successfully.
Speaker 2
39:27
So there you go. There's a thing where competition is essential for learning, at least in that context. So if we can step back into another beautiful world, which is the actual mechanics, the dirty mess of the human brain: is there something you can comment on, for people who might not know, or describe the key parts of the brain that are important for intelligence? Or just in general, what are the different parts of the brain that you're curious about, that you've studied, and that are just good to know about when you're thinking about cognition?
Speaker 3
40:06
Well, my area of expertise, if I have 1, is prefrontal cortex. So, you know.
Speaker 2
40:15
What's that? Where do we?
Speaker 3
40:18
It depends on who you ask. The technical definition is anatomical. There are parts of your brain that are responsible for motor behavior, and they're very easy to identify.
Speaker 3
40:35
And the region of your cerebral cortex, the sort of outer crust of your brain, that lies in front of those, is defined as the prefrontal cortex.
Speaker 2
40:49
And when you say anatomical, sorry to interrupt. So that's referring to sort of the geographic region, as opposed to some kind of functional definition.
Speaker 3
41:00
Exactly. So this is kind of the coward's way out. I'm telling you what the prefrontal cortex is just in terms of what part of the real estate it occupies.
Speaker 2
41:09
The thing in the front of the brain.
Speaker 3
41:10
Yeah, exactly. And in fact, the early history of neuroscientific investigation of what this front part of the brain does is sort of funny to read, because it was really World War I that started people down this road of trying to figure out what different parts of the human brain do, in the sense that there were a lot of people who came back from the war with brain damage. And that provided, as tragic as it was, an opportunity for scientists to try to identify the functions of different brain regions.
Speaker 3
41:53
And that was actually incredibly productive. But 1 of the frustrations that neuropsychologists faced was that they couldn't really identify exactly what the deficit was that arose from damage to these most, you know, kind of frontal parts of the brain. It was just a very difficult thing to pin down. There were a couple of neuropsychologists who, through a large amount of clinical experience and close observation, started to put their finger on a syndrome that was associated with frontal damage.
Speaker 3
42:27
Actually, 1 of them was a Russian neuropsychologist named Luria, who students of cognitive psychology still read. And what he started to figure out was that the frontal cortex was somehow involved in flexibility, in guiding behaviors that required someone to override a habit, or to do something unusual, or to change what they were doing in a very flexible way from 1 moment to another.
Speaker 2
43:00
So focused on, like, new experiences. And so the way your brain processes and acts in new experiences.
Speaker 2
43:10
Yeah,
Speaker 3
43:11
What later helped bring this function into better focus was a distinction between controlled and automatic behavior, or, in other literatures, what's referred to as habitual behavior versus goal-directed behavior. So it's very, very clear that the human brain has pathways that are dedicated to habits, to things that you do all the time. And they need to be automatized so that they don't require you to concentrate too much.
Speaker 3
43:45
So that leaves your cognitive capacity free to do other things. Just think about the difference between driving when you're learning to drive versus driving after you're fairly expert. There are brain pathways that slowly absorb those frequently performed behaviors so that they can be habits, so that they can be automatic.
Speaker 2
44:12
That's kind of like the purest form of learning, I guess, that's happening there, which is why, I mean, this is kind of jumping ahead, which is why that perhaps is the most useful for us to focus on in trying to see how artificial intelligence systems can learn. Is that the way you think?
Speaker 3
44:27
It's interesting. I do think about this distinction between controlled and automatic or goal-directed and habitual behavior a lot in thinking about where we are in AI research. But just to finish the kind of dissertation here, the role of the prefrontal cortex is generally understood these days sort of in contradistinction to that habitual domain.
Speaker 3
45:00
In other words, the prefrontal cortex is what helps you override those habits. It's what allows you to say, well, what I usually do in this situation is X, but given the context, I probably should do Y. I mean, the elbow bump is a great example, right? Reaching out and shaking hands is probably a habitual behavior, and it's the prefrontal cortex that allows us to bear in mind that there's something unusual going on right now, and in this situation I need to not do the usual thing.
Speaker 3
45:35
The kind of behaviors that Luria reported, and he built tests for detecting these kinds of things, were exactly like this. So in other words, when I stick out my hand, I want you instead to present your elbow. A patient with frontal damage would have a great deal of trouble with that. You know, somebody proffering their hand would elicit, you know, a handshake.
Speaker 3
45:58
The prefrontal cortex is what allows us to say, hold on, that's the usual thing, but I have the ability to bear in mind even very unusual contexts and to reason about what behavior is appropriate there.
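The habit-versus-override idea can be caricatured in a few lines of code. This is only an illustrative sketch, not a model from the conversation; the stimuli, contexts, and responses are made-up names.

# Habits: fast, cached stimulus -> response mappings.
HABITS = {"outstretched_hand": "shake_hand", "green_light": "go"}

def respond(stimulus, context=None):
    habitual = HABITS.get(stimulus)
    # "Prefrontal" control: check the current context and, if it calls for
    # something unusual, override the habitual response.
    if context == "pandemic" and habitual == "shake_hand":
        return "offer_elbow"
    return habitual

print(respond("outstretched_hand"))                      # habit: shake_hand
print(respond("outstretched_hand", context="pandemic"))  # override: offer_elbow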
Speaker 2
46:13
Just to get a sense, are we humans special in having a prefrontal cortex? Do mice have a prefrontal cortex? Do other mammals that we can study?
Speaker 2
46:26
If not, then how do they integrate new experiences?
Speaker 3
46:30
Yeah, that's a really tricky question and a very timely question because we have revolutionary new technologies for monitoring, measuring, and also causally influencing neural behavior in mice and fruit flies. And these techniques are not fully available even for studying brain function in monkeys, let alone humans. And so it's a very, sort of, for me at least, a very urgent question whether the kinds of things that we want to understand about human intelligence can be pursued in these other organisms.
Speaker 3
47:21
And to put it briefly, there's disagreement. People who study fruit flies will often tell you, hey, fruit flies are smarter than you think. And they'll point to experiments where fruit flies were able to learn new behaviors, were able to generalize from 1 stimulus to another in a way that suggests that they have abstractions that guide their generalization. I've had many conversations in which I will start by observing, you know, recounting some observation about mouse behavior, where it seemed like mice were taking an awfully long time to learn a task that for a human would be profoundly trivial.
Speaker 3
48:14
And I will conclude from that that mice really don't have the cognitive flexibility that we want to explain. And then a mouse researcher will say to me, well, you know, hold on. That experiment may not have worked because you asked a mouse to deal with stimuli and behaviors that were very unnatural for the mouse. If instead you kept the logic of the experiment the same, but presented the information in a way that aligns with what mice are used to dealing with in their natural habitats, you might find that a mouse actually has more intelligence than you think.
Speaker 3
48:52
And then they'll go on to show you videos of mice doing things in their natural habitat, which seem strikingly intelligent, dealing with physical problems. I have to drag this piece of food back to my lair, but there's something in my way, and how do I get rid of that thing? So I think these are open questions to sum that up.
Speaker 2
49:15
And then taking a small step back, related to that: you kind of mentioned we're taking a little shortcut by saying the prefrontal cortex is a geographic region of the brain. But what's your sense, in a bigger philosophical view, of the prefrontal cortex and the brain in general? Do you have a sense that it's a set of subsystems, in the way we've kind of implied, that are pretty distinct? Or to what degree is it that?
Speaker 2
49:46
Or to what degree is it a giant interconnected mess where everything kind of does everything and it's impossible to disentangle them?
Speaker 3
49:54
I think there's overwhelming evidence that there's functional differentiation, that it's clearly not the case, that all parts of the brain are doing the same thing. This follows immediately from the kinds of studies of brain damage that we were chatting about before. It's obvious from what you see if you stick an electrode in the brain and measure what's going on at the level of neural activity.
Speaker 3
50:25
Having said that, there are 2 other things to add, which kind of, I don't know, maybe tug in the other direction. 1 is that when you look carefully at functional differentiation in the brain, what you usually end up concluding, at least this is my observation of the literature, is that the differences between regions are graded rather than being discrete. So it doesn't seem like it's easy to divide the brain up into true modules that have clear boundaries and that have clear channels of communication between them.
Speaker 2
51:15
And this applies to the prefrontal cortex? Yeah.
Speaker 3
51:17
Oh, yeah. Yeah. The prefrontal cortex is made up of a bunch of different sub-regions, the functions of which are not clearly defined and the borders of which seem to be quite vague.
Speaker 3
51:32
And then there's another thing that's popping up in very recent research, which involves application of these new techniques. There are a number of studies that suggest that parts of the brain that we would have previously thought were quite focused in their function are actually carrying signals that we wouldn't have thought would be there. For example, looking in the primary visual cortex, which is classically thought of as basically the first cortical way station for processing visual information. Basically what it should care about is, you know, where are the edges in this scene that I'm viewing?
Speaker 3
52:17
It turns out that if you have enough data, you can recover information from primary visual cortex about all sorts of things, like what behavior the animal is engaged in right now and how much reward is on offer in the task that it's pursuing. So it's clear that even regions whose function is pretty well defined at a coarse grain are nonetheless carrying information from very different domains. So the history of neuroscience is sort of this oscillation between the 2 views that you articulated, you know, the kind of modular view and then the big, you know, mush view. And, you know, I guess we're gonna end up somewhere in the middle, which is unfortunate for our understanding, because there's something about our conceptual system that finds it easy to think about a modularized system and easy to think about a completely undifferentiated system, but something that kind of lies in between is confusing. But we're gonna have to get used to it, I think.
Speaker 2
53:21
Unless we can understand deeply the lower level mechanism of neuronal communication and so on. So on that topic, you kind of mentioned information. Just to get a sense, I imagine something that there's still mystery and disagreement on is how does the brain carry information and signal?
Speaker 2
53:38
Like what in your sense is the basic mechanism of communication in the brain?
Speaker 3
53:46
Well, I guess I'm old-fashioned in that I consider the networks that we use in deep learning research to be a reasonable approximation to the mechanisms that carry information in the brain. So the usual way of articulating that is to say that what really matters is a rate code.
Speaker 3
54:08
What matters is how quickly is an individual neuron spiking? You know, what's the frequency at which it's spiking? Is it right?
Speaker 2
54:16
So the timing of the spike.
Speaker 3
54:17
Yeah, is it firing fast or slow? Let's put a number on that. And that number is enough to capture what neurons are doing.
Speaker 3
54:27
There's still uncertainty about whether that's an adequate description of how information is transmitted within the brain. There are studies that suggest that the precise timing of spikes matters. There are studies that suggest that there are computations that go on within the dendritic tree, within a neuron, that are quite rich and structured, and that really don't equate to anything that we're doing in our artificial neural networks. Having said that, I feel like we're getting somewhere by sticking to this high level of abstraction.
Speaker 2
55:11
Just the rate, and by the way, we're talking about the electrical signal. I remember reading some vague paper somewhere recently where the mechanical signal, like the vibrations or something of the neurons also communicates information. I haven't seen that, but.
Speaker 2
55:30
There's somebody who was arguing that the electrical signal, this was in a Nature paper or something like that, that the electrical signal is actually a side effect of the mechanical signal. But I don't think that changes the story. But it's almost an interesting idea, that there could be a deeper, it's always like in physics with quantum mechanics, there's always a deeper story that could be underlying the whole thing. But you think it's basically the rate of spiking that gets us, that's like the lowest-hanging fruit that can get us really far.
Speaker 3
56:05
This is a classical view. I mean, the only way in which this stance would be controversial is in the sense that there are members of the neuroscience community who are interested in alternatives, but this is really a very mainstream view. The way that neurons communicate is that neurotransmitters arrive, they wash up on a neuron, the neuron has receptors for those transmitters.
Speaker 3
56:37
The meeting of the transmitter with these receptors changes the voltage of the neuron. And if enough voltage change occurs, then a spike occurs, right? 1 of these like discrete events. And it's that spike that is conducted down the axon and leads to neurotransmitter release.
Speaker 3
56:54
This is just like neuroscience 101. This is like the way the brain is supposed to work. Now, what we do when we build artificial neural networks of the kind that are now popular in the AI community is that we don't worry about those individual spikes. We just worry about the frequency at which those spikes are being generated.
Speaker 3
57:16
And we consider, people talk about that as the activity of a neuron. And so the activity of units in a deep learning system is broadly analogous to the spike rate of a neuron. There are people who believe that there are other forms of communication in the brain.
Speaker 3
57:38
In fact, I've been involved in some research recently that suggests that the voltage fluctuations that occur in populations of neurons that aren't, that are sort of below the level of spike production may be important for communication. But I'm still pretty old school in the sense that I think that the things that we're building in AI research constitute reasonable models of how a brain would work.
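As a rough illustration of the rate-code abstraction described here, the following is a small sketch with made-up numbers, contrasting a spiking view of a single neuron with the single activity value a deep learning unit would carry. The drive value and the sigmoid mapping are illustrative assumptions, not a claim about real neural parameters.

import numpy as np

rng = np.random.default_rng(0)

drive = 0.8  # net depolarizing effect of incoming transmitter (arbitrary units)

# Spiking view: the drive sets a per-millisecond spike probability, and the
# "rate code" is just how often the neuron fires over a window of time.
p_spike = 1.0 / (1.0 + np.exp(-drive))
spikes = rng.random(1000) < p_spike          # 1000 ms of simulated spiking
empirical_rate = spikes.mean()

# Deep-learning view: skip the individual spikes and use the rate directly
# as the unit's activity.
unit_activity = 1.0 / (1.0 + np.exp(-drive))

print(f"empirical rate {empirical_rate:.3f} vs. unit activity {unit_activity:.3f}")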
Speaker 2
58:10
Let me ask, just for fun, a crazy question, because I can. Do you think it's possible we're completely wrong about the way this basic mechanism of neuronal communication works, that the information is stored in some very different kind of way in the brain?
Speaker 3
58:26
Oh, heck yes. I mean, look, I wouldn't be a scientist if I didn't think there was any chance we were wrong. But I mean, if you look at the history of deep learning research as it's been applied to neuroscience, of course, the vast majority of deep learning research these days isn't about neuroscience.
Speaker 3
58:46
But if you go back to the 1980s, there's an unbroken chain of research in which a particular strategy is taken, which is, hey, let's train a deep learning system. Let's train a multilayer neural network on this task that we trained our rat on or our monkey on or this human being on. And then let's look at what the units deep in the system are doing. And let's ask whether what they're doing resembles what we know about what neurons deep in the brain are doing.
Speaker 3
59:24
And over and over and over and over, that strategy works in the sense that the learning algorithms that we have access to, which typically center on back propagation, they give rise to patterns of activity, patterns of response, patterns of neuronal behavior in these artificial models that look hauntingly similar to what you see in the brain. And you know, is that a coincidence?
Speaker 2
59:57
At a certain point it starts looking like something.
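The strategy described here, train a network on a task and then compare what its hidden units are doing to neural recordings, can be sketched roughly as follows. Everything in this example is synthetic and hypothetical (random stimuli, random stand-in "neural" data, an arbitrary similarity measure); a real study would use the actual task and recordings being modeled.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "task": map stimuli to targets through some unknown function.
stimuli = rng.normal(size=(500, 10))
targets = np.tanh(stimuli @ rng.normal(size=(10, 3)))

# Train a small multilayer network on the task (back propagation under the hood).
net = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   max_iter=3000, random_state=0)
net.fit(stimuli, targets)

# Look at what the units deep in the system are doing: hidden activations.
hidden = np.tanh(stimuli @ net.coefs_[0] + net.intercepts_[0])

# Stand-in for recorded neural activity on the same stimuli (noise here;
# real work would use measured spike rates).
neural = rng.normal(size=(500, 15))

# Compare the two representations via their stimulus-by-stimulus similarity
# structure, a crude form of representational similarity analysis.
def stimulus_similarity(acts):
    return np.corrcoef(acts)  # rows = stimuli, so this is stimulus x stimulus

r = np.corrcoef(stimulus_similarity(hidden).ravel(),
                stimulus_similarity(neural).ravel())[0, 1]
print(f"model-brain representational similarity: {r:.3f}")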