3 hours 12 minutes 21 seconds
Speaker 1
00:00
The following is a conversation with Joscha Bach, his second time on the podcast. Joscha is one of
Speaker 2
00:05
the most fascinating minds in the world, exploring the nature of intelligence, cognition, computation, and consciousness. To support this podcast, please check out our sponsors, Coinbase, Codecademy, Linode, NetSuite, and ExpressVPN. Their links are in the description.
Speaker 2
00:26
This is the Lex Fridman Podcast, and here is my conversation with Joscha Bach.
Speaker 3
00:33
Thank you for once again coming onto this particular Russian program and sticking to the theme of a Russian program. Let's start with the darkest of topics. So this is inspired by one of your tweets.
Speaker 3
00:48
You wrote that, quote: "When life feels unbearable, I remind myself that I'm not a person. I'm a piece of software running on the brain of a random ape for a few decades. It's not the worst brain to run on." Have you experienced low points in your life?
Speaker 3
01:07
Have you experienced depression?
Speaker 4
01:09
Of course we all experience low points in our life, and we get appalled by the ugliness of stuff around us. We might get desperate about our lack of self-regulation. And sometimes life is hard, and I suspect nobody gets through their life without low points and without moments where they're despairing. And I thought, let's capture this state and how to deal with that state.
Speaker 4
01:40
And I found that very often it helps to realize that when you stop taking things personally, this notion of a person is a fiction. Similar to Westworld, where the robots realize that their memories and desires are the stuff that keeps them in the loop, and they don't have to act on those memories and desires. Our memories and expectations are what make us unhappy, and the present rarely does.
Speaker 4
02:04
The day in which we are is, for the most part, okay, right? When we are sitting here, right here, right now, we can choose how we feel. And the thing that affects us is the expectation that something is going to be different from what we want it to be, or the memory that something was different from what we wanted it to be. And once we basically zoom out from all this, what's left is not a person.
Speaker 4
02:29
What's left is this state of being conscious, which is a software state. And software doesn't have an identity, it's a physical law. And it's a law that acts in all of us and it's embedded in a suitable substrate. And we didn't pick that substrate, right?
Speaker 4
02:43
We are mostly randomly instantiated on it. And there are all these individuals, and everybody has to be one of them. And eventually you're stuck on one of them and have to deal with that.
Speaker 3
02:56
So you're like a leaf floating down the river. You just have to accept that there's a river and you just float. You don't have to do that.
Speaker 4
03:04
The thing is that the illusion that you are an agent is a construct. What part of that is actually under your control? And I think that our consciousness is largely a control model for our own attention.
Speaker 4
03:18
So we notice where we are looking, and we can influence what we are looking at, how we are disambiguating things, how we put things together in our mind. And the whole system that runs us is this big cybernetic motivational system. So we're basically like a little monkey sitting on top of an elephant, and we can prod this elephant here and there to go this way or that way. And we might have the illusion that we are the elephant, or that we are telling the elephant what to do.
Speaker 4
03:43
And sometimes we notice that it walks in a completely different direction. And we didn't set this thing up. It just is the situation that we find ourselves in.
Speaker 3
03:53
How much prodding can we actually do of the elephant?
Speaker 4
03:56
A lot, but I think that our consciousness cannot create the motive force.
Speaker 3
04:02
Is the elephant consciousness in this metaphor?
Speaker 4
04:05
No, the monkey is the consciousness. The monkey is the attentional system that is observing things. There's a large perceptual system combined with a motivational system that is actually providing the interface to everything and our own consciousness, I think, is a tool that directs the attention of that system, which means it singles out features and performs conditional operations for which it needs an index memory.
Speaker 4
04:28
But this index memory is what we perceive as our stream of consciousness. But the consciousness is not in charge. That's an illusion.
Speaker 3
04:35
So everything outside of that consciousness is the elephant. So it's the physics of the universe, but it's also society that's outside of your...
Speaker 4
04:46
I would say the elephant is the agent. So there is an environment through which the agent is stomping, and you are influencing a little part of that agent.
Speaker 3
04:55
So, is the agent a single human being? Which object has agency?
Speaker 4
05:02
That's an interesting question. I think a way to think about an agent is that it's a controller with a set point generator. The notion of a controller comes from cybernetics and control theory.
Speaker 4
05:14
A control system consists of a system that is regulating some value toward a set point. It has a sensor that measures the system's deviation from that set point, and an effector that can be parameterized by the controller. So the controller tells the effector to do a certain thing. And the goal is to reduce the distance between the set point and the current value of the system.
Speaker 4
05:40
And there's an environment which disturbs the regulated system, which brings it away from that set point. So the simplest case is a thermostat. The thermostat is really simple because it doesn't have a model. The thermostat is only trying to minimize the set point deviation in the next moment.
Speaker 4
05:55
And if you want to minimize the set point deviation over a longer time span, you need to integrate it; you need to model what is going to happen. So for instance, if your set point is to be comfortable in life, maybe you need to make yourself uncomfortable first. So you need to make a model of what's going to happen when. And the task of the controller is to use its sensors to measure the state of the environment and of the system that is being regulated, and figure out what to do.
Speaker 4
06:25
And if the task is complex enough, and the set points are complicated enough, and if the controller has enough capacity and enough sensor feedback, then the task of the controller is to make a model of the entire universe that it's in, of the conditions under which it exists, and of itself. And this is a very complex agent, and we are in that category. And an agent is not necessarily a thing in the universe. It's a class of models that we use to interpret aspects of the universe.
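The control loop described here, sensor, set point, effector, disturbance, can be sketched in a few lines. This is a toy illustration only; the heater strength, leak rate, and function names are all invented for the example:

```python
# A minimal sketch of the controller idea from the conversation, with a
# thermostat as the simplest case: no model of the future, it only reduces
# the set point deviation in the next moment. All constants are illustrative.

def thermostat(current_temp, set_point):
    """Controller: compare the sensor reading to the set point and
    parameterize the effector (heater on/off) to reduce the deviation."""
    return current_temp < set_point  # heater on iff we are below the set point

def simulate(temp, set_point, outside_temp, steps):
    """The environment disturbs the regulated system (heat leaks outside);
    the controller counteracts the disturbance step by step."""
    for _ in range(steps):
        if thermostat(temp, set_point):
            temp += 2.0                       # effector's influence
        temp += 0.05 * (outside_temp - temp)  # disturbance from the environment
    return temp

final = simulate(temp=15.0, set_point=21.0, outside_temp=5.0, steps=100)
```

As the conversation notes, this bang-bang thermostat has no model: it only reacts to the current deviation. Minimizing deviation over a longer time span would require predicting the disturbance, which is where model-building agents come in.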
Speaker 4
06:54
And when we notice the environment around us, a lot of things only make sense, at the level at which we're entangled with them, if we interpret them as control systems that make models of the world and try to minimize the deviation from their own set points. But the models are the agents. The agent is a class of model. And we notice that we are an agent ourselves.
Speaker 4
07:14
We are the agent that is using our own control model to perform actions. We notice that we produce a change in the model and things in the world change. And this is how we discover the idea that we have a body, that we are situated in an environment, and that we have a first-person perspective.
Speaker 3
07:31
I still don't understand what's the best way to think about which object has agency with respect to human beings. Is it the body? Is it the brain?
Speaker 3
07:43
Is it the contents of the brain that has agency? What are the actuators that you're referring to? What is the controller, and where does it reside? Or is it impossible to pin down?
Speaker 3
07:54
Because I keep trying to ground it in space-time, the three dimensions of space and the one dimension of time. What's the agent in that, for humans?
Speaker 4
08:04
There is not just one. It depends on the way in which you're looking at the thing, in which you're framing it. Imagine that you are, say, Angela Merkel, and you are acting on behalf of Germany.
Speaker 4
08:16
Then you could say that Germany is the agent. And in the mind of Angela Merkel, she is Germany to some extent, because in the way in which she acts, the destiny of Germany changes. There are things that she can change that basically affect the behavior of that nation state.
Speaker 3
08:33
Okay, so it's hierarchies of agents. To go to another one of your tweets, I think you were playfully mocking Jeff Hawkins by saying it's brains all the way down. So it's like, it's agents all the way down. It's agents made up of agents made up of agents. Like if Angela Merkel is Germany, and Germany is made up of a bunch of people, and the people are themselves agents in some kind of context, and then people are made up of cells, each individual.
Speaker 3
09:04
So is it agents all the way down?
Speaker 4
09:07
I suspect it has to be like this in a world where things are self-organizing. Most of the complexity that we are looking at, everything in life, is about self-organization. Yeah.
Speaker 4
09:18
So I think from the level of life up, you have agents. And below life, you rarely have agents, because sometimes you have control systems that emerge randomly in nature and try to achieve a set point, but they're not interesting agents that make models. Because to make an interesting model of the world, you typically need a system that is Turing complete.
Speaker 3
09:42
Can I ask you a personal question? What's the line between life and non-life? It's personal because you're a life form.
Speaker 3
09:52
So what do you think in this emerging complexity, at which point does a thing start being living and have agency?
Speaker 4
10:00
Personally, I think that the simplest answer is that life is cells.
Speaker 3
10:03
Because... Life is what? Cells. Cells.
Speaker 4
10:06
Biological cells. So it's a particular kind of principle that we have discovered to exist in nature. It's modular stuff that consists of basically this DNA tape with a read-write head on top of it that is able to perform arbitrary computations and state transitions within the cell.
Speaker 4
10:25
And it's combined with a membrane that insulates the cell from its environment. And there are chemical reactions inside the cell that are in disequilibrium. And the cell is running in such a way that this disequilibrium doesn't disappear. If the cell goes into an equilibrium state, it dies.
Speaker 4
10:46
And it requires something like a negentropy extractor to maintain this disequilibrium. So it's able to harvest negentropy from its environment and keep itself running.
Speaker 3
10:58
Yeah, so there's information and there's a wall to protect it, to maintain this disequilibrium. But isn't this very earth-centric? Like what you're referring to as life...
Speaker 4
11:09
I'm not making a normative claim. You could say that there are probably other things in the universe that are cell-like and life-like, and you could also call them life, but eventually it's just a willingness to find an agreement on how to use the terms. I like cells because it's completely coextensional with the way we used the word even before we knew about cells.
Speaker 4
11:30
So people were pointing at some stuff and saying this is somehow animate and this is very different from the non-animate stuff and what's the difference between the living and the dead stuff. And it's mostly whether the cells are working or not. And also this boundary of life where we say that, for instance, a virus is basically an information packet that is subverting the cell and not life by itself. That makes sense to me and it's somewhat arbitrary.
Speaker 4
11:56
You could of course say that systems that permanently maintain a disequilibrium and can self-replicate are always life. And maybe that's a useful definition too, but this is eventually just how you want to use the word.
Speaker 3
12:10
It's useful for conversation, but is it somehow fundamental to the universe? Do you think there's an actual line to eventually be drawn between life and non-life, or is it all a kind of continuum?
Speaker 4
12:24
I don't think it's a continuum, but there's nothing magical that is happening. Living systems are a certain type of machine.
Speaker 3
12:31
What about non-living systems? Are they also machines?
Speaker 4
12:34
There are non-living machines, but the question is at which point is a system able to perform arbitrary state transitions to make representations? And living things can do this. And of course, we can also build non-living things that can do this.
Speaker 4
12:50
But we don't know anything in nature that is not a cell and is not created by cellular life that is able to do that.
Speaker 3
13:00
Not only do we not know; I don't think we have the tools to see otherwise. I always worry that we look at the world too narrowly. There could be life of a very different kind right under our noses that we're just not seeing, because of either limitations of our cognitive capacity, or because we're just not open-minded enough, either with the tools of science or just the tools of our mind.
Speaker 4
13:32
Yeah, that's possible. I find this thought very fascinating. And I suspect that many of us have asked ourselves since childhood: what are the things that we are missing?
Speaker 4
13:40
What kind of systems and interconnections exist that are outside of our gaze? But we are looking for them. And physics doesn't have much room at the moment for opening up something that would not violate the conservation of information as we know it.
Speaker 3
14:03
Yeah, but I wonder about time scale and spatial scale, whether we just need to open up our idea of how life presents itself. It could be operating at a much slower time scale, or a much faster time scale. And it's almost sad to think that there's all this life around us that we're not seeing, because we're just not thinking in terms of the right scale, both time and space.
Speaker 4
14:34
What is your definition of life? What do you understand as life?
Speaker 3
14:40
Entities of sufficiently high complexity that are full of surprises. I don't know. I don't have a free will, so that just came out of my mouth.
Speaker 3
14:55
I'm not sure that even makes sense. There are certain characteristics. So complexity seems to be a necessary property of life. And I almost want to say it has the ability to do something unexpected.
Speaker 4
15:13
It seems to me that life is the main source of complexity on Earth. Yes. And complexity is basically a bridgehead that order builds into chaos, by modeling, by processing information in such a way that you can perform reactions that would not be possible for dumb systems.
Speaker 4
15:33
And this means that you can harvest negentropy that dumb systems cannot harvest. And this is what complexity is mostly about. In some sense, the purpose of life is to create complexity.
Speaker 3
15:45
Yeah. I mean, there seems to be some kind of universal drive towards increasing pockets of complexity. I don't know what that is. I don't know if it's a property of the universe or just a consequence of the way the universe works, but there seem to be these small pockets of emergent complexity that build on top of each other and reach greater and greater complexity through a hierarchy of complexity.
Speaker 3
16:17
Little organisms building up a little society that then operates almost as an individual organism itself. And all of a sudden you have Germany and Merkel.
Speaker 4
16:27
But that's not obvious to me. Everything that goes up has to come down at some point. So if you see this big exponential curve somewhere, it's usually the beginning of an S-curve, where something eventually reaches saturation.
Speaker 4
16:41
And the S-curve is the beginning of some kind of bump that goes down again. And there is just this thing that when you are inside of an evolution of life, you are on top of a puddle of negentropy that is being sucked dry by life. And while that is happening, you see an increase in complexity, because life forms are competing with each other to get at finer and finer corners of that negentropy extraction.
Speaker 3
17:11
But I feel like that's a gradual, beautiful process, one that almost follows a process akin to evolution. And the way it comes down is not the same way it came up. The way it comes down is usually harsh and quick. So usually there's some kind of catastrophic event.
Speaker 4
17:30
Well, the Roman Empire took a long time.
Speaker 3
17:34
But would you classify that as a decrease in complexity, though?
Speaker 4
17:39
Yes. I think that the size of the cities that could be fed decreased dramatically. And you could see that the quality of the art decreased, and it did so gradually. And maybe future generations, when they look at the history of the United States in the 21st century, will also talk about a gradual decline, not something that suddenly happened.
Speaker 3
18:05
Do you have a sense of where we are? Are we on the exponential rise? Are we at the peak?
Speaker 3
18:11
Or are we on the downslope of the United States empire?
Speaker 4
18:15
It's very hard to say from a single human perspective, but it seems to me that we are probably at the peak.
Speaker 3
18:25
I think that's probably the difference between optimism and cynicism. My version of optimism is that I think we're on the rise. I think this is all a matter of perspective.
Speaker 3
18:38
Nobody knows, but I do think that erring on the side of optimism matters. You need a minimum number of optimists in order to make that upward trajectory actually work. And so I tend to be on the side of the optimists.
Speaker 4
18:53
I think that we are basically a species of grasshoppers that have turned into locusts. When you are in that locust mode, you see an amazing rise in population numbers and in the local complexity of the interactions between the individuals. But ultimately the question is: is it sustainable?
Speaker 3
19:12
See, I think we're a bunch of lions and tigers that have become domesticated cats, to use a different metaphor. And so I'm not sure we're so destructive; we're just softer and nicer and lazier.
Speaker 4
19:27
I think we are monkeys, not cats. And if you look at the monkeys, they are very busy.
Speaker 3
19:33
The ones that have a lot of sex, those monkeys?
Speaker 4
19:35
Not just the bonobos. I think that all the monkeys are basically a discontent species that always needs to meddle.
Speaker 3
19:42
Well, the gorillas seem to have a little bit more of a structure, but it's a different part of the tree. Okay, you mentioned the elephant and the monkey riding the elephant, and consciousness is the monkey, and there's some prodding that the monkey gets to do, and sometimes the elephant listens. Maybe you can correct me, but I heard you got into some contentious free will discussions.
Speaker 3
20:13
Is this with Sam Harris or something like that?
Speaker 4
20:16
Not that I know of.
Speaker 3
20:19
Some people on Clubhouse told me you made a bunch of big debate points about free will. Well, let me just ask you then: where, in terms of the monkey and the elephant, do you think we land in terms of the illusion of free will? How much control does the monkey have?
Speaker 4
20:38
We have to think about what free will is in the first place. We are not the machine. We are not the thing that is making the decisions.
Speaker 4
20:46
We are a model of that decision-making process. And there is a difference between making your own decisions and predicting your own decisions. And that difference is the first-person perspective. And what basically makes decision-making under conditions of free will distinct from just automatically doing the best thing is that we often don't know what the best thing is.
Speaker 4
21:13
We make decisions under uncertainty. We make informed bets using a betting algorithm that we don't yet understand because we haven't reverse engineered our own minds sufficiently. We don't know the expected rewards. We don't know the mechanism by which we estimate the rewards and so on.
Speaker 3
21:27
But there is an algorithm.
Speaker 4
21:28
One that we observe ourselves performing, where we see that we weigh facts and factors and the future, and then some possibility, some motive, gets raised to an intention. And that's the informed bet that the system is making. And the making of that informed bet, the representation of that, is what we call free will.
Speaker 4
21:49
And it seems to be paradoxical, because we think that the crucial thing is that it's somehow indeterministic. And yet, if it were indeterministic, it would be random. And it cannot be random, because if it were random, if dice being thrown in the universe randomly forced you to do things, it would be meaningless. So the important part of the decisions is always the deterministic stuff.
Speaker 4
22:12
But it appears to be indeterministic to you because it's unpredictable. Because if it was predictable, you wouldn't experience it as a free will decision. You would experience it as just doing the necessary right thing. And you see this continuum between the free will and the execution of automatic behavior when you're observing other people.
Speaker 4
22:33
So for instance, when you are observing your own children, if you don't understand them, you will use this agent model, where you have an agent with a set point generator, and the agent is doing the best it can to minimize the difference to the set point. And it might be confused and sometimes impulsive or whatever, but it's acting on its own free will. And when you understand what happens in the mind of the child, you see that it's automatic, and you can outmodel the child. You can build things around the child that will lead the child to making exactly the decision that you are predicting.
Speaker 4
23:06
And under these circumstances, like when you are a stage magician, or somebody who is selling a car to people, and you completely understand the psychology and the impulses and the space of thoughts that this individual can have at that moment, it makes no sense to attribute free will, because it's no longer decision-making under uncertainty. You are already certain. For them, there's uncertainty, but you already know what they're doing.
Speaker 3
23:33
But what about for you? Is this akin to systems like cellular automata, where it's deterministic, but when you squint your eyes a little bit, it starts to look like there are agents making decisions at the higher level, when you zoom out and look at the entities that are composed of the individual cells, even though there are underlying simple rules that make the system evolve in deterministic ways?
Speaker 3
24:08
It looks like there are organisms making decisions. Is that where the illusion of free will emerges? That jump in scale?
Speaker 4
24:17
It's a particular type of model, but this jump in scale is crucial. The jump in scale happens whenever you have too many parts to count and you cannot make a model at that level. And you try to find some higher level regularity.
Speaker 4
24:29
And the higher-level regularity is a pattern that you project into the world to make sense of it. And agency is one of these patterns, right? You have all these cells that interact with each other, and the cells in our body are set up in such a way that they benefit if their behavior is coherent, which means that they act as if they were serving a common goal. And that means that they will evolve regulation mechanisms that act as if they were serving a common goal.
Speaker 4
24:55
And now you can make sense of all these cells by projecting the common goal into them.
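The cellular-automaton picture raised above can be made concrete with Conway's Game of Life (my example, not one discussed in the conversation): the update rule is deterministic and purely local, yet a "glider" pattern reads, zoomed out, like a coherent entity traveling across the grid.

```python
# Deterministic, local update rule; any higher-level "agent" (like a glider)
# is a pattern we project onto the evolving cells. Illustrative toy code.
from collections import Counter

def step(live):
    """One Game of Life update; `live` is a set of (x, y) live-cell coords."""
    neighbors = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in neighbors.items() if n == 3 or (n == 2 and c in live)}

def normalize(cells):
    """Translate a pattern so its bounding box starts at the origin."""
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return {(x - mx, y - my) for (x, y) in cells}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)  # after 4 steps the glider shape recurs, translated
```

Nothing at the cell level "decides" anything, yet describing the five moving cells as one traveling object is by far the most compact model of what is happening, which is the sense in which the agent is a projected pattern.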
Speaker 3
25:00
Right, so for you then, free will is an illusion.
Speaker 4
25:03
No, it's a model and it's a construct. It's basically a model that the system is making of its own behavior. And it's the best model that it can come up with under the circumstances.
Speaker 4
25:13
And it can get replaced by a different model, which is automatic behavior, when you fully understand the mechanism under which you are acting.
Speaker 3
25:19
Yeah, but another word for model is what? Story. So it's the story you're telling.
Speaker 3
25:25
I mean, do you actually have control? Is there such a thing as a you? And is there such a thing as you having control? So like, are you manifesting your evolution as an entity?
Speaker 4
25:41
In some sense, the you is the model of the system that is in control. It's a story that the system tells itself about somebody who is in control. And the contents of that model are being used to inform the behavior of the system.
Speaker 4
25:57
So the system is completely mechanical, and the system creates that story like a loom, and then it uses the contents of that story to inform its actions and writes the results of those actions into the story.
Speaker 3
26:11
So how is that not an illusion? The story is already written then; or rather, we're not the writers of the story.
Speaker 4
26:21
Yes, but we always knew that.
Speaker 3
26:23
No, we don't know that. When did we know that?
Speaker 4
26:26
I think that's mostly a confusion about concepts. The conceptual illusion in our culture comes from the idea that we live in physical reality, that we experience physical reality, and that we have ideas about it. And then you have this dualist interpretation, where you have two substances: res extensa, the world that you can touch and that is made of extended things, and res cogitans, which is the world of ideas.
Speaker 4
26:51
And in fact, both of them are mental representations. One is the representation of the world as a game engine that your mind generates to make sense of the perceptual data.
Speaker 3
27:00
And the other one's... that's the physical world?
Speaker 4
27:01
Yes, that's what we perceive as the physical world. But we already know that the physical world is nothing like that, right?
Speaker 4
27:06
Quantum mechanics is very different from what you and me perceive as the world. The world that you and me perceive is a game engine. And there are no colors and sounds in the physical world. They only exist in the game engine generated by your brain.
Speaker 4
27:19
And then you have ideas that cannot be mapped onto extended regions, right? So the objects that have a spatial extension in the game engine are res extensa, and the objects that don't have a physical extension in the game engine are ideas. And they both interact in our mind to produce models of the world.
Speaker 3
27:38
Yeah, but when you play video games, I understand that what's actually happening is zeros and ones inside of a computer, inside of a CPU and a GPU, but you're still seeing the rendering of that. And you're still making decisions, whether to shoot, to turn left, or to turn right, if you're playing a shooter. Every time, I start thinking about Skyrim and Elder Scrolls, walking around in beautiful nature and swinging a sword. It feels like you're making decisions inside that video game. So even though you don't have direct access, in terms of perception, to the bits, to the zeros and ones, it still feels like you're making decisions, and it feels like your decisions are being applied all the way down to the zeros and ones.
Speaker 3
28:31
Yes. It feels like you have control, even though you don't have direct access to reality.
Speaker 4
28:36
So there is basically a special character in the video game that is being created by the video game engine. And this character is serving the aesthetics of the video game. And that is you.
Speaker 3
28:47
Yes, but I feel like I have control inside the video game. Like all those 12-year-olds that kick my ass on the internet.
Speaker 4
28:55
So when you play the video game, it doesn't really matter that there are zeros and ones, right? You don't care about the width of the bus, you don't care about the nature of the CPU that it runs on. What you care about are the properties of the game that you're playing.
Speaker 4
29:07
And you hope that the CPU is good enough. Yes. And a similar thing happens when we interact with physics. The world that you and me are in is not the physical world.
Speaker 4
29:15
The world that you and me are in is a dream world.
Speaker 3
29:19
How close is it to the real world though?
Speaker 4
29:23
We know that it's not very close, but we know that the dynamics of the dream world match the dynamics of the physical world to a certain degree of resolution. Right. But the causal structure of the dream world is different.
Speaker 4
29:35
So you see, for instance, waves crashing on your feet, right? But there are no waves in the ocean. There are only water molecules with interactions between them that are the result of electrons in the molecules interacting with each other.
Speaker 3
29:52
Aren't they very consistent, though? We're just seeing a very crude approximation. Isn't our dream world very consistent, to the point of being mapped directly one-to-one to the actual physical world, as opposed to us being completely tricked?
Speaker 3
30:07
Is this like where you have, like, Donald-
Speaker 4
30:09
It's not a trick, that's my point. It's not an illusion. It's a form of data compression.
Speaker 4
30:13
Yeah, yeah. It's an attempt to deal with the dynamics of too many parts to count at the level at which we're entangled with the best model that you can find.
Speaker 3
30:20
Yeah, so we can act in that dream world and our actions have impact in the real world, in the physical world. Yes. To which we don't have access.
Speaker 4
30:28
Yes, but it's basically like accepting the fact that the software that we live in, the dream that you live in, is generated by something outside of this world that you and me are in.
Speaker 3
30:37
So is the software deterministic, and do we not have any control? Or do we? So free will is the monkey being able to steer the elephant?
Speaker 4
30:55
No, it's slightly different. Basically in the same way as you are modeling the water molecules in the ocean that engulf your feet when you are walking on the beach as waves, and there are no waves, but only the atoms and more complicated stuff underneath the atoms and so on. You know that, right?
Speaker 4
31:13
You would accept, yes, there is a certain abstraction that happens here. It's a simplification of what happens, and a simplification that is designed in such a way that your brain can deal with it, temporally and spatially in terms of resources, and tuned for predictive value. So you can predict with some accuracy whether your feet are going to get wet or not.
Speaker 3
31:33
But it's a really good interface and approximation. Yes. It's like E equals mc squared is a good equation.
Speaker 3
31:40
They're good approximations, even if there are much better approximations. So to me, waves are a really nice approximation of all the complexity that's happening underneath.
Speaker 4
31:51
Basically, it's a machine learning model that is constantly tuned to minimize surprises. So it basically tries to predict as well as it can what you're going to perceive next.
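The "constantly tuned to minimize surprises" idea can be sketched as an online predictor that nudges its estimate after every observation. This is a deliberately tiny toy of my own, with all constants invented; the brain's actual machinery is of course vastly more complex:

```python
# A minimal sketch of surprise minimization: an online predictor that
# updates its estimate after each observation so that its prediction
# error (the surprise) shrinks over time. Purely illustrative.

def run_predictor(observations, learning_rate=0.2):
    """Predict each next observation; nudge the estimate by the error."""
    estimate = 0.0
    errors = []
    for obs in observations:
        error = obs - estimate             # surprise: mismatch with prediction
        errors.append(abs(error))
        estimate += learning_rate * error  # tune the model to reduce it
    return estimate, errors

# A constant signal: surprise should decay as the model tunes itself.
estimate, errors = run_predictor([10.0] * 50)
```

On a steady input the error shrinks toward zero, which is the loose sense in which a perceptual model "tries to predict as well as it can what you're going to perceive next."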
Speaker 3
31:59
Are we talking about, which is the machine learning here, our perception system or the dream world? Or both?
Speaker 4
32:05
The dream world is the result of the machine learning process of the perceptual system.
Speaker 3
32:11
That's doing the compression.
Speaker 4
32:12
Yes. And the model of you as an agent is a different type of model, but not different in its model-like nature from the model of the ocean, right? Some things are oceans, some things are agents. And 1 of these agents is using your own control model, the output of your model, the things that you perceive yourself as doing.
Speaker 3
32:36
And that is you. What about the fact that like when you're standing with the water on your feet and you're looking out into the vast, like open water of the ocean and then there's a beautiful sunset. And it, well, the fact that it's beautiful and then maybe you have like friends or a loved 1 with you and like you feel love.
Speaker 3
33:00
What is that? Is that the dream world or what is that?
Speaker 4
33:02
Yes, it's all happening inside of the dream.
Speaker 3
33:05
Okay, but see, the word dream makes it seem like it's not real.
Speaker 4
33:11
No, of course it's not real. The physical universe is real, but the physical universe is incomprehensible and it doesn't have any feeling of realness. The feeling of realness that you experience gets attached to certain representations where your brain assesses, this is the best model of reality that I have.
Speaker 3
33:28
So the only thing that's real to you is the thing that's happening at the very base of reality, like the...
Speaker 4
33:36
Yeah, for something to be real, it needs to be implemented. So the model that you have of reality is real in as far as it is a model, right? It's an appropriate description of the world to say that there are models that are being experienced.
Speaker 4
33:51
But the world that you experience is not necessarily implemented. There is a difference between a reality, a simulation, and a simulacrum. The reality that we're talking about is something that fully emerges over a causally closed lowest layer. And the idea of physicalism is that we are in that layer, that basically our world emerges over that.
Speaker 4
34:13
Every alternative to physicalism is a simulation theory which basically says that we are in some kind of simulation universe and the real world needs to be in a parent universe of that, where the actual causal structure is, right? And when you look at the ocean in your own mind, you are looking at a simulation that explains what you're going to see next.
Speaker 3
34:31
And- So we are living in a simulation.
Speaker 4
34:32
Yes, but the simulation generated by our own brains. Yeah. And this simulation is different from the physical reality because the causal structure that is being produced, what you are seeing is different from the causal structure of physics.
Speaker 4
34:44
But consistent. Hopefully, if not, then you are going to end up in some kind of institution where people will take care of you, because your behavior will be inconsistent, right? Your behavior needs to work in such a way that it's interacting with an accurately predictive model of reality. And if your brain is unable to make your model of reality predictive, you will need help.
Speaker 3
35:05
So what do you think about Donald Hoffman's argument that the dream world doesn't have to be consistent with what he calls the interface to the actual physical reality? I think he makes an evolutionary argument, which is that it could be an evolutionary advantage to have the dream world drift away from physical reality.
Speaker 4
35:30
I think that only works if you have tenure. As long as you're still interacting with the ground truth, your model needs to be somewhat predictive.
Speaker 3
35:39
Well, in some sense, humans have achieved a kind of tenure in the animal kingdom. Yeah.
Speaker 4
35:45
And at some point we became too big to fail, so we became postmodernists.
Speaker 3
35:51
It all makes sense now.
Speaker 4
35:52
It's a version of reality that we like.
Speaker 3
35:55
Oh man. Okay.
Speaker 4
35:57
Yeah, but basically you can do magic. You can change your assessment of reality, but eventually reality is going to come bite you in the ass if it's not predictive.
Speaker 3
36:06
Do you have a sense of what is that base layer of physical reality? You have these attempts at the theories of everything, the very, very small of like string theory or what Stephen Wolfram talks about with the hypergraphs, these are these tiny, tiny, tiny, tiny objects. And then there is more like quantum mechanics that's talking about objects that are much larger, but still very, very, very tiny.
Speaker 3
36:36
Do you have a sense of where the tiniest thing is that is like at the lowest level? The turtle at the very bottom. Do you have a sense what
Speaker 4
36:46
that turtle is?
Speaker 3
36:46
I don't think that
Speaker 4
36:46
you can talk about where it is because space is emergent over the activity of these things. So, space coordinates only exist in relation to the other things. And so, you could in some sense abstract it into locations that can hold information and trajectories that the information can take between the different locations.
Speaker 4
37:06
And this is how we construct our notion of space. And physicists usually have a notion of space that is continuous. And this is a point where I tend to agree with people like Stephen Wolfram, who are very skeptical of the geometric notions. I think that geometry is the dynamics of too many parts to count.
Speaker 4
37:28
And there are no infinities. If there were true infinities, you would be running into contradictions, which is in some sense what Gödel and Turing discovered in response to Hilbert's call.
Speaker 3
37:39
There are no infinities.
Speaker 4
37:41
There are no infinities.
Speaker 3
37:42
Infinity is fake.
Speaker 4
37:42
There is unboundedness, but if you have a language that talks about infinity, at some point the language is going to contradict itself, which means it's no longer valid. In order to deal with infinities in mathematics, you have to postulate their existence initially. You cannot construct the infinities.
Speaker 4
37:59
And that's an issue, right? You cannot build up an infinity from 0, but in practice, you never do this, right? When you perform calculations, you only look at the dynamics of too many parts to count. And usually these numbers are not that large.
Speaker 4
38:13
They're not googols or something. The infinities that we are dealing with in our universe are, mathematically speaking, relatively small integers. And still, what we're looking at is dynamics where a trillion things behave similarly to a hundred trillion things or something that is very, very large, because they're converging. And these convergent dynamics, these operators, this is what we deal with when we are doing the geometry.
Speaker 4
38:44
Right? Geometry is stuff where we can pretend that it's continuous, because if we subdivide the space sufficiently fine-grained, these things approach a certain dynamic. And this approached dynamic is what we mean by it. But I don't think that infinity would work in the sense that you would know the last digit of pi and that you would have a physical process that rests on knowing the last digit of pi.
Speaker 3
39:09
Yeah, that could be just a peculiar quirk of human cognition, that we like discrete. Discrete makes sense to us. Infinity doesn't, in terms of our intuitions.
Speaker 4
39:19
No, the issue is that everything that we think about needs to be expressed in some kind of mental language, not necessarily a natural language, but some kind of mathematical language that your neurons can speak that refers to something in the world. And what we have discovered is that we cannot construct a notion of infinity without running into contradictions, which means that such a language is no longer valid. And I suspect this is what made Pythagoras so unhappy when somebody came up with the notion of irrational numbers before it was time, right?
Speaker 4
39:50
There's this myth that he had this person killed when he blabbed out the secret that not everything can be expressed as a ratio between 2 numbers, but there are numbers between the ratios. The world was not ready for this, And I think he was right. That has confused mathematicians very seriously because these numbers are not values, they are functions. And so you can calculate these functions to a certain degree of approximation, but you cannot pretend that pi has actually a value.
Speaker 4
40:16
Pi is a function that would approach this value to some degree, but nothing in the world rests on knowing pi.
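The claim that pi is a function rather than a value can be made concrete with a short sketch. This is the editor's illustration, not from the conversation; the Nilakantha series is just one standard convergent series. The procedure yields ever-better approximations without ever producing a final digit.

```python
# Approximate pi with the Nilakantha series:
#   pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
# Each call computes the function a bit further; there is no "last digit".

def pi_approx(terms):
    total = 3.0
    sign = 1.0
    for k in range(2, 2 * terms + 1, 2):
        total += sign * 4.0 / (k * (k + 1) * (k + 2))
        sign = -sign
    return total

# More terms give a better approximation, but the process only ever
# approaches the value to some degree, exactly as described above.
```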
Speaker 3
40:26
How important is this distinction between discrete and continuous for you in getting to the bottom of things? Because in a discussion of your favorite flavor of the theory of everything, there are a few on the table. So there's string theory, there's loop quantum gravity, which focuses on 1 particular unification.
Speaker 3
40:53
There's just a bunch of favorite flavors of different people trying to propose a theory of everything, Eric Weinstein and a bunch of people throughout history. And then of course, Stephen Wolfram, who I think is 1 of the only people doing a discrete one.
Speaker 4
41:10
No, no, there's a bunch of physicists who do this right now.
Speaker 4
41:16
And like Toffoli and Tomasello. And digital physics is something that is, I think, growing in popularity.
Speaker 4
41:28
But the main reason why this is interesting is because it's important sometimes to settle disagreements. I don't think that you need infinities at all, and you never needed them. You can always deal with very large numbers and you can deal with limits, right?
Speaker 4
41:43
You are fine with doing that. You don't need any kind of infinity. You can build your computer algebra systems just as well without believing in infinity in the first place.
Speaker 3
41:51
So you're okay with limits?
Speaker 4
41:52
Yeah. So basically a limit means that something is behaving pretty much the same if you make the number larger, right? Because it's converging to a certain value, and at some point the difference becomes negligible and you can no longer measure it. And in this sense, if you have an N-gon which has enough corners, then it's going to behave like a circle at some point, right?
Speaker 4
42:15
And it's only going to be in some kind of esoteric thing that cannot exist in a physical universe that you would be talking about this perfect circle. And now it turns out that it also wouldn't work in mathematics because you cannot construct mathematics that has infinite resolution without running into contradictions. So that is itself not that important because we never did that, right? It's just a thing that some people thought we could.
Speaker 4
42:38
And this leads to confusion. So for instance, Roger Penrose uses this as an argument to say that there are certain things that mathematicians can do dealing with infinities and by extension our mind can do, that computers cannot do.
Speaker 3
42:55
Yeah, he talks about how the human mind can do certain mathematical things that the computer, as defined by the universal Turing machine, cannot. So that has to do with infinity.
Speaker 4
43:08
Yes, it's 1 of the things. So he is basically pointing at the fact that there are things that are possible in the mathematical mind and in pure mathematics that are not possible in machines that can be constructed in the physical universe. And because he's an honest guy, he thinks this means that present physics cannot explain operations that happen in our mind.
Speaker 3
43:34
Do you think he's right on the... So let's leave his discussion of consciousness aside for the moment. Do you think he's right about just what he's basically referring to as intelligence?
Speaker 3
43:46
So is the human mind fundamentally more capable as a thinking machine than a universal Turing machine? No. But, so he's suggesting that, right?
Speaker 4
43:58
So our mind is actually less than a Turing machine. There can be no Turing machine because it's defined as having an infinite tape and we always only have a finite tape. Our minds can only perform finitely many operations.
Speaker 3
44:10
Yeah, he thinks so. He's saying it can do the kind of computation the Turing machine cannot.
Speaker 4
44:14
And that's because he thinks that our minds can do operations that have infinite resolution in some sense. And I don't think that's the case. Our minds are just able to discover these limit operators over too many parts to count.
Speaker 3
44:30
What about his idea that consciousness is more than a computation, so it's more than something that a Turing machine can do? So again, he's saying that there's something special about our mind that cannot be replicated in a machine.
Speaker 4
44:49
The issue is that I don't even know how to construct a language to express this statement correctly.
Speaker 3
45:00
Well, the basic statement is, there's a human experience that includes intelligence, that includes self-awareness, that includes the hard problem of consciousness. And the question is, can that be fully simulated in the computer, in the mathematical model of the computer as we understand it today? Roger Penrose says no.
Speaker 3
45:25
So the universal Turing machine cannot simulate the universe.
Speaker 4
45:32
So the interesting question is, and you have to ask him this, why not? What is the specific thing that cannot be modeled? And when I looked at his writings, and I haven't read all of it, but when I read, for instance, the section that he writes in the introduction to The Road to Reality, the thing that he specifically refers to is the way in which human minds deal with infinities.
Speaker 4
45:57
And that itself can, I think, easily be deconstructed. A lot of people feel that our experience cannot be explained in a mechanical way and therefore it needs to be different. And I concur, our experience is not mechanical. Our experience is simulated.
Speaker 4
46:16
It exists only in a simulation. Only a simulation can be conscious. Physical systems cannot be conscious because they're only mechanical. Cells cannot be conscious.
Speaker 4
46:24
Neurons cannot be conscious. Brains cannot be conscious. People cannot be conscious, as far as you understand them as physical systems. What can be conscious is the story of the system in the world where you write all these things into the story.
Speaker 4
46:39
You have experiences for the same reason that a character in a novel has experiences because it's written into the story. And now the system is acting on that story. And it's not a story that is written in a natural language, it's written in a perceptual language, in this multimedia language of the game engine. And in there, you write in what kind of experience you have and what this means for the behavior of the system, for your behavior tendencies, for your focus, for your attention, for your experience of valence and so on.
Speaker 4
47:06
And this is being used to inform the behavior of the system in the next step. And then the story updates with the reactions of the system and the changes in the world and so on. And you live inside of that model. You don't live inside of the physical reality.
Speaker 3
47:23
And I mean, just to linger on it, like you see, okay. Yeah, it's in the perceptual language, the multimodal perceptual language. That's the experience.
Speaker 3
47:34
That's what consciousness is within that model, within that story. But do you have agency? When you play a video game, you can turn left and you can turn right in that story. So in that dream world, how much control do you have?
Speaker 3
47:54
Is there such a thing as you in that story? Like, is it right to say everybody else is an NPC, and then there's the main character, and you're controlling the main character? Or is that an illusion? Is there a main character that you're controlling? I'm getting to the free will point.
Speaker 4
48:14
Imagine that you are building a robot that plays soccer. And you've been to MIT computer science, you basically know how to do that. And so you would say the robot is an agent that solves a control problem, how to get the ball into the goal. And it needs to perceive the world, and the world is disturbing it in trying to do this, right? So it has to control many variables to make that happen, and to project itself and the ball into the future, and understand its position on the field relative to the ball and so on, and the position of its limbs in the space around it and so on.
Speaker 4
48:46
So it needs to have an adequate model that abstracts reality in a useful way. And you could say that this robot does have agency over what it's doing in some sense. And the model is going to be a control model. And inside of that control model, you can possibly get to a point where this thing is sufficiently abstract to discover its own agency.
Speaker 4
49:09
Our current robots don't do that. They don't have a unified model of the universe, but there's not a reason why we shouldn't be getting there at some point in the not-too-distant future. And once that happens, you will notice that the robot tells a story about a robot playing soccer. So the robot will experience itself playing soccer in a simulation of the world that it uses to construct a model of the locations of its legs and limbs in space on the field with relationship to the ball.
Speaker 4
49:39
And it's not going to be at the level of the molecules, it will be an abstraction that is exactly at the level that is most suitable for path planning of the movements of the robot. Right, it's going to be a high-level abstraction, but a very useful 1 that is as predictive as you can make it. And inside of that story, there is a model of the agency of that system. So this model can accurately predict that the contents of the model are going to be driving the behavior of the robot in the immediate future.
Speaker 3
50:08
But there's the hard problem of consciousness, and there's also a subjective experience of free will, and I'm not sure where the robot gets that, where that little leap happens. Because for me right now, everything I imagine with that robot, as it gets more and more sophisticated, the agency still comes from the programmer of the robot, from what was programmed in.
Speaker 4
50:35
You could probably do an end-to-end learning system. You maybe need to give it a few priors, so you nudge the architecture in the right direction so that it converges more quickly, but ultimately discovering the suitable hyperparameters of the architecture is also only a search process, right? And in our case that search process was evolution, which has informed our brain architecture, so we can converge in a single lifetime on useful interaction with the world and the formation of a self-model.
Speaker 3
51:00
The problem is if we define hyperparameters broadly, so it's not just the parameters that control this end-to-end learning system, but the entirety of the design of the robot. You have to remove the human completely from the picture, and then in order to build the robot you have to create an entire universe. Because you can't just shortcut evolution, you have to go from the very beginning.
Speaker 3
51:24
In order for it to have that, because I feel like there's always a human pulling the strings. And that makes it seem like the robot is cheating, being given a shortcut to consciousness.
Speaker 4
51:35
When you are looking at the current Boston Dynamics robots, it doesn't look as if there is somebody pulling the strings; it doesn't look like cheating anymore.
Speaker 3
51:42
Okay, so let's go there, because I got to talk to you about this. So obviously with the case of Boston Dynamics, as you may or may not know, it's always either hard-coded or remote-controlled. There's no intelligence.
Speaker 4
51:55
I don't know how the current generation of Boston Dynamics robots works, but what I've been told about the previous ones was that it's basically all cybernetic control, which means you still have feedback mechanisms and so on, but it's not deep learning for the most part, as it's currently done. It's for the most part just identifying a control hierarchy that is congruent to the limbs that exist and the parameters that need to be optimized for the movement of these limbs. And then there is a convergence process.
Speaker 4
52:24
So it's basically just regression that you would need to control this. But again, I don't know whether that's true. That's just what I've been told about how that works.
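The kind of cybernetic control described here, feedback loops keeping a variable in bounds, can be sketched minimally. This is an editor's toy, not Boston Dynamics code; the gain and step count are invented. It shows a proportional controller that repeatedly corrects toward a setpoint.

```python
# Minimal negative-feedback loop: each step applies a correction
# proportional to the current error. Gain and step count are arbitrary.

def control_loop(setpoint, state, gain=0.5, steps=50):
    for _ in range(steps):
        error = setpoint - state
        state += gain * error  # negative feedback shrinks the error
    return state

# Wherever the state starts, it converges to the setpoint; a hierarchy
# of such loops is the basic picture of cybernetic control sketched above.
```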
Speaker 3
52:31
We have to separate several levels of discussion here. So the only thing they do is pretty sophisticated control with no machine learning, in order to maintain balance or to right itself. It's a control problem in terms of using the actuators: when it's pushed, or when it steps on a thing that's uneven, how to always maintain balance. And there's a tricky set of heuristics around that, but that's the only goal.
Speaker 3
53:00
Everything you see Boston Dynamics doing that to us humans is compelling is some kind of higher-order movement: turning, wiggling its butt, jumping back on its 2 feet, dancing. Dancing is even worse, because dancing is hard-coded in. It's choreographed by humans with choreography software. So of all that high-level movement, there's nothing that you can call AI; there aren't even basic heuristics.
Speaker 3
53:39
It's all hard coded in. And yet we humans immediately project agency onto them, which is fascinating.
Speaker 4
53:48
So the robot here doesn't necessarily have agency. What it has is cybernetic control. And the cybernetic control means you have a hierarchy of feedback loops that keep the behavior in certain boundaries, so the robot doesn't fall over and it's able to perform the movements.
Speaker 4
54:04
And the choreography cannot really happen with motion capture because the robot would fall over because the physics of the robot, the weight distribution and so on, is different from the weight distribution in the human body. So if you were using the directly motion-captured movements of a human body to project it into this robot, it wouldn't work. You can do this with a computer animation, it will look a little bit off, but who cares? But if you want to correct for the physics, you need to basically tell the robot where it should move its limbs, and then the control algorithm is going to approximate a solution that makes it possible within the physics of the robot.
Speaker 4
54:40
And you have to find the basic solution for making that happen, and there's probably going to be some regression necessary to get the control architecture to make these movements.
Speaker 3
54:51
But those 2 layers are separate. Yes. So the thing, the higher level instruction of how you should move and where you should move, is at a higher level.
Speaker 4
54:59
Yes, I expect that the control level of these robots at some level is dumb. This is just the physical control movement, the motor architecture. But it's a relatively smart motor architecture.
Speaker 4
55:10
It's just that there is no high-level deliberation about what decisions to make necessarily. Right?
Speaker 3
55:14
But see, it doesn't feel like free will or consciousness.
Speaker 4
55:17
No, no, that was not where I was trying to get to. I think that in our own body, we have that too. So we have a certain thing that is basically just a cybernetic control architecture that is moving our limbs.
Speaker 4
55:31
And deep learning can help in discovering such an architecture if you don't have it in the first place. If you already know your hardware, you can maybe handcraft it. But if you don't know your hardware, you can search for such an architecture. And this work already existed in the 80s and 90s.
Speaker 4
55:46
People were starting to search for control architectures by motor babbling and so on, and just use reinforcement learning architectures to discover such a thing. And now imagine that you have the cybernetic control architecture already inside of you, and you extend this a little bit. So you are seeking out food, for instance, or rest, and so on. And you get to have a baby at some point.
Speaker 4
56:11
And now you add more and more control layers to this. And the system is reverse engineering its own control architecture and builds a high-level model to synchronize the pursuit of very different conflicting goals.
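The motor babbling idea mentioned above can be sketched as follows. This is an editor's toy example, not one of the 1980s/90s systems; the actuator, its gain, and the trial count are all invented. The agent tries random actions, records the outcomes, and fits a model of its own actuator that it can then invert.

```python
import random

def actuator(action, gain=2.0):
    """Hidden plant dynamics that the agent does not know in advance."""
    return gain * action

def babble_and_learn(trials=100):
    """Perform random actions ("babbling") and fit the actuator gain by
    least squares from the observed (action, outcome) pairs."""
    random.seed(0)
    actions, outcomes = [], []
    for _ in range(trials):
        a = random.uniform(-1.0, 1.0)
        actions.append(a)
        outcomes.append(actuator(a))
    est_gain = (sum(a * y for a, y in zip(actions, outcomes))
                / sum(a * a for a in actions))
    return est_gain

# Once the gain is learned, the inverse model action = target / est_gain
# lets the system choose actions that reach a desired outcome.
est_gain = babble_and_learn()
```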
Speaker 4
56:26
And this is how I think you get to purposes. Purposes are models of your goals. Your goals may be intrinsic as a result of the different set point violations that you have, hunger and thirst for very different things, and rest and pain avoidance and so on.
Speaker 4
56:39
And you put all these things together, and eventually you need to come up with a strategy to synchronize them all. And you don't just do this alone by yourself, because we are state-building organisms. We cannot function in isolation, the way that Homo sapiens is set up. So our own behavior only makes sense when you zoom out very far, into a society, or even into ecosystemic intelligence on the planet and our place in it.
Speaker 4
57:06
So the individual behavior only makes sense in these larger contexts. And we have a number of priors built into us, so we are behaving as if we were acting on these high-level goals pretty much right from the start. And eventually, in the course of our life, we can reverse engineer the goals that we're acting on, what our higher level purposes actually are.
Speaker 4
57:25
And the more we understand that, the more our behavior makes sense. But this is all, at this point, complex stories within stories that are driving our behavior.
Speaker 3
57:34
Yeah, I just don't know how big of a leap it is to start creating a system that's able to tell stories within stories. Like how big of a leap that is from where Boston Dynamics currently is, or any robot that's operating in the physical space. And that leap might be big if it requires solving the hard problem of consciousness, which is telling a hell of a good story.
Speaker 4
58:01
I suspect that consciousness itself is relatively simple. What's hard is perception and the interface between perception and reasoning. There's, for instance, the idea of the consciousness prior by Yoshua Bengio that would be built into such a system.
Speaker 4
58:18
And what he describes, and I think that's accurate, is that our own model of the world can be described through something like an energy function. The energy function is modeling the contradictions that exist within the model at any given point. And you try to minimize these contradictions, the tensions in the model. And to do this, you need to sometimes test things.
Speaker 4
58:41
You need to conditionally disambiguate figure and ground. You need to distinguish whether this is true or that is true, and so on. Eventually you get to an interpretation, but you will need to manually depress a few points in your model to let it snap into a state that makes sense. And this function that tries to get the biggest dip in the energy function in your model, according to Yoshua Bengio, is related to consciousness.
Speaker 4
59:02
It's a low dimensional discrete function that tries to maximize this dip in the energy function.
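This is not Bengio's consciousness prior itself, but the underlying picture of an energy function that measures contradictions, and a process that lets the model "snap" into a consistent state, can be illustrated with a tiny Hopfield-style sketch. The units, weights, and update rule below are the editor's invention.

```python
# Tiny Hopfield-style settling: units are +1/-1, weights encode which
# units should agree, and the energy measures violated constraints.

def energy(state, weights):
    return -0.5 * sum(weights[i][j] * state[i] * state[j]
                      for i in range(len(state))
                      for j in range(len(state)))

def settle(state, weights):
    """Greedily flip units while a flip lowers the energy, i.e. until
    the interpretation snaps into a state with no remaining
    contradictions that a single flip could resolve."""
    improved = True
    while improved:
        improved = False
        for i in range(len(state)):
            flipped = list(state)
            flipped[i] = -flipped[i]
            if energy(flipped, weights) < energy(state, weights):
                state = flipped
                improved = True
    return state

# Two units coupled to agree: starting from a contradictory assignment,
# settling resolves the ambiguity into a low-energy consistent state.
```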
Speaker 3
59:09
Yeah, I think I would need to dig into details because I think the way he uses the word consciousness is more akin to like self-awareness, like modeling yourself within the world, as opposed to the subjective experience, the hard problem.
Speaker 4
59:24
No, it's not even that the self is in the world. The self is the agent, and you don't need to be aware of yourself in order to be conscious. The self is just a particular content that you can have, but you don't have to have.
Speaker 4
59:35
You can be conscious, for instance, in a dream at night or during a meditation state, where you don't have a self. Right. You're just aware of the fact that you are aware. And what we mean by consciousness in the colloquial sense is largely this reflexive self-awareness: that we become aware of the fact that we are paying attention, that we are the thing that pays attention.
Speaker 3
59:59
We are the thing that pays attention.