Speaker 1
00:00
Today we have Sterling Anderson. He's the co-founder of Aurora, an exciting new self-driving car company. Previously, he was the head of the Tesla Autopilot team that brought both the first and second generation Autopilot to life. Before that, he did his PhD at MIT working on shared human-machine control of ground vehicles.
Speaker 1
00:21
The very thing I've been harping on over and over in this class. And now he's back at MIT to talk with us. Please give him a warm welcome. Thank you.
Speaker 1
00:31
Thank you. Thank you. Thank you. Thank you.
Speaker 1
00:34
Thank you.
Speaker 2
00:35
Thank you. It's good to be here. I was telling Lex just before, I think it's been a little while since I've been back to the Institute, and it's great to be here.
Speaker 2
00:44
I want to apologize in advance. I've just landed this afternoon from Korea via Germany, where I've been spending the last week. And so I may speak a little slower than normal; please bear with me. If I become incoherent or slur my speech, somebody flag it to me and we'll try to make corrections.
Speaker 2
01:03
So tonight I thought I'd chat with you a little bit about my journey over the last decade. It's been just over 10 years since I was at MIT. A lot has changed, a lot has changed for the better in the self-driving community. And I've been privileged to be a part of many of those changes and so I wanted to talk with you a little bit about some of the things that I've learned, some of the things that I've experienced.
Speaker 2
01:24
And then maybe end by talking about sort of where we go from here and what the next steps are, both for the industry at large, but also for the company that we're building, which, as Lex mentioned, is called Aurora. To start out with, there are a few key phases or transitions in my journey over the last 10 years. As Lex mentioned, when I started at MIT, I worked with Karl Iagnemma, Emilio Frazzoli, John Leonard, and a few others on some of these shared adaptive automation approaches. I'll talk a little bit about those.
Speaker 2
02:03
From there I spent some time at Tesla, where I first led the Model X program as we finished the development and ultimately launched it. I then took over the Autopilot program, where we introduced a number of new features, both active safety and enhanced convenience features, from Autosteer to adaptive cruise control, that we were able to refine in a few unique ways, and we'll talk a little bit about that. And then from there, in December of last year, of 2016, I guess now, we started a new company called Aurora. And I'll tell you a little bit about that.
Speaker 2
02:42
So to start out with, when I came to MIT, it was 2007. The DARPA Urban Challenge was well underway at that stage. And one of the things that we wanted to do was find a way to address some of the safety issues in human driving earlier than full self-driving potentially could. And so we developed what became known as the Intelligent Co-Pilot.
Speaker 2
03:05
What you see here is a simulation of that operating. I'll tell you a little bit more about that in just a second. But to explain a little bit about the methodology: the innovation, the key approach that we took that was slightly different from what traditional planning and control approaches were doing, was that instead of designing in path space for the robot, we instead found a way to identify, plan, optimize, and design a controller subject to a set of constraints rather than paths. And so what we were doing is looking for homotopies through an environment.
Speaker 2
03:41
So imagine for a moment an environment that's pockmarked by objects, by other vehicles, by pedestrians, etc. If you were to create the Voronoi diagram through that environment, you would have a set of unique path classes, or homotopies (continuously deformable paths), that will take you from one location to another through it. If you then take its dual, which is the Delaunay triangulation of that environment, presuming that you've got convex obstacles, you can tile those triangles together rather trivially to create a set of homotopies and the transitions across which those paths stake out a given set of options for the human. It turns out this tends to be a more intuitive way of imposing certain constraints on human operation than enforcing that the ego vehicle stick to some arbitrary position within some distance of a safe path.
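A minimal sketch of that corridor-enumeration idea, not the original research code: it triangulates a handful of made-up obstacle positions with SciPy's Delaunay routine and builds the dual adjacency graph, whose distinct walks correspond to distinct homotopy classes of paths through the obstacles.

```python
# Illustrative only: obstacle coordinates are invented, and real obstacles
# would be convex regions rather than points.
import numpy as np
from scipy.spatial import Delaunay

obstacles = np.array([[2.0, 1.0], [4.0, 3.0], [6.0, 1.5], [3.0, 5.0]])
tri = Delaunay(obstacles)

# Dual of the triangulation: each triangle is a node, and two triangles are
# connected if they share an edge. A walk through this graph crosses a
# sequence of shared edges ("gates"); each distinct gate sequence corresponds
# to a different homotopy class of paths across the field.
adjacency = {
    i: [int(n) for n in neighbors if n != -1]
    for i, neighbors in enumerate(tri.neighbors)
}

print("triangles:", tri.simplices.tolist())
print("dual-graph adjacency:", adjacency)
```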
Speaker 2
04:42
You instead look to enforce only that the state of the vehicle remain within a constraint-bounded n-dimensional tube in state space. Those constraints can be spatial: imagine for a moment the edges of the roadway, or circumventing various objects in the roadway. Imagine them also being dynamic, so limits of tire friction impose limits on side-slip angles. Using that, what we did is find a way to create those homotopies and forward-simulate the trajectory of the vehicle, given its current state and some optimal set of control inputs that would optimize its stability through that.
Speaker 2
05:24
We used model predictive control in that work. And then, taking that forward-simulated trajectory, we computed some metric of threat. For instance, if the objective function maximized stability, or minimized some of these parameters like wheel side slip, then wheel side slip is a fairly good indication of how threatening that optimal maneuver is becoming. And so what we did is then use that to modulate control between the human and the car, such that should the car ever find itself in a state where that forward-simulated optimal trajectory is very near the limits of what the vehicle can actually handle, we will have transitioned control fully to the vehicle, to the automated system, so that it can avoid an accident, and then it transitions back in some manner.
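A rough sketch of that threat computation, with stated assumptions: the original work used constrained model predictive control with a proper tire model, whereas this toy version forward-simulates a kinematic bicycle model under a candidate best-case steering sequence and scores threat as the peak side slip relative to an assumed handling limit. The wheelbase and slip limit below are illustrative, not values from the talk.

```python
import numpy as np

L = 2.7            # wheelbase [m], illustrative
LR = L / 2.0       # CG-to-rear-axle distance [m], illustrative
BETA_LIM = 0.12    # side-slip angle treated as the handling limit [rad], assumed

def forward_simulate_threat(state, steer_seq, dt=0.05):
    """Roll a kinematic bicycle model forward under a candidate steering
    sequence (a stand-in for the MPC-optimal inputs in the original work)
    and return threat: peak side slip normalized by the assumed limit,
    where 1.0 means the best-case maneuver is at the edge of the envelope."""
    x, y, yaw, v = state
    peak_slip = 0.0
    for delta in steer_seq:
        beta = np.arctan(LR / L * np.tan(delta))      # body side-slip angle
        peak_slip = max(peak_slip, abs(beta))
        x += v * np.cos(yaw + beta) * dt
        y += v * np.sin(yaw + beta) * dt
        yaw += v * np.cos(beta) * np.tan(delta) / L * dt
    return peak_slip / BETA_LIM

# Example: a progressively harder swerve at 20 m/s
threat = forward_simulate_threat((0.0, 0.0, 0.0, 20.0),
                                 steer_seq=np.linspace(0.0, 0.3, 20))
print(round(threat, 2))
```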
Speaker 2
06:16
We played with a number of different methods of transitioning this control to ensure that we didn't throw off the human's mental model, which was one of the key concerns. We also wanted to make sure that we were able to arrest accidents before they happened. What you see here is a simulation that was fairly faithful to the behavior we saw in test drivers up in Dearborn, Michigan. Ford provided us with a Jaguar S-Type to test this on.
Speaker 2
06:50
And what we did, so what you see here, is that there is a blue vehicle and a grey vehicle. In both cases we have a poorly tuned driver model, in this case a pure pursuit controller with a fairly short look-ahead, shorter than would be appropriate given this scenario and these dynamics.
Speaker 2
07:07
The grey vehicle is without the intelligent copilot in the loop. You'll notice that obviously the driver becomes unstable, loses control and leaves the safe roadway. The co-pilot, remember, is interested not in following any given path. It doesn't care where the vehicle lands on this roadway provided it remains inside the road.
Speaker 2
07:31
In the blue vehicle's case, it's the exact same human driver model, now with the co-pilot in the loop. You'll notice that as this scenario continues, what you see here on the left in this green bar is the portion of available control authority that's being taken by the automated system. You'll notice that it never exceeds half of the available control, which is to say that the steering inputs received by the vehicle end up being a blend of what the human and what the automation are providing. And what results is a path for the blue vehicle that actually better tracks the human's intended trajectory than even the co-pilot understood.
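A minimal sketch of that arbitration step, assuming a simple linear ramp between two illustrative threat thresholds; the actual modulation law in the research was more involved than this.

```python
def blend_steering(u_human, u_auto, threat, t_low=0.4, t_high=0.9):
    """Blend human and automation steering commands: the arbitration gain K
    ramps linearly from 0 to 1 as threat rises between the two thresholds
    (threshold values here are illustrative, not from the talk)."""
    K = min(max((threat - t_low) / (t_high - t_low), 0.0), 1.0)
    return (1.0 - K) * u_human + K * u_auto

# e.g. threat = 0.65 gives K = 0.5, an even blend of the two steering commands
print(blend_steering(u_human=0.10, u_auto=-0.05, threat=0.65))
```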
Speaker 2
08:15
Again, the copilot is keeping the vehicle stable, is keeping it on the road. The human is hewing to the center line of that roadway. So there were some very interesting things that came out of this. We did a lot of work in understanding what kind of feedback was most natural to provide to a human.
Speaker 2
08:32
Our biggest concern was that if you throw off a human's mental model by causing the vehicle's behaviors to deviate from what they expect it to do in response to various control inputs, that could be a problem. So we tried various things. For instance, one of the key questions that we had early on was: if we couple the computer control and the human control via a planetary gear and allow the human to actually feel a torque backwards from what the vehicle is doing, so the car starts to turn right, the human will feel the wheel turn left and they'll see it start to turn left, is that more confusing or less confusing to a human?
Speaker 2
09:11
And it turns out it depends on how experienced that human is. Some drivers will modulate their inputs based on the torque feedback that they feel through the wheel. For instance, a very experienced driver expects to feel the wheel pull left when they're turning right. Less experienced drivers, however, respond differently to seeing the wheel turn opposite to what the car is supposed to be doing.
Speaker 2
09:33
That's a rather confusing experience. So there were a lot of really interesting human interface challenges that we were dealing with here. We ended up working through a lot of that, developing a number of micro-applications for it. One of those came about when Gill Pratt was leading a DARPA program focused on what they called, at the time, Maximum Mobility and Manipulation.
Speaker 2
10:02
We decided to see what this system could do in application to unmanned ground vehicles. So in this case what you see is a human driver sitting at a remote console, as one would when operating an unmanned vehicle, for instance, in the military. What you see on the top left is the top-down view of what the vehicle sees. I should have played this in repeat mode.
Speaker 2
10:30
There are bounding boxes around the various cones. And what we did is we set up about 20 drivers, 20 test subjects, looking at this control screen and operating the vehicle through this track. And we set this up as a race, with prizes for the winners as one would expect, and penalized them for every barrel they hit. If they knocked over a barrel I think they got a 5-second penalty.
Speaker 2
10:57
If they brushed a barrel they got a 1-second penalty, and they were to cross the field as fast as possible. They had no line-of-sight connection to the vehicle. And we played with some things on their interface. We caused it to drop out occasionally.
Speaker 2
11:10
We delayed it, as one would realistically expect in the field. And then we either engaged or didn't engage the co-pilot, to try to understand what effect that had on their performance and their experience. And what we found was, not surprisingly, that the incidence of collisions declined. It declined by about 72% when the co-pilot was engaged versus when it was not.
Speaker 2
11:32
We also found that even with that 72% decline in collisions, the speed increased by, I'm blanking on the amount, but it was 20 to 30%-ish. Finally, and perhaps most interesting to me, after every run I would ask the driver, and again these were blind tests, they didn't know if the co-pilot was active or not, I would ask them, how much control did you feel like you had over the vehicle? And I found that there was a statistically significant increase of about 12% when the co-pilot was engaged. That is to say, drivers reported feeling more control of the vehicle 12% more of the time when the co-pilot was engaged than when it wasn't.
Speaker 2
12:13
And then I looked at the statistics, and it turns out the average level of control that the co-pilot was taking was 43%. So they were reporting that they felt more in control when in fact they were 43% less in control. Which was interesting, and I think bears a little bit on the human psyche: they were reporting the vehicle was doing what I wanted it to do, maybe not what I told it to do, which was kind of a fun observation. I think the most enjoyable part of this was getting together with the whole group at the end of the study and presenting some of this and seeing some of the reactions.
Speaker 2
12:54
So from there, we looked at a few other areas. Karl Iagnemma and I looked at a few different opportunities to commercialize this. Again, this was years ago, and the industry was in a very different place than it is today. We started a company first called Gimlet, then another called Ride.
Speaker 2
13:15
This is the logo; it may look familiar to you. At the time, we intended to roll this out across various automakers and their operations. At the time, very few saw self-driving as a technology that was really going to impact their business going forward. In fact, even ride-sharing at the time was a fairly new concept that was, I think, to a large degree viewed as unproven.
Speaker 2
13:49
So as I mentioned, in December of last year I co-founded Aurora with a couple of folks who have been making significant progress in this space for many years: Chris Urmson, who formerly led Google's self-driving car team, and Drew Bagnell, a professor at Carnegie Mellon University who is exceptional in applied machine learning, was one of the founding members of Uber's self-driving car team, and led autonomy and perception there. We felt like we had a unique opportunity at the convergence of a few things. One, the automotive world has really come to the full-on realization that self-driving, and particularly self-driving together with ride sharing and vehicle electrification, are three vectors that will change the industry.
Speaker 2
14:38
That was something that didn't exist 10 years ago. Two, significant advances have been made in some of these machine learning techniques, in particular deep learning and other neural network approaches; in the computers that run them, with the availability of low-power GPU and TPU options to really do that well; and in sensing technologies, in high-resolution radar and a lot of the LiDAR development. So it's really a unique time in the self-driving world. A lot of these things are really coming together now.
Speaker 2
15:15
And we felt like by bringing together an experienced team, we had an interesting opportunity to build from a clean sheet a new platform, a new self-driving architecture, that leveraged the latest advances in applied machine learning together with our experience of where some of the pitfalls tend to be down the road as you develop these systems. Because you don't tend to see them early on; they tend to express themselves as you get into the long tail of corner cases that you end up needing to resolve. So we've built that team. We have offices in Palo Alto, California and Pittsburgh, Pennsylvania.
Speaker 2
15:54
We've got fleets of vehicles operating in both Palo Alto and Pittsburgh. A couple of weeks ago we announced that Volkswagen Group, one of the largest automakers in the world, and Hyundai Motor Company, also one of the largest automakers in the world, have both partnered with Aurora. We will be developing, and are developing, with them a set of platforms, and ultimately will scale our technology on their vehicles across the world. And on one of the important elements of building this: I asked Lex before coming out here what this group would be most interested in hearing.
Speaker 2
16:28
One of the things that he mentioned was: what does it take to build a new company in a space like this? One of the things that we found very important was a business model that was non-threatening to others. We recognize that our strengths and our experience, over the last decade in my case and almost two in Chris's, really lie in the development of the self-driving systems. Not in building vehicles, though I have had some experience there, but in developing the self-driving system.
Speaker 2
16:59
And so our feeling was that if our mission is to get this technology to market as quickly, as broadly, and as safely as possible, that mission is best served by playing our position and working well with others who can play theirs. Which is why you see the model that we've adopted, and you'll now start to see some of the fruits of that through these partnerships with some of these automakers. So at the end of the day, our aspiration and our hope is for this technology to reach the world, because it is so important in increasing safety, in improving access to transportation, and in improving efficiency and the utilization of our roadways and our cities. This is maybe the first talk I've ever given where I didn't start by rattling off statistics about safety and all these other things. If you haven't heard them yet, you should look them up.
Speaker 2
17:47
They're stark. The fact that most vehicles in the United States today have, on average, three parking spaces allocated to them. The amount of land that's taken up across the world in housing vehicles that are used less than 5% of the time. The number of people who don't have access to the transportation they need, I think in the United States the estimate has been somewhere between 6 and 15 million, because they're elderly or disabled or affected by one of many other factors.
Speaker 2
18:21
And so this technology is potentially one of the most impactful for our society in the coming years. It's a tremendously exciting technological challenge. And at the confluence of those two things, I think, is a really unique opportunity for engineers, and others who are not engineers but really want to get involved, to play a role in changing our world going forward. So with that, maybe I'll stop with this and we can go to questions.
Speaker 1
18:52
Let's give Sterling a warm hand.
Speaker 3
18:57
Hi, I'm Wayne. Hello, thanks for coming. I have a question. A lot of self-driving car companies are making extensive use of LiDAR, but you don't see a lot of that with Tesla. I wanted to know if you had any thoughts about that.
Speaker 2
19:10
Yeah, I don't want to talk about Tesla too much; in terms of specifics, anything that wasn't public information, I'm not going to get into. I will say that for Aurora, we believe that the right approach is getting to market quickly and doing so safely. And you get to market most quickly and safely if you leverage multiple modalities, including LiDAR. Just to clarify what's running in the background:
Speaker 2
19:34
These are all just Aurora videos of our cars driving on various test routes.
Speaker 4
19:40
Hi, I'm Luke from the Sloan School. A lot of customers have visceral connections to their automobiles. I was wondering how you see that market, the car enthusiast market, being affected by AVs, and then vice versa, how AVs will be designed around those types of customers.
Speaker 2
19:56
Yeah, that's a good question. Thanks for asking that. I am 1 of those enthusiasts.
Speaker 2
20:00
I very much appreciate being able to drive a car in certain settings. I very much don't appreciate driving in others. I remember distinctly several evenings, almost literally pounding my steering wheel, sitting in Boston traffic, on my way to somewhere. I do the same in San Francisco.
Speaker 2
20:27
I think the opportunity really is to turn personal vehicle ownership and driving into more of a sport and something you do for leisure. A gentleman some time ago asked me, hey, don't you think this is a problem for the country, I think he meant the world, if people don't learn how to drive? That's just something a human should know how to do. My perspective is, it's as much of a problem as people not intrinsically knowing how to ride a horse today.
Speaker 2
21:04
If you want to know how to ride a horse, go ride a horse. If you want to race a car, go to a racetrack, or go out to a mountain road that's been allocated for it. Ultimately, I think there is an important place for that, because I certainly agree with you. I'm very much a vehicle enthusiast myself.
Speaker 2
21:22
But I think there is so much opportunity here in alleviating some of these other problems, particularly in places where it's not fun to drive, that I think there's a place for both. Yeah.
Speaker 5
21:37
Hi, can you hear or do I need to get? Yeah. Congratulations on the partnership that was announced recently, I think.
Speaker 5
21:45
So I have a two-part question. The first one is: we heard last week from, I think it was a gentleman from Waymo, talking about how long they've been working on this autonomous car technology. And you seem to have ramped up extremely fast.
Speaker 5
22:00
So is there a licensing model that you've taken? I mean, how are you able to commercialize the technology in one year?
Speaker 2
22:10
So just to be clear, we're not actually commercializing yet. Just to distinguish: we are partnering and developing vehicles and will ultimately be running pilots, as we announced a week or two ago with the MOIA shuttles. I will distinguish that from broad commercialization of the technology.
Speaker 2
22:31
And I don't want to get too much into the nuances of that business model. I will say that it is one that's done in very close partnership with our automotive partners. Because at the end of the day, they understand their cars, they understand their customers, and they have distribution networks. Our automotive partners, provided they have the right support in developing the self-driving technology, are fairly well positioned to roll it out at scale.
Speaker 5
23:05
So the second part of my question is, again, looking at this pace of adoption and the maturity of technology, do you see an open source model for autonomous cars as they become more and more?
Speaker 2
23:19
Unclear. I'm not convinced that an open source model is what gets to market most quickly. In the long run, it's not clear to me what will happen. I think there will be a handful of successful self-driving stacks that will make it.
Speaker 2
23:41
Nowhere near the number of self-driving companies today, but a handful, I think.
Speaker 6
23:50
Two questions. One is: invariably in new product development, there are typically two types of bottlenecks. There's a technological bottleneck and an economic bottleneck, right? So a technological bottleneck might be, hey, the sensors aren't good enough or the machine learning algorithms aren't good enough, and so on.
Speaker 6
24:08
I'd be interested to hear, and it'll shift obviously over time, what you would say is the current thing where, hey, if this part of the architecture were 10 times better, we would... And then on the economic side, I'd be interested to know, gee, if sensors were 100 times cheaper, then... So I'd be interested to hear your perspective on both.
Speaker 2
24:28
That's a great question. Let me start with the economic side of it, just to get that out of the way, because it's a little bit quicker of an answer. The economics of operating a self-driving vehicle in a shared network today would close; that business case closes even with a high cost of sensors.
Speaker 2
24:49
That is not what's stopping us. And that's part of my answer to the gentleman earlier who asked, you know, should you use LiDAR or not. If your target is to initially deploy these in fleets, you would be wise to start at the top end of the market, develop and deploy a system that's as capable as possible as quickly as possible, and then cost it down over time. And you can do that as computer vision precision and recall increase.
Speaker 2
25:16
Today, they're not good enough, right? And so economically, depending on your model of going to market, and we believe that the right model is through mobility services, you'll cost down the sensors, inevitably. There's no unobtainium in LiDAR units today. There's no reason, fundamentally, that the should-cost of a LiDAR unit will lead you to a $70,000 price point.
Speaker 2
25:46
However, if you build anything in low enough volumes, it's going to be expensive. Many of these things will work their way into the standard automotive process. They'll work their way into Tier 1 suppliers. And when they do, the automotive community has shown itself to be exceptional at driving those costs down, and so I expect them to come way down.
Speaker 2
26:05
To your other question, technological bottlenecks and challenges: one of the key challenges of self-driving is, and remains, that of forecasting the intent and future behaviors of other actors, both in response to one another but also in response to your own decisions and motion. That's a perception problem, but it's something more than a perception problem. It's also a prediction problem, and there are a number of different things that have to come together to solve this.
Speaker 2
26:41
We're excited about some of the tools that we're using, interleaving various modern machine learning techniques throughout the system to do things like projecting our own behaviors, the ones learned for the ego vehicle, onto others, and assuming that they'll behave as we would have had we been in that situation.
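A hedged sketch of that prediction idea, not Aurora's system: forecast another actor by rolling out a stand-in nominal policy from that actor's current state, where the constant-heading policy below is only a placeholder for the ego vehicle's learned driving policy.

```python
import math
from dataclasses import dataclass

@dataclass
class ActorState:
    x: float
    y: float
    heading: float  # [rad]
    speed: float    # [m/s]

def nominal_policy(state: ActorState, dt: float) -> ActorState:
    """Placeholder 'behave as we would' policy: hold heading and speed.
    In the approach described above, this would be the ego vehicle's own
    learned policy applied from the other actor's state."""
    return ActorState(
        x=state.x + state.speed * math.cos(state.heading) * dt,
        y=state.y + state.speed * math.sin(state.heading) * dt,
        heading=state.heading,
        speed=state.speed,
    )

def predict_actor(state: ActorState, horizon_s: float = 3.0, dt: float = 0.1):
    """Forecast an actor's trajectory by assuming it acts as we would,
    leaving off-nominal behavior to be guarded against separately."""
    trajectory = [state]
    for _ in range(int(horizon_s / dt)):
        trajectory.append(nominal_policy(trajectory[-1], dt))
    return trajectory
```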
Speaker 6
26:59
Like an expert system kind of approach, right?
Speaker 2
27:01
Yeah. You assume nominal behavior and you guard against off-nominal, right? But it's very much not a solved problem, I wouldn't say. It's very much what you deal with as you get into that really long tail of development, when you're no longer putting out demonstration videos but you're instead just putting your head down and eking out those final nines; that's the kind of problem you tend to deal with.
Speaker 7
27:32
So this question isn't necessarily about the development of self-driving cars, but more of an ethics question. When you're putting human lives into the hands of software, isn't there always the possibility for outside agents with malicious intent to use it for their own gain? And how do you guys, if you do have a plan, how do you intend to protect against attacks like that?
Speaker 2
27:57
So security is a very real aspect of this that has to be solved. It's a constant game of cat and mouse, and so I think it just requires a very good team and a concerted effort over time. I don't think you solve it once and I certainly wouldn't pretend to have a plan that solves it and is done with it.
Speaker 2
28:25
We try to leverage best practices where we can in the fundamental architecture of the system to make it, in particular key parts of the system, less exposed to the nefarious actions of others. But at the end of the day, it's just a constant development effort.
Speaker 8
28:45
Thank you for being here. So I had a question about what opportunities self-driving cars open up. Since driving has been designed around a human being at the center since the beginning, if you put a computer at the center, what society-wide differences, and maybe even differences within the individual car, open up? Like, you know, could cars go 150 miles an hour on the highway and get places much faster?
Speaker 8
29:09
Would cars look different when a human doesn't need to be paying attention, and stuff like that?
Speaker 2
29:14
Yeah, I think the answer is yes. And that's something that's very exciting. So one of the unique opportunities that automakers in particular have when self-driving technology gets incorporated into their vehicles is that they can do things like differentiate the user experience.
Speaker 2
29:33
They can provide services, augmented reality services, or location services, and many other sorts of things. It opens a window into an entirely new market that automakers haven't historically played in. And it allows them to change the very vehicles themselves. As you mentioned, the interior can change. As we validate some of these self-driving systems and confirm that they do in fact reduce the rate of collisions, as we hope they will, you can start to pull out a lot of the extra mass and other things that we've added to vehicles to make them more passively safe, right?
Speaker 2
30:19
Roll cages, crumple zones, airbags, a lot of these things, presumably in a world where we don't crash, there is much less need for passive safety systems. So yes.
Speaker 3
30:35
Hi, I have a question about the go/no-go test that you conduct for certain features, like the throttle control you mentioned, where you slow down the throttle assuming that the driver has pressed the wrong pedal. When you test, when you decide to launch that feature, how do you know it's definitely going to work in all scenarios, given that your data set might not have been tested against all of them?
Speaker 2
30:55
It's a statistical evaluation in every case, right? You're right, and this is part of the art of self-driving vehicle development: you will never have comprehensively captured every case, every scenario.
Speaker 2
31:15
That is, and some of you may want to correct me on this, I think that's an unbounded set. It may in fact be bounded at some point, but I think it's unbounded. And so you'll never actually have characterized everything.
Speaker 2
31:26
What you will have done, hopefully, if you do it right, is you will have established with a reasonable degree of confidence that you can perform at a level of safety that's better than the average human driver. And once you've reached that threshold and you're confident that you've reached that threshold, I think the opportunity to launch is real and you should seriously consider it.
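A back-of-the-envelope illustration of that kind of statistical argument, with assumed numbers rather than anything from the talk: with zero observed events, the rule of three gives an approximate 95% upper confidence bound on an event rate of 3 divided by the exposure, which you can compare against a human baseline of roughly one fatality per hundred million vehicle miles.

```python
# Assumed figures for illustration only; a real evaluation would cover many
# event types (not just fatalities) and far more careful statistics.
HUMAN_FATAL_RATE = 1.0 / 100_000_000   # ~1 fatality per 100M miles, rough US figure
miles_driven = 500_000_000             # hypothetical autonomous fleet mileage
observed_fatalities = 0

if observed_fatalities == 0:
    # "Rule of three": 95% upper bound on the rate when no events are observed.
    upper_bound = 3.0 / miles_driven
    # The bound drops below the human baseline only after ~300M fatality-free miles.
    print(upper_bound < HUMAN_FATAL_RATE)
```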
Speaker 9
31:49
So thank you for your talk today, first. And my question is: self-driving seems like it could ultimately take over the world to some extent. But just like other technologies today, it opens up new opportunities while also bringing in adverse effects.
Speaker 9
32:07
So how do you respond to the fear and the negative effects that may come one day? And specifically, what do you see as the positive and negative implications of self-driving in the future?
Speaker 2
32:19
Positive and negative implications. So the positive ones I kind of listed, and you can go find your favorite press article and they'll list them as well. The negative ones: in the near term, I do worry a little bit about the displacement of jobs.
Speaker 2
32:41
Not a little bit, this will happen. It happens with every technology like this. I think it's incumbent on us to find a good way of transitioning those who are employed in some of the transportation sectors that will be affected into better work. There are a few opportunities that are interesting in that regard, but I think it's an important thing to start discussing now because it's going to take a few years.
Speaker 2
33:10
And by the time we've got these self-driving systems on the roads really starting to displace that labor, I'd really like to have a new home for it.
Speaker 10
33:19
Hi, I'm Kasia from the Sloan School. My question was more about your business model again, with partnering with both VW and Hyundai, and your perspective on how you were able to effectively do that. Did not one of them want to go sort of exclusive with you? And what was your thought process about that?
Speaker 2
33:39
Yeah, so our mission, as I mentioned, is to get the technology to market broadly and quickly and safely. We have been and remain convinced that the right way to do that is by providing it to as much of the industry as possible. To every automaker who shares our vision and our approach.
Speaker 2
34:02
We were pleased to see that both Volkswagen Group (and I'm assuming you all know the scope of Volkswagen, right? This is a massive automaker) and Hyundai Motor, also very large across Hyundai, Kia, and Genesis, shared our vision of how we should do this, which was important to us.
Speaker 2
34:24
They both shared a keen interest in making a difference at scale through their platforms. Volkswagen has, I think, a very admirable set of initiatives around vehicle electrification and a few other things. Hyundai is doing similar things. And so, for us, it was important that we enable everyone, and that was kind of what Aurora started out to do.
Speaker 12
34:49
Hi, I had a question. I see that a lot of companies are coming up with self-driving cars now, right? And most of the cars are pretty much self-contained; all the technology is bound only to the car.
Speaker 12
35:00
So would we see something like an open network where cars communicate with each other regardless of which company they come from? And would this in any way increase the safety or the performance of vehicles, and stuff like that?
Speaker 2
35:12
Yeah, I think you're getting at vehicle-to-vehicle, vehicle-to-infrastructure type communication. There are efforts ongoing in that, and it's certainly, it's only positive, right? Having that information available to you can only make things better.
Speaker 2
35:27
The challenge has historically been, with vehicle-to-vehicle and in particular vehicle-to-infrastructure or vice versa, that one, it doesn't scale well, and two, it's been slow. It's been much slower in coming than our development. And so when we develop these systems, we develop them without the expectation that those communication protocols are available to us. We'll certainly protect for them, and it will certainly be a benefit once they're here.
Speaker 2
35:56
But until then, there are many hard problems these would have helped with that I would have welcomed 10 years ago. To have a beacon on every traffic light that just told me its state rather than having to perceive it? I would have certainly used that 10 years ago. Now they're less significant, because we've kind of worked our way through a lot of the problems they would have solved.
Speaker 13
36:15
Thank you for your talk. My question is, what's your opinion about the cooperation of self-driving vehicles? So maybe I think if you can control a group of self-driving vehicles at the same time, you can achieve a lot of benefits to the traffic.
Speaker 2
36:31
Yes, that is where a lot of the benefits in infrastructure utilization come from, right? It's in ride sharing with autonomous vehicles. And specifically, the better we understand demand patterns, people movement, and goods movement, the better we can optimally allocate these vehicles to the locations where they're needed.
Speaker 2
36:54
So yes, there's certainly that coordination. This is where, as I mentioned, these three vectors of vehicle electrification, ride sharing (or mobility as a service), and autonomy really come together with a unique value proposition.
Speaker 13
37:10
Okay, thank you.
Speaker 1
37:13
Thank you so much for a great talk and being here. Thank you. Thank you.
Speaker 1
37:16
Thank you.
Speaker 3
37:16
Thank you. Thank you.