At the Intersection of AI, Governments, and Google - Tim Hwang

S1

Speaker 1

00:00

All right, everyone. So today we have Tim Hwang. And we are live from Tim Hwang's apartment in San Francisco. All right, man.

S1

Speaker 1

00:08

I think the easiest way to do this is just to introduce yourself.

S2

Speaker 2

00:11

OK, cool. So thanks for having me on the show, Craig. My name is Tim Hwang.

S2

Speaker 2

00:15

I'm a global public policy lead on AI and machine learning for Google.

S1

Speaker 1

00:20

And so what do you do for your job?

S2

Speaker 2

00:22

So public policy is a pretty fun job. It's a combination of a couple of different things. On 1 hand, I work a lot with sort of governments and regulators and civil society, trying to figure out actually what Google's position should be on a whole range of issues, everything from whether or not machine learning is going to take all the jobs to whether or not we can make sure that these systems are fair and non-discriminatory.

S2

Speaker 2

00:47

And then internally, I work with product teams and researchers to kind of keep them apprised of what's happening on the political scene worldwide.

S1

Speaker 1

00:54

Okay. And so what does that mean? Does that mean traveling around and meeting with people? How do you find that out?

S2

Speaker 2

00:58

Yeah, it's a lot of meeting with people, actually. We end up talking with people from a whole range of different sectors and a whole range of different backgrounds. Particularly because AI is, you know, this kind of emerging technology, a lot of what we're doing is just trying to assess how different parts of society are thinking about it.

S1

Speaker 1

01:15

Mm-hmm. And so with AI, and then policy on AI, you've kind of nested 2 obscure things that people don't really know what you're talking about. So could you just back up a little bit and explain what doing policy for Google actually means in the context of AI?

S2

Speaker 2

01:32

Sure, definitely. So I think the really interesting thing about AI is basically that a lot of the modern techniques in artificial intelligence, if you even asked people a decade ago, they would have told you, this is never going to be a thing. It's a complete dead end.

S2

Speaker 2

01:46

Why are you doing this research? And it really has kind of exploded in a completely unexpected way in the last few years. And so really a lot of the challenge has been like, OK, everybody's kind of wrapping their heads around even what the business impact of the technology is going to be. But there's increasingly a lot of people trying to figure out what the social impact of the technology will be.

S2

Speaker 2

02:04

And I would say policy really sits at that interface between these really cool technological capabilities that are coming about and then what society in general is going to do about it.

S1

Speaker 1

02:12

And so what would be a tangible example at Google of a policy that you guys have worked on to figure out?

S2

Speaker 2

02:19

Sure, so there's a couple of really interesting problems that we've been working on very closely. 1 of them is this question about fairness in machine learning systems. And to give you 1 really concrete challenge we've been thinking a lot about: once a machine learning system is behaving in a biased way, 1 way of trying to de-bias it is collecting more diverse data.

S2

Speaker 2

02:42

But 1 of the big problems is when you do that, you end up collecting lots and lots of data about minorities, which raises all these really interesting questions around privacy and then what have you. And that ends up being a really interesting problem because it's both a technical challenge, which is: can you collect an adequately diverse data set? But on the other hand, also this policy question, which is: what is society comfortable with you collecting?

S2

Speaker 2

03:00

And what are the practices? And that ends up being a really interesting trade-off that you have to navigate if you're interested in these problems.

S1

Speaker 1

03:06

And so what do you actually have to do? Are you going doing user interviews with people or is it just guessing?

S2

Speaker 2

03:12

Yeah, part of it's user interviews. Part of it's actually working with people who know. It turns out that issues of privacy, particularly minority privacy, are like not new problems.

S2

Speaker 2

03:21

And so a lot of our work is actually talking with people who are experts in that space, right? People who have worked on bias and discrimination questions in the past, and a lot of data scientists, and trying to get them to talk to 1 another. Because I think right now what we're really trying to do is bridge these sort of human values on 1 hand with a lot of what's happening on the technological side.

S1

Speaker 1

03:40

And so if I'm a company and I'm like, I can't afford a policy guy like Tim, and I will be dealing with large amounts of data that may or may not discriminate against people. Are there any obvious no-go's that you would tell someone?

S2

Speaker 2

03:54

Well, I think it's to be sure that you're interrogating the data. I think that's 1 important place to start.

S2

Speaker 2

04:00

Now, I think 1 of the interesting things about machine learning is that there's lots of potential points of failure. I think every single interesting point of failure is being investigated right now. 1 of the most common problems is just that you don't adequately think through your data.

S2

Speaker 2

04:15

The machine does what the machine does, which is trying to optimize the objective function you give it. And it will often maximize in ways that you don't expect. And that is, in fact, part of the problem. 1 of the examples that I always think about is this project that we released called Deep Dream.

S2

Speaker 2

04:32

And 1 of the problems in computer vision is trying to figure out what the computer actually thinks it sees when it looks at an image. And so you go through this process where you show it an image and ask it: what do I have to do to this image to make it look more like what you think, for example, a sandwich looks like? You edit the image slightly, and you keep repeating this process until it reveals the ideal version of whatever the computer thinks that thing is. It turns out that when you ask it to reveal what it thinks a barbell looks like, barbells always show up with human arms attached to them.
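
The loop Hwang describes can be sketched in a few lines. This is a toy, not Deep Dream itself: the "classifier" below is a hand-made scoring function that peaks at an arbitrary pixel pattern, and finite differences stand in for backpropagation. But the core move is the same: repeatedly nudge the image in whatever direction raises the class score.

```python
import random

# Stand-in "classifier": its sandwich score peaks at one fixed pixel
# pattern (a real model's score would come from a trained network).
TARGET = [0.8, 0.2, 0.5, 0.9]

def sandwich_score(image):
    return -sum((p - t) ** 2 for p, t in zip(image, TARGET))

def dream_step(image, score_fn, step=0.1, eps=1e-4):
    """One Deep-Dream-style update: estimate, per pixel, which direction
    raises the score (finite differences standing in for backprop),
    then nudge each pixel that way."""
    base = score_fn(image)
    grads = []
    for i, p in enumerate(image):
        bumped = image[:i] + [p + eps] + image[i + 1:]
        grads.append((score_fn(bumped) - base) / eps)
    return [p + step * g for p, g in zip(image, grads)]

def deep_dream(image, score_fn, iterations=200):
    for _ in range(iterations):
        image = dream_step(image, score_fn)
    return image

random.seed(0)
start = [random.random() for _ in range(4)]
dreamed = deep_dream(start, sandwich_score)
```

After enough iterations the "image" drifts toward whatever the model scores as most sandwich-like, which is exactly how the barbell-with-arms artifact gets revealed.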

S2

Speaker 2

05:03

Oh, wow. Right? Yeah. And so that's a really interesting problem, because you've trained it on barbell photos that always have someone holding the barbell.

S2

Speaker 2

05:10

And so it ends up learning this completely bad representation. And what do you do about it? I mean, a big part of it is just the consciousness that, oh, that can happen.

S1

Speaker 1

05:18

Right.

S2

Speaker 2

05:18

And like, how do you interrogate your data set to make sure it doesn't have those problems.

S1

Speaker 1

05:21

And you guys are doing some interesting stuff around adversarial data, right?

S2

Speaker 2

05:25

Yeah, that's right. So I mean, I think adversarial examples and generative adversarial networks are like some of the hottest points in the research right now. It's almost become a joke that there's so many, what they call GANs out there right now.

S2

Speaker 2

05:37

This is like everybody has a GAN.

S1

Speaker 1

05:39

So what does that mean? What does that stand for?

S2

Speaker 2

05:40

So, a generative adversarial network. It's a very particular way of setting up machine learning. But adversarial examples lead to these really fascinating results where, you know, you can take a picture of a panda, that's the classic example, and you edit a couple of the pixels, and the computer will say: yep, that's definitely a giraffe. And it still looks like a panda to humans, right?

S2

Speaker 2

06:00

That's the really fascinating thing.

S1

Speaker 1

06:01

And so what data are you seeding into that image to make it think it's a giraffe?

S2

Speaker 2

06:05

Well, a lot of it, I think, is basically that you're editing particular pixels within the image that we know will set off the machine to behave in certain ways. Because it turns out that we always assume a computer will see the same thing that we do, just based on the visuals, but how we process images is actually completely different from how machines do. This researcher David Weinberger did this awesome article recently, which basically argues that machine learning is generating knowledge, but 1 of the most interesting things about it is that it's generating knowledge in maybe a way that is completely different from the way our human brains work. And that ends up being a really interesting challenge: how do you understand the knowledge that you're getting, and how do you understand the reasoning behind the knowledge that you're getting from machine learning systems?
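
A minimal sketch of the pixel-editing idea, in the spirit of the fast-gradient-sign recipe, on a made-up linear "panda vs. giraffe" model. Real attacks perturb a deep network's input imperceptibly; this linear toy needs a coarser step to flip the label, and all the weights and pixel values are invented for illustration.

```python
# Toy linear "panda vs. giraffe" classifier; a positive score means panda.
WEIGHTS = [0.5, -0.3, 0.8, -0.6]
BIAS = 0.1

def score(pixels):
    return sum(w * p for w, p in zip(WEIGHTS, pixels)) + BIAS

def classify(pixels):
    return "panda" if score(pixels) > 0 else "giraffe"

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def perturb(pixels, eps):
    """FGSM-style attack: move every pixel a fixed small step against the
    gradient of the 'panda' score. For a linear model, that gradient is
    simply the weight vector."""
    return [p - eps * sign(w) for p, w in zip(pixels, WEIGHTS)]

image = [0.9, 0.1, 0.8, 0.2]        # confidently classified as "panda"
attacked = perturb(image, eps=0.5)  # bounded per-pixel change, flipped label
```

The attack never changes any pixel by more than `eps`, yet the classifier's answer flips, which is the essence of the panda-to-giraffe example.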

S1

Speaker 1

06:47

Well, maybe that's a sensible segue into like how people are investigating the impact of AI as it relates to like automation and what humans are good at doing and what computers are good at doing.

S2

Speaker 2

06:57

Yeah, right.

S1

Speaker 1

06:58

And so when you travel around, you meet with people, you meet with different countries. How are people gauging the effects of automation and AI right now and its effects over the next decade?

S2

Speaker 2

07:08

Yeah, I think, so it's an evolving picture, right? And I think right now, I think everybody is just surprised at all of the things that machines can do that we thought that humans were going to be good at for the foreseeable future. So like Go is the canonical example, but there's all sorts of really interesting kind of like reasoning and other things that like machines are engaging in now.

S2

Speaker 2

07:30

And so 1 thing I always tell people is basically that everybody always wants to think about AI as if it were this huge meteor just crashing into the earth, where they ask: what do we do when the AI arrives, right? And it just turns out that it doesn't work like that. In fact, what we really need to get to is thinking about how particular technical capabilities will map onto the economy. And that's where a lot of the work is happening right now.

S1

Speaker 1

07:53

OK. And so, yeah, let's go into some examples.

S2

Speaker 2

07:55

Yeah, sure. So for example, 1 really interesting question is adversarial examples, which is basically: everybody always assumes that, OK, if it can be automated, it definitely will be automated, right? But that's a fallacy, because in certain cases you may really worry about the security of your systems, right?

S2

Speaker 2

08:10

So if someone, for example, can like hold up a photo and cause like a security camera to be like, oh, it's definitely Tim, open the door. That ends up being a real reason why you would not necessarily want to implement a machine learning system for access control, for instance. So that's actually really interesting because that means that if we don't solve that research problem, that means that we will be limited in the kind of domains that machine learning enters into. And I think that's what we're really interested in right now is like, what are these kind of gateway research questions that if we got through, would like totally change the nature of like who, when and why someone would implement this stuff.

S1

Speaker 1

08:46

And so are those things collecting the interest and the momentum of the research community? Because I can see a certain direction where it becomes incredibly product focused, right? Where I'm a researcher, I'm incredibly talented.

S1

Speaker 1

08:57

Figuring out if the security camera is going to work with an adversarial network might not be of the highest interest to me. Is that blocking people, or is the general concept enough?

S2

Speaker 2

09:08

I mean, I think right now it's a little bit unevenly divided. It turns out that research interest is not necessarily policy relevant interest. And so in some cases they're overlapping, right?

S2

Speaker 2

09:19

So I think there's a lot of interest in adversarial examples. There's a lot of interest in what attacks, essentially, you can put on these machines to get them to behave in ways that you don't expect. That seems to be a place where security, which is very much a policy interest, will map on quite nicely to security as a research interest. But for example, things like fairness. I was talking to a machine learning researcher the other day who was basically like, look, I could not, in good faith, advise a grad student to work on machine learning fairness issues.

S2

Speaker 2

09:46

Because it's just not considered a serious problem in the field. And that has less to do with the field itself and more with the norms of the field.

S2

Speaker 2

09:55

And that ends up being a big issue, right? If we don't have research coverage on certain types of problems, that may in practice really limit where these technologies are implemented.

S1

Speaker 1

10:03

Well, I think it's a material issue right now. There's a gap between product understanding and actual deep research.

S2

Speaker 2

10:09

Yeah, that's right. And I say this to a lot of people. Everybody's always like, so what skills do we need to teach people in the future because of machine learning?

S2

Speaker 2

10:16

And I think 1 enormous skill will be domain knowledge, because coming up with a technical capability is just 1 part of this huge picture, which is: okay, so then how do we actually introduce automation in a way that makes sense to people? And that's a huge task. And so my personal prediction is that interface, and how we effectively collaborate with machines, particularly with these new types of models, is still a big open question, and will be increasingly in demand as you suddenly have access to these capabilities.

S1

Speaker 1

10:49

So what I've been wondering then is, does, for example, TensorFlow or any 1 of the machine learning APIs, does that become the new AWS for products or do people have to build their own to create like a defensible company?

S2

Speaker 2

11:06

I mean, I think cloud services will have the same impact on the economy that they always have, right? And I think 1 interesting thing is that all these companies are now competing to offer cloud ML services.

S2

Speaker 2

11:19

And the upshot of that is basically that you don't need a PhD in machine learning to get all the benefits of machine learning. And I think that will shape the space for sure.

S1

Speaker 1

11:29

So then what are the other areas, aside from the first 1 we talked about, for automation and work? Where are other people interested?

S2

Speaker 2

11:38

Well, so I think the other thing we're really interested in, and I'm really interested in, is: is it possible to pull off machine learning with less and less data? And so there's a couple examples of that, but 1 of them is one-shot learning, where people are basically working on the ability to teach machines with a much smaller number of examples. Now, that actually has a really big impact on the game, because it means that you can implement machine learning effectively in situations where it's really expensive to collect lots of data.
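
As a toy illustration of the one-shot goal, here is the simplest possible scheme: nearest-exemplar matching with a single labelled example per class. The labels and feature vectors are invented; real one-shot methods learn the feature space so that this kind of simple comparison works well.

```python
def distance(a, b):
    # Euclidean distance in the feature space
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class OneShotClassifier:
    """Nearest-exemplar classification: 1 labelled example per class,
    a stand-in for metric-learning approaches to one-shot learning."""

    def __init__(self):
        self.exemplars = {}

    def learn(self, label, example):
        self.exemplars[label] = example  # a single example is enough

    def classify(self, features):
        return min(self.exemplars,
                   key=lambda label: distance(features, self.exemplars[label]))

clf = OneShotClassifier()
clf.learn("cat", [0.9, 0.1, 0.3])
clf.learn("dog", [0.1, 0.9, 0.7])
```

One example per class is all the "training data" this classifier ever sees, which is exactly the data-scarcity regime being described.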

S2

Speaker 2

12:05

There's also 1 really cool interface between VR and AI that's happening right now. There's a project called Universe from OpenAI and another called DeepMind Lab. Basically, imagine you need to teach a robot to get through a maze. Well, you could have it physically run through that maze millions of times, or you could just have a virtual 3D environment that you have a computer run through, and it learns how to do that in virtual space, and then you put it into practice in a real robot. And so that's another really exciting way: you don't necessarily need an expensive physical setup to collect the data that you need to accomplish tasks in the real world.
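
The run-the-maze-in-simulation idea can be miniaturized. Universe and DeepMind Lab train agents in rich 3D environments; the toy below does the analogous thing in a 5-cell corridor with tabular Q-learning, then extracts the greedy policy you would hand to the real robot. The corridor, rewards, and hyperparameters are all invented for the sketch.

```python
import random

# A 5-cell corridor "maze": start in cell 0, goal in cell 4.
# Actions: 0 = step left, 1 = step right.
N_CELLS, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(N_CELLS - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning: run the simulated maze over and over,
    updating value estimates from each transition."""
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(N_CELLS)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:                    # explore
                action = random.randrange(2)
            else:                                            # exploit
                action = 0 if q[state][0] >= q[state][1] else 1
            nxt, reward, done = step(state, action)
            q[state][action] += alpha * (reward + gamma * max(q[nxt])
                                         - q[state][action])
            state = nxt
    return q

q_table = train()
# The greedy policy learned entirely in "simulation":
policy = [0 if q_table[s][0] > q_table[s][1] else 1 for s in range(N_CELLS)]
```

All of the millions of "physical" runs happen in software; only the resulting policy (here: always step right toward the goal) needs to leave the simulator.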

S1

Speaker 1

12:41

Okay. So then, I guess what I'm curious about is: how are these countries preparing for this? Again, not a meteor strike, but perhaps a gradual shift over 20 or 30 years to a very different world than what we have right now.

S2

Speaker 2

12:57

Yeah, and I think right now you're seeing a bunch of different ideas out in the space. For example, basic income, right? Universal basic income, which would fundamentally reshape the social contract and how we think about doing, for example, welfare in a whole number of countries.

S2

Speaker 2

13:16

So you see proposals like that. I think you see a number of proposals that are more focused on education. So, what are skills that people would need in the space? That ranges everything from everybody needs to be a programmer to, oh, well, we need to really encourage computational thinking, which is the ability to work effectively with data.

S2

Speaker 2

13:35

And so there's a couple of different options out there. Some of the more interesting ones that I've heard of that are a little bit more obscure. So some people have said, oh, well, maybe we need automation insurance.

S2

Speaker 2

13:45

So in the future, your employer will provide you with a contract that says: if your job turns out to be replaced by AI at some point in the future, we'll pay out at some kind of rate. So people are experimenting with lots of options right now. I think what we actually need in this space is more experimentation.

S2

Speaker 2

13:59

So even proponents of basic income, a lot of them will tell you: we actually don't know in practice what this would look like if it were rolled out at any level of scale. And so, I mean, it's cool seeing YCR and a couple of other places experiment with this.

S1

Speaker 1

14:12

And so where is the traction happening, then, with all of these experiments? It seems very limited, but is it all in Northern Europe? Or I know there's a basic income study in India at this point.

S1

Speaker 1

14:24

Who seems to be focusing most on this area?

S2

Speaker 2

14:26

Yeah, we're seeing a lot of different countries engage in this. I think Northern Europe is kind of leading the way in terms of their willingness to kind of experiment with some of these models. And I think they've got a couple of things going for them.

S2

Speaker 2

14:38

On 1 hand, I think they have a skilled labor force that is relatively expensive. So I think they are excited about AI in large part because of the prospect of bringing, for example, manufacturing back to the country, because it allows them to compete on the same footing as countries that offer labor at much lower cost. That's 1 thing that's good for them. I think the other thing that's encouraging a lot of experiments is that they have a lot more coordination between government, industry, and labor, which is making it more possible to experiment with these sorts of things.

S2

Speaker 2

15:11

So in a really interesting way, it turns out that maybe Northern Europe is actually a little bit ahead in its ability to experiment with and understand some of these programs.

S1

Speaker 1

15:20

And then as a like Google or Alphabet as like this international institution at this point, how are you guys thinking about interacting with different countries as this happens?

S2

Speaker 2

15:29

Yeah, so we're investigating at the moment, right? So the question is: who on the research side should we be working with? And what are the kinds of programs we could support that would give us a better handle on this picture?

S2

Speaker 2

15:40

Right, because, look, ultimately Google is a technology company, right? And so we know that we don't have all the talents necessary to evaluate what is a proper social welfare program. But on the other hand, we do think it's actually really important that we encourage a better societal understanding of how to deal with these technologies. And so I think we're very much in the mode of: how can we support this?

S2

Speaker 2

16:04

And I think that's partially through potentially resources, but also potentially like expertise as well. Right? Like, if you want to know anything about machine learning, we got people who can tell you about that. Now we have to marry that up with people who have a good understanding of how this will impact society either through economics or otherwise.

S1

Speaker 1

16:19

And do you ever feel like the information you're disseminating is guiding the conversation and guiding the future? Like people are playing into the game as if it's intentional? Or is it just open-ended?

S2

Speaker 2

16:31

I mean, I think it's very open, right? I think it's easy, particularly in the Valley, to be like, oh my God, these big companies. But we're only 1 part of a much larger picture of what's happening in the economy.

S2

Speaker 2

16:43

I totally think that's the case. We talk about AI and automation, but we might also want to talk about demographic shifts happening in the economy. What's it mean that we have an aging workforce? Or what's it mean that we have falling workforce participation in the United States?

S2

Speaker 2

16:57

Those are actually trends that are almost as large as what someone comes up with in a lab and presents at a machine learning conference. And so I think it's actually really important that we look at this all in a bigger perspective.

S1

Speaker 1

17:08

Okay. And so what do you guys do to keep that in mind? I imagine you just have a whole policy team to manage that sort of thing?

S2

Speaker 2

17:15

Yeah, that's kind of what we're responsible for: keeping track of a lot of this stuff and getting a better understanding of who is researching in this space. Because as I said, I think we're still really early on in this technology. Again, if you had asked someone 10 years ago whether or not neural nets were going to be a thing, they'd be like, yeah, I don't know, it probably wouldn't work, right?

S2

Speaker 2

17:32

But we're at a phase right now where suddenly it has become technically real, and I think that understanding is just starting to percolate out to a bunch of other fields, who are like: okay, well, I guess now we've got to assess what's going on.

S1

Speaker 1

17:43

And so do you see companies and organizations and countries locking their gates because they're scared, because it feels new, it's obviously massively hyped, but there's also some reality behind it. Has there been a negative reaction?

S2

Speaker 2

17:56

Yeah, I wouldn't say so. I mean, I think by and large, what we're seeing is that a lot of governments are just really curious. They actually want a better understanding of what's going on.

S2

Speaker 2

18:03

So in many cases, I think what we're seeing is people asking, what is happening in the technology? So I think the phase of what to do about it is still on its way.

S1

Speaker 1

18:14

Right. So you give them the PowerPoint deck and they're like, oh, okay, I kind of get how this works. And then they go home, you know, to Japan or wherever, and think about it?

S2

Speaker 2

18:24

Yeah, I think so. I mean, this is how government progresses, right? They ask questions to get information, and there's a long process of figuring out what to do about it. But that isn't to say there aren't laws and other regulations being passed that have relevance for machine learning. So 1 of the most interesting aspects of the GDPR, which is a new privacy regulation in Europe, is the potential for what they call a right to explanation.

S2

Speaker 2

18:49

So the idea is that certain kinds of automated decision making might be so significant as to require, or give citizens the right to, some kind of human-understandable explanation of what the system is doing. And that raises all sorts of interesting challenges about how you actually pull that off. So I don't want to make it sound like no governments are taking action, but that's the beginning part of it, right? By and large, the stance of most governments has been to understand what's going on.
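
What a "human understandable explanation" might look like is still open; for a simple linear decision rule, one plausible shape is a ranked list of per-feature contributions. The weights, features, and threshold below are invented for illustration (the hard part the passage points at is doing anything comparable for a deep network).

```python
def explain_decision(weights, applicant, threshold=0.0):
    """For a linear scoring rule, break the score into per-feature
    contributions and rank them by magnitude: one candidate form of
    'explanation' for an automated decision."""
    contributions = {name: w * applicant[name] for name, w in weights.items()}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "denied"
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    reasons = ["%s contributed %+.2f" % (name, value) for name, value in ranked]
    return decision, reasons

# Hypothetical credit-scoring rule and applicant:
WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 0.5, "debt": 0.7, "years_employed": 0.2}
decision, reasons = explain_decision(WEIGHTS, applicant)
```

For this applicant the dominant (negative) contribution comes from debt, so the "explanation" a citizen would receive names the factor that actually drove the denial.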

S1

Speaker 1

19:21

Do you think someone's doing it particularly well now?

S2

Speaker 2

19:25

Yeah, I mean, I was really excited by some of the stuff happening out of the UK. Last year they actually did a report giving an account of the risks and opportunities from artificial intelligence, and I think it's a really good account. And then last year, under the Obama administration, there was a really good report on the topic as well.

S1

Speaker 1

19:45

Okay. And so can you get more specific on that?

S2

Speaker 2

19:48

Yeah, sure. So what we at least had in the US case, right, was basically a report that really focused in on: okay, what are the real concrete risks here? Yeah. And part of the idea was to pivot away from discussions that were just like: okay, the main thing we've got to talk about here is whether or not robots are going to destroy us, right? Like, decide to take over.

S2

Speaker 2

20:09

Which I agree is kind of an interesting scenario to consider, but there are a lot of core near-term problems that need to be dealt with. And I think that was 1 thing that report did that was very useful.

S1

Speaker 1

20:21

So aside from the stuff we've talked about, what do you find to be particularly exciting, both here at a local Bay Area level and at a global, international research level, in moving this stuff forward?

S2

Speaker 2

20:36

So I think there's 2 things that I find really interesting right now. 1 of them is the intersection of machine learning and art. So largely, this is a technology we've been using to solve pretty pragmatic things, which is how do we ensure that we can adequately recognize like cats in photos?

S2

Speaker 2

20:51

But what's really interesting is that a bunch of people are playing around right now with the question: could I use this for artistic purposes? So there's a really fun project. Google has this project called AI Experiments, which is a lot of small things like this that demonstrate the artistic possibilities of the technology. We also have another program called Magenta, which is looking into machine learning in music, and whether there are ways of creating better creative collaboration between humans and machines on that front.

S1

Speaker 1

21:18

And have you experimented with it personally?

S2

Speaker 2

21:21

Yeah, some of it's really fun. There's 1 project which is basically a melody generator. You play some notes on a piano and then the computer will play alongside you.

S1

Speaker 1

21:28

Like harmonize with you?

S2

Speaker 2

21:30

Yeah, exactly. Right, right. And so you kind of like improvise with the computer, which is super cool.

S2

Speaker 2

21:34

There's another project called Giorgio Cam, which you get on your phone. You take a couple of photos of things in the room, and it produces this boppin' electronic dance hit that uses the words of the objects in the room as a rhyming set of lyrics. Super cool. Yeah.

S2

Speaker 2

21:49

And a great example of like how the technology is becoming like really accessible. Because again, if you wanted to do that like 10 years ago, it would have required like a huge amount of money and like, you know, a bunch of PhDs to try to work on this problem.

S1

Speaker 1

22:01

Right. Yeah. I've been fascinated with that, like how it's become distributed, just even in the past like year. Like I told you about all the speech to text stuff that I'm working on.

S1

Speaker 1

22:09

Yeah. Man, like the fidelity of it is shocking.

S2

Speaker 2

22:12

Yeah. That's in like 1 year. Right. Right.

S2

Speaker 2

22:14

And so it's gotten way better, which I think is super interesting. I think the other thing is that there are these really unexpected things that emerge, too. So the other thing that I think is really cool right now: there's a paper that came out from DeepMind, I think earlier this year, where if you get 2 machines to talk to 1 another, and you set up another computer to basically say, oh, I can read what you're saying, or I can't read what you're saying, you can train these 2 systems to come up with the rudiments of encryption, without even necessarily needing to program encryption into the computers, which is also super cool.

S2

Speaker 2

22:45

They learn how to accomplish that task. And it's not very good encryption, but the basics are learned by these systems, so long as you give them good reinforcement: okay, that's still cognizable, I can still understand what you're saying, versus a third party saying, oh, I can't.
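
The adversarial setup can be caricatured in a few lines. The actual paper trains three neural networks against each other; the sketch below keeps only the shape of the objective, reward the receiver's recovery of the message and penalize the eavesdropper's, and "trains" by selecting between two hand-written candidate encodings, one of which is a shared-key XOR. Everything here is invented for illustration.

```python
import random

def xor_bits(bits, key):
    """Bitwise XOR with a shared key: a trivially reversible cipher."""
    return [b ^ k for b, k in zip(bits, key)]

def bit_accuracy(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def adversarial_score(scheme, trials=200):
    """Score = Bob's recovery accuracy minus Eve's: the same shape of
    objective as the adversarial-cryptography setup, drastically
    simplified. 'plain' sends the message as-is; 'xor' masks it with
    the key Alice and Bob share."""
    random.seed(1)
    total = 0.0
    for _ in range(trials):
        key = [random.randint(0, 1) for _ in range(8)]
        msg = [random.randint(0, 1) for _ in range(8)]
        cipher = xor_bits(msg, key) if scheme == "xor" else list(msg)
        bob = xor_bits(cipher, key) if scheme == "xor" else cipher
        eve = cipher  # the eavesdropper sees only the ciphertext, no key
        total += bit_accuracy(bob, msg) - bit_accuracy(eve, msg)
    return total / trials

# "Training" here is just selecting the candidate with the best score.
best = max(["plain", "xor"], key=adversarial_score)
```

The adversarial signal alone is enough to prefer the key-masked encoding: Bob still reads every message, while Eve is reduced to coin-flipping, which mirrors the "I can still understand you / I can't" reinforcement described above.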

S1

Speaker 1

23:02

Oh man. And so do you have thoughts on how this will become distributed in such a way that one day we'll be interacting with it in our everyday lives as just, like, fun projects? Like, will it be existing in the art space?

S1

Speaker 1

23:14

Will it be, you know, training new programming languages for folks to work on when they're younger?

S2

Speaker 2

23:20

Yeah. I mean, I was talking to Peter Norvig, who is 1 of the researchers we have and, you know, 1 of the founding fathers of AI. And he had this really interesting thought, which is basically that we may be approaching the period where we have to entirely rethink how we teach computer science, because machine learning is such a powerful tool, and also, cognitively, it works in a way that's totally counterintuitive. I do less software than I used to, but definitely when I was in the trenches doing coding work, it was very much: okay, let's get a bunch of smart people in the room, let's come up with a bunch of rules, and then let's get those rules into the machine. Versus this much different mode of thought, which is basically: let's present the machine with a bunch of examples and then verify whether or not the machine has learned the proper lesson.

S2

Speaker 2

24:06

And so his idea is that we may really want to rethink how we teach CS from the very first moment you step into a classroom, which I think is a super compelling idea, because it was always thought of like, oh, machine learning is just going to become this complement to how you do programming. But I wonder whether software in the future will actually look more and more machine learning focused, and you actually change your entire approach to programming systems.
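
The two modes of thought can be put side by side in a toy spam filter. The messages and the exclamation-mark rule are invented for illustration: one function encodes hand-written rules, the other picks its rule from labelled examples and is then verified on held-out data, which is the examples-then-verify loop described above.

```python
# Rules-first: smart people in a room write the logic down by hand.
def is_spam_rules(message):
    return "free money" in message.lower() or message.count("!") > 3

# Examples-first: show the machine labelled data, let it pick the rule,
# then verify it learned the right lesson on examples it hasn't seen.
def fit_exclamation_threshold(examples):
    """Pick the '!' count that best separates spam from non-spam."""
    best_t, best_acc = 0, -1.0
    for t in range(10):
        acc = sum((msg.count("!") > t) == label
                  for msg, label in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

train_set = [("Hello there", False), ("WIN NOW!!!!!", True),
             ("Meeting at 3", False), ("Prize inside!!!!", True)]
threshold = fit_exclamation_threshold(train_set)

def is_spam_learned(message):
    return message.count("!") > threshold

# Verification step: does the learned rule hold up on unseen messages?
held_out = [("Act fast!!!!!!", True), ("See you tomorrow", False)]
```

In the first mode you audit the rules; in the second you audit the data and the held-out behavior, which is the pedagogical shift Norvig's point gestures at.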

S1

Speaker 1

24:30

Oh, man, that's fascinating. It's already kind of gone that way in that many CS programs are so technical you actually never build a web app. Yeah, that's right.

S1

Speaker 1

24:38

You can go through Stanford CS and never build a web app.

S2

Speaker 2

24:40

Yeah, and I think it's a very natural trend that we're getting to higher and higher levels of abstraction. So in some ways, machine learning is this ultimate level of abstraction where it's like, even if you wanted to understand what's happening in a neural net, it might be actually kind of difficult to do so. Right?

S1

Speaker 1

24:54

Yeah. I mean, I guess so, but I see it becoming like, there's just new ways of thinking about how you ought to be programming, right? How you structure the code, because at a certain point, things will just become abstracted and you won't have to do it anymore. Like I think about it in the context of like, you know, parse creating an API, right?

S1

Speaker 1

25:10

Like that will exist for many things. Like I could see, like, a Squarespace-type thing, but for a proper web app, right? And you just drag your database in and you never even think about it. So ironically, programmers might lose their jobs way sooner than they think.

S2

Speaker 2

25:27

Well, and it's particularly interesting because there's actually this emerging research right now which is using machine learning to train machine learning systems. It's this meta level, where right now there's a lot of handwork that goes into building a model so it learns the right representations. But if a machine can do that in the future, it gets even more abstracted, where you may not even need to be a specialist, because in some ways the machine kind of codes itself.
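The most stripped-down form of machine learning tuning machine learning is an outer loop that searches a learner's settings automatically instead of a human hand-tuning them. This is a toy sketch with a one-parameter model and random search; real learning-to-learn research, such as neural architecture search, goes far beyond this:

```python
# Inner learner: fit y = w * x by gradient descent, for a given
# learning rate. Outer "meta" loop: let the machine pick the rate.
import random

def train_and_score(lr, data, epochs=20):
    """Per-example gradient descent on squared error; return total abs error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient of (w*x - y)^2
    return sum(abs(w * x - y) for x, y in data)

random.seed(1)
data = [(0.1, 0.3), (0.5, 1.5), (1.0, 3.0), (2.0, 6.0)]   # true slope: 3

best_lr, best_err = None, float("inf")
for _ in range(30):
    lr = 10 ** random.uniform(-3, 0)       # sample a rate in [0.001, 1]
    err = train_and_score(lr, data)
    if err < best_err:
        best_lr, best_err = lr, err
```

The human never chose a learning rate here; the outer loop did, which is the "handwork gets automated" point in miniature.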

S1

Speaker 1

25:52

So I think 1 thing that a lot of people are curious about is how you're actually going to build a business around AI. So just for like, we can start broad and then go more narrow. Do you think AI will be like dominated by massive companies like Google, Facebook, or will, you know, there'll be very successful AI products on the small scale?

S2

Speaker 2

26:13

Yeah, so I actually think that there's a ton of room for competition here. And it'd be interesting to see how all the various companies find their niches in the space. I think there are two really interesting trends right now.

S2

Speaker 2

26:24

I think one of them is the emergence of cloud platforms, where basically all the companies have said: there's a long tail of uses that we would never be able to take advantage of ourselves, but we may be able to provide the services that power those uses. For example, Google is offering Cloud ML right now. I think it's a really interesting development in this space, which creates a lot of opportunity, because it means that there are all these industries that might not necessarily be AI industries that might be able to seize the benefit from the technology.

S2

Speaker 2

26:56

So that seems like a pretty huge thing to me. I think a second 1 which is really interesting is some of the one-shot learning stuff we talked about earlier, right? Which is basically that the amount of data you need to pull off certain types of machine learning applications is going down over time. And what that tells me is that there might not be necessarily a first mover advantage in this space, where you may actually have collected a bunch of data, but if it's not the relevant data, and also the amount of data you need is going down over time, then the real big challenge is less data and actually more your ability to build good interfaces and good experiences around the technology.

S1

Speaker 1

27:28

Yeah, I've been wondering about that as I play around with it and build tiny little web apps and stuff: how much of this is just entirely reliant on the product, since it's all plug and play? And so to a certain extent folks can almost guess which techniques you're implementing, which APIs you're using. And if they're faster, with better engineers, and then they have the magic touch of the product person, I don't see any reason why they can't just jump ahead.

S2

Speaker 2

27:55

Yeah, right, right. And I think we're maybe fooled by the nature of the field right now, where it's like, ah, we got to get like the most researchers to go and compete on this thing. And like, that is like a big important part of it because they're producing a lot of like the breakthroughs in the space.

S2

Speaker 2

28:09

But it is, I think, important to consider too that there's still this big open question of how this actually becomes, effectively, part of product.

S1

Speaker 1

28:17

Oh, well, I'm so I mean, we did an interview at Baidu and may or may not come out before yours. So we might do a fourth wall jump. But they explicitly are focusing on things for over 100 million people.

S1

Speaker 1

28:29

And you're like, oh, OK, well, I can build plenty of successful startups or businesses for less than 100 million users, maybe even a million. And so yeah, I think there are just all these fantastic opportunities for people, and yet folks seem to be focusing on very similar implementations, whether it's like chatbots or like customer service, which I guess is effectively the same thing. Why do you think that is? Is it that they just follow what seems to be the market leader, or are these just the most obvious?

S2

Speaker 2

28:58

Yeah, I think people are also still trying to figure it out, right? Like, I think we can't avoid that AI is, like, a technology.

S2

Speaker 2

29:06

But AI is also, like, a position. It's a marketing position, which I think is actually a really key part of the picture. Why do we think about Siri or the Google Assistant as AIs, but we don't necessarily think about the Facebook News Feed as an AI? These are all systems that are powered by machine learning, but there's something about its representation as, oh yeah, this is a machine that talks to you,

S1

Speaker 1

29:32

that

S2

Speaker 2

29:32

makes our brain snap immediately to pop culture equals AI. And then that ends up being a really big part of it too, is that there's a lot of incentives to correspond to what we think of as AI, Even though some of the most powerful AI applications may not even come in the form of a personified personality.

S1

Speaker 1

29:52

Well, I think that's a super interesting angle. Out here, seemingly, it makes sense to raise your money as an AI business. But when you look at Facebook, if you log in, it doesn't say AI anywhere.

S1

Speaker 1

30:07

And clearly they have a lot of people using it. So I wonder if it is like a massive positioning thing that many companies do end up missing because you just have to get the nerdy people interested in it to sell it, to raise the money, if you're gonna do venture backed or whatever. But then your end user is like, why am I paying all this money for this chatbot?

S2

Speaker 2

30:28

I mean, for example, yeah, if you wanna talk about 1 of the most critical applications of machine learning to date, it's like spam filters, right? Spam is this incredibly huge systemic problem on the internet. It is largely contended with by machine learning right now.

S2

Speaker 2

30:43

Those are largely the tools that we use to deal with it. And that's an application that we never think about, right? So I mean, as with many technologies, the most important applications will be some of the least visible.
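The spam filter he mentions can be sketched as a tiny Naive Bayes classifier. The messages, vocabulary size, and flat prior here are all invented for illustration; production filters are far more elaborate:

```python
# Toy Naive Bayes spam filter: count word frequencies in labeled mail,
# then score a new message by which class makes its words more likely.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs -> per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, spam in messages:
        for word in text.lower().split():
            counts[spam][word] += 1
            totals[spam] += 1
    return counts, totals

def is_spam(text, counts, totals, vocab=1000):
    """Log-odds of spam vs. ham with add-one smoothing and a flat prior."""
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (totals[True] + vocab)
        p_ham = (counts[False][word] + 1) / (totals[False] + vocab)
        score += math.log(p_spam / p_ham)
    return score > 0

mail = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting moved to tuesday", False),
    ("lunch on tuesday works", False),
]
counts, totals = train(mail)
```

Nothing here looks like pop-culture AI, which is exactly the point: the classifier just quietly tilts the odds word by word.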

S1

Speaker 1

30:55

So what, what are you excited about? What are you going to build? What are you going to build

S2

Speaker 2

30:58

with AI? I got to think about it some more. I mean, I'm really interested in these kind of small-scale machine learning projects.

S2

Speaker 2

31:07

I think we might have talked about it earlier, but we had this really crazy story where it turned out that there was this cucumber farm in Japan that was using machine learning to build a really cheap robot

S1

Speaker 1

31:18

that

S2

Speaker 2

31:18

would sort cucumbers. It turns out cucumber sorting is a really big problem in the cucumber farming space. And that was basically just trained using 3,000 or 4,000 photos of cucumbers.

S2

Speaker 2

31:30

And that was sufficient to train a model to do a pretty good job at sorting cucumbers. And so I'm really interested in this kind of artisanal machine learning, where it's like, what are these very specific daily problems that I have? And it's a good way of, I think, wrapping my head around, okay, what are actually going to be the practical uses, not necessarily the Cadillac uses that I think we're seeing right now, which are the demonstration uses of the technology. And then

S1

Speaker 1

31:55

you can open up like Tim's general store online. Yeah, that's right. And people like download like Tim's cucumber app.

S1

Speaker 1

32:00

Right. Yeah, I mean, I cracked my iPhone earlier and was getting it fixed this morning. And the guy had an entire box of assorted iPhone screws, from literally, you know, the iPhone 1 to the iPhone 7 now. And he's got, like, a side hustle buying and selling broken iPhones online.

S1

Speaker 1

32:22

And if they're totally damaged, he just strips all the components. But he spent, like, half an hour trying to figure out what screw would fit. There you go, you could use Tim's screw identifier, right? It's super handy stuff.

S2

Speaker 2

32:37

Yeah, a lot of small things like that. And what's particularly interesting is, going back to a little bit of what we were talking about earlier, what is the cost of solving a problem through machine learning? And what is the cost of solving a problem through traditional coding?

S2

Speaker 2

32:50

That's actually maybe one way of thinking about the problem. For example, for computer vision, the economics are now way in favor of machine learning. It's just way easier to design an effective image recognition system with ML than it is with traditional coding techniques.

S1

Speaker 1

33:07

And I

S2

Speaker 2

33:08

think that's actually 1 really interesting way of thinking about it is for a given task, how long until machine learning is the preferred way of solving this problem with a computer?

S1

Speaker 1

33:16

It totally makes sense as new kinds of entrepreneurs pop up in these very small niche things that are essentially 1 developer projects that previously might have even seemed way too laborious to spend your time engineering. You're never going to pay someone to do it. You're not going to do it yourself.

S1

Speaker 1

33:33

But you start plugging into these Cloud ML things, and all of a sudden you have this app. As far as distribution, I don't know. I've heard more and more people talking about localizing certain things to the device, which makes them amazing. Have you experimented with that yet?

S1

Speaker 1

33:50

Yeah, so we're actually working on

S2

Speaker 2

33:51

a little bit of research around that. I haven't played around with it myself, but for example, there's a couple of papers around what they call federated learning,

S1

Speaker 1

33:57

where

S2

Speaker 2

33:57

we're working on exactly this premise. The bet is: okay, what happens in a future where the edges of our network, like the phones, have way more powerful processing power? Is it possible for us to basically do the majority of the training for these systems on device, with a lot less data flowing into the cloud? And the idea was basically that the local model would update and share its learnings with all the other devices in the network. And it's a really interesting way of thinking about how you actually do this, because what you ideally want to have is models that are loaded on the device and can also train on the device as well. Because right now, one of the ironies is that there's a big disparity between training, which is computationally intensive and data intensive, and actual execution, which can be pretty light computationally.
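The federated learning loop described here can be sketched in miniature: each device fits a shared model on its own private data, and only the model parameters, never the raw data, travel to the server for averaging. This toy uses a one-parameter model and plain averaging; the actual federated learning papers handle weighting, sampling, and communication far more carefully:

```python
# Federated averaging in miniature for a model y = w * x.
def local_update(w, data, lr=0.1, steps=20):
    """Per-example gradient descent on squared error, run on-device."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def federated_round(w_global, device_datasets):
    """Every device trains locally; the server averages the weights."""
    local_ws = [local_update(w_global, d) for d in device_datasets]
    return sum(local_ws) / len(local_ws)

# Three "phones", each holding private samples of the same y = 2x trend.
devices = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(0.5, 1.0), (1.5, 3.0)],
    [(3.0, 6.0)],
]
w = 0.0
for _ in range(5):
    w = federated_round(w, devices)
```

The server never sees a single (x, y) pair, only each phone's updated `w`, yet the shared model still converges on the trend all the devices hold in common.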

S1

Speaker 1

34:49

It also creates a giant latency problem with everything that's like in big quotes AI right now. Like, you know, most people if you give them Siri, they're like, oh, it's constantly broken. But if you could communicate with it in a way that's like, Hey, you didn't understand.

S1

Speaker 1

35:02

Let me go again immediately afterward. All of a sudden the experience is

S2

Speaker 2

35:05

entirely different. Yeah. And latency ends up being really key, not just for conversational interfaces. You think about, for example, how do we deal with using this in medicine, right?

S2

Speaker 2

35:14

Where you may need a response really soon if you're going to use it for diagnosis or whatever.

S1

Speaker 1

35:19

Totally, like if this thing turns into a robot surgeon arm and I move it to the Amazon, I can't rely on my hotspot to connect it.

S2

Speaker 2

35:28

That's right, yeah. And so yeah, I think again, we're talking about implementation, which ends up being this really big piece of the AI picture, which is still being worked out. We know we can get machines to do these remarkable things.

S2

Speaker 2

35:38

The question is, what do people actually want out of it?

S1

Speaker 1

35:41

So I guess one of the last questions I have for you is: people are interested in AI and machine learning across the board, or at least people paying attention to this are into it. If someone wants to get more into it, and they're thinking about, like, how do I position myself? What should I pay attention to?

S1

Speaker 1

35:58

Where should I focus? Because like, you know, now, 10s of 1000s of people are checking it out. What would you say? What would you focus on?

S2

Speaker 2

36:06

I think there are two really interesting problems in the space right now that desperately need more people to get involved and more people to organize events around. So one of them is, I think, this security thing, where in the traditional computer security space we've got events like capture the flag, where people can show their mettle in their ability to secure and compromise systems. I actually think we really need that in the machine learning space, and I would be really excited to see it: imagine a game where you have to train a machine learning model on a set of data.

S2

Speaker 2

36:37

And then people will take turns trying to get past your computer vision system.
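The attack side of such a game can be sketched for a linear classifier, where the adversarial move is simply stepping each input feature against the sign of its weight. The weights, input, and "cat" labeling are invented for illustration; real attacks like FGSM apply the same sign-of-the-gradient idea to neural networks:

```python
# A linear "defender" classifier and the simplest possible attacker.
def score(w, b, x):
    """Defender's linear scorer: positive score means class 'cat'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def attack(w, x, eps):
    """Nudge each feature eps in the direction that lowers the score."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [2.0, -1.0, 0.5], 0.1        # defender's trained weights (toy)
x = [0.4, 0.2, 0.3]                 # original input, scored as 'cat'
adversarial = attack(w, x, eps=0.3)
```

A capture-the-flag round is then just this loop at scale: the defender hardens the model, the attacker searches for the smallest `eps` that still flips the decision.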

S1

Speaker 1

36:42

Oh, cool.

S2

Speaker 2

36:43

Which I think would be super cool to do. And that's one big piece that would be really cool for people to work on. I think the second thing that's about to be in really strong demand is thinking about the visual dimension of this, which happens on a couple of levels.

S2

Speaker 2

36:57

That's both the interface of how you work with machine learning systems, but also just visually how you represent a neural net. If you've read the technical papers, one of the things you'll see is that they're largely written by machine learning experts. And so they don't really have a good sense of how you visually portray what a neural net is doing. And that stuff ends up being incredibly important for people to both understand the technology and also be able to use it effectively.

S2

Speaker 2

37:23

And so I think that's another thing that's on the way: basically a really high demand for people who understand this research and can give it good voice in terms of representing it visually.

S1

Speaker 1

37:33

And then if someone isn't into machine learning yet, what would you recommend they read, study, watch? What should they check out?

S2

Speaker 2

37:43

So I mean, I think it's really nice because we're now living in a world where there are a lot more resources for learning about machine learning. So I'm a huge fan of Ian Goodfellow's textbook on deep learning. It was really funny.

S2

Speaker 2

37:55

I was in Cambridge picking up a physical copy of this textbook, because MIT Press is the publisher, and the guy selling me the book was like, this is the Harry Potter of technical guides, because they had been flying off the shelves so aggressively. So it's really good, though. Its reputation is very well deserved. One of the things I've been thinking a lot about is kind of the history of all this, right?

S2

Speaker 2

38:15

It's important to recognize that AI has been through this hype cycle before, and there have been long AI winters where this technology has totally oversold itself. And it's important to understand those dynamics. So 2 books I'll mention, 1 of them is John Markoff's Machines of Loving Grace, which is all about the history of AI, and particularly its competition with the notion of IA, intelligence augmentation,

S1

Speaker 1

38:36

which

S2

Speaker 2

38:36

I think is a really interesting battle that we're having right now in terms of what this technology is really about and what it should be used for. A second book that's just great, which is also from MIT Press, is Cybernetic Revolutionaries, which talks about the Chilean Allende government, basically the socialist government of the early 1970s. And they tried to set up a project called Project Cybersyn, where they were like, let's automate the entire economy.

S2

Speaker 2

39:01

So all factories would have data links connecting to a single central command center, where they would actively control the economy. And it's a great example of the history of cybernetics, but also of what people tried to do back then, which I think is useful for making sure we understand what the limitations of the technology are today.

S1

Speaker 1

39:22

That's very neat. I haven't read that. I will absolutely check it out. Cool, man.

S1

Speaker 1

39:26

So if anyone wants to follow you online, where do they go?

S2

Speaker 2

39:28

Oh, sure. My website is timhwang.org, that's T-I-M-H-W-A-N-G dot org. I'm not the Korean pop star of the same name. And I'm also on Twitter at @timhwang.

S1

Speaker 1

39:42

Very cool. All right. Thanks, dude. Yeah,

S2

Speaker 2

39:44

thanks for having me, Craig.