Andrew Ng: Artificial Intelligence is the New Electricity


Good afternoon. Welcome to the Future Forum, a series of discussions where we explore trends that are changing the future. This series is presented by the Sloan Fellows from the Stanford MSx program. My name is Ravi Karan Gopalim. I am an engineer by training with over 10 years of experience, and I have been fortunate to design and develop products for some of the leading high-tech companies here in the US. Currently, as a Sloan Fellow, I am privileged to spend a year in Silicon Valley and at the Stanford Graduate School of Business, participating in the evolution of technology and learning from some of the brightest minds in business. The MSx program is a full-time, on-campus, one-year management degree, specifically designed for accomplished and experienced professionals from around the world. My classmates have on average over 13 years of experience, come from over 40 different industries, and have been leaders in driving change. Today, I have the honor of introducing Professor Andrew Ng. Andrew is one of the leading thinkers in artificial intelligence, with research focusing on deep learning. He has taught machine learning to over 100,000 students through his online course at Coursera. He founded and led the Google Brain project, which developed massive-scale deep learning algorithms. He is currently the VP and Chief Scientist of Baidu, the co-chairman and co-founder of Coursera, and, last but not least, an adjunct professor right here at Stanford University. Please join me and the 2017 Sloan Fellows in welcoming Professor Andrew Ng.

Thank you. Thank you, Ravi. So what I want to do today is talk to you about AI. As Ravi mentioned, right now I lead a large AI team at Baidu, about 1,300 scientists and engineers. So I've been fortunate to see a lot of AI applications, a lot of research in AI, as well as a lot of uses of AI in many industries and many different products. As I was preparing for this presentation, I asked myself what I thought would be most useful to you, and I decided to talk about four things. First, to share with you what I think are the major trends in AI, because the title of this talk is "AI is the new electricity." Just as electricity transformed industry after industry a hundred years ago, I think AI will now do the same, so I'll share some of the exciting AI trends that I and many of my friends are seeing. Second, to discuss some of the impact of AI on business. Whether you, the GSB students and the Sloan Fellows, go on to start your own company after you leave Stanford, or whether you join a large enterprise, I think there's a good chance that AI will affect your work, so I'll share some of the trends for that. Third, to talk a little bit about the process of working with AI, to give some practical advice on how to think about not just how AI affects businesses, but how AI affects specific products and how to go about growing those products. And finally, in the sign-up for this event there was a space for some of you to ask questions, and quite a lot of you asked about the societal impact of AI, so I'll talk a little bit about that as well. So, the title of this talk, the one that was projected, no, I guess not. All right. On the website, the title was listed as "AI is the new electricity." This is an analogy that I've been making for, I don't know, over half a year now.
About a hundred years ago, we started to electrify the United States, to build out electric power. And that transformed transportation, transformed manufacturing, you know, using electric power instead of steam power, transformed agriculture, right? I think refrigeration was a really cool app. It transformed healthcare, and so on and so on. And I think that AI is now positioned to have an equally large transformation on many industries. The IT industry, which I work in, has already been transformed by AI. So today at Baidu, web search, advertising, these are all powered by AI. The way we decide whether or not to approve a consumer loan, really, that's AI. When someone orders takeout through a Baidu on-demand delivery service, AI helps us with the logistics to route the driver to your door, and helps us estimate how long we think it'll take to get to your door. So up and down, both the major services and many other products in the IT industry are now powered by AI; much of this is really only possible because of AI. But we're starting to see this transformation from AI technology in other industries as well. I think FinTech is well on its way to being totally transformed by AI. We're seeing the beginnings of this in other industries too. I think logistics is part way through this transformation. I think healthcare is just at the very beginning, with huge opportunities there. Everyone talks about self-driving cars; I think that will come as well. It'll take a little bit of time to land, but that's another huge transformation. I think we live in a world where, just as electricity transformed almost everything a hundred years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years. And maybe throughout this presentation, or at the end during Q&A, if you can think of an industry that AI won't transform, a major industry, not a minor one, raise your hand at the end and let me know. I can tell you now my best answer to that. Sometimes my friends and I challenge each other to name an industry that we don't think will be transformed by AI, and my personal best example is hairdressing, right? Cutting hair. I don't know how to build a robot to replace my hairdresser. Although I once said this same statement on stage, and one of my friends, who is a robotics professor, was in the audience. And so my friend stood up, and she pointed at my head, and she said, "Andrew, for most people's hairstyles, I would agree you can't build a robot. But for your hairstyle, Andrew, I could." All right. So, despite all this hype about AI, what is AI actually doing? What can AI really do? It's driving tremendous economic value, easily billions, actually at least tens of billions, maybe hundreds of billions of dollars' worth of market cap. But what exactly is AI doing? It turns out that almost all of this ridiculously huge amount of value from AI, at least today, and the future may be different, but at least today, almost all this massive economic value is driven by one type of AI, by one idea. The technical term is supervised learning. And what that means is using AI to figure out a relatively simple A-to-B mapping, an input-to-response mapping.
So, for example, given a piece of email as the input, I ask the system to tell me if this is spam or not, right? Given an email, output zero or one to tell me whether this is spam. Yes or no. This is an example of a problem where you have an input A, which is an email, and you want a system to give you a response B, zero or one. And today this is done with supervised learning. Or, given an image, tell me what is the object in this image. Maybe out of a thousand objects or ten thousand objects, you try to recognize it: you input a picture and output a number from, say, one to one thousand to tell you what object this is. This AI can do. Some more interesting examples: given an audio clip, maybe you want to output the transcript. So that's speech recognition, right? Input an audio clip and output the text transcript of what was said. And the way a lot of AI is built today is by having a piece of software learn, and I'll say in a second exactly what I mean by the word learn, what it means for a computer to learn, but a lot of the value of AI today comes from having a machine learn these input-to-response mappings. Or, given a piece of English text, output the French translation. Or, I talked about going from audio to text; maybe you want to go from text and have a machine read out the text in a very natural-sounding voice, right? So it turns out that the idea of supervised learning is that when you have a lot of data of both A and B, today a lot of the time we have very good techniques for automatically learning a way to map from A to B. For example, if you have a giant database of emails as well as annotations of what is spam and what isn't spam, you can probably learn a pretty good spam filter. Or, I've done a lot of work on speech recognition: if you have, let's say, 50,000 hours of audio, and you have the transcripts of all 50,000 hours of audio, then you can do a pretty good job of having a machine figure out the mapping between audio and text, right? The reason I want to go into this level of detail is that despite all the hype and excitement, AI is still extremely limited today relative to what human intelligence is. Clearly, you and I, every one of us, can do way more than figure out input-to-response mappings, but this is driving incredible amounts of economic value today. Just one example: given some information about an ad and about a user, can you tell me whether the user will click on this ad? Leading internet companies have a ton of data about this, because we've shown people some number of ads and we saw whether they clicked on them or not. So we have incredibly good models for predicting whether a given user will click on a particular ad. And by showing users the most relevant ads, this is actually good for users, because you see more relevant ads, and it's incredibly lucrative for many of the online internet advertising companies. So this is certainly one of the most lucrative applications of AI today, possibly the most lucrative, I don't know. Now, at Baidu I work with a lot of product managers, and one question I got from a lot of product managers is: you're trying to design a product, and you want to know how you can fit AI into some bigger product.
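To make the A-to-B idea concrete, here is a minimal supervised-learning sketch, not from the talk, built around the spam example: the inputs A are a few made-up emails, the responses B are 0/1 labels, and scikit-learn fits a simple model that learns the mapping.

```python
# Minimal supervised learning sketch: learn an A -> B mapping,
# where A is an email and B is 0/1 (not spam / spam). Toy data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = [                       # A: inputs (made-up examples)
    "win a free prize now",
    "meeting moved to 3pm",
    "cheap pills limited offer",
    "lunch tomorrow?",
]
labels = [1, 0, 1, 0]            # B: responses (1 = spam, 0 = not spam)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)   # turn text into word-count features

model = LogisticRegression()
model.fit(X, labels)                   # learn the A -> B mapping from examples

new_email = ["free offer, click now"]
print(model.predict(vectorizer.transform(new_email)))  # likely [1], i.e. spam
```

The same pattern, lots of (A, B) examples plus a model that learns the mapping, covers the other examples in the talk, with the email swapped for an image, an audio clip, or ad-and-user features.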
So, do you want to use this as a spam filter? Do you want to use this to tag your friends' faces? Do you want to build speech recognition into your app? What else can AI do? Where can you fit AI into a bigger product, a bigger application? So the product managers I was working with were struggling to understand what AI can do and what AI can't do. Oh, I'm curious, how many of you know what a product manager is and what a product manager does? Okay, good, like half of you, right? Okay, cool. I asked the same question at an academic AI conference and I think only about one-fifth of the hands went up, which is interesting. But just to summarize quickly: in the workflow of a lot of tech companies, it's the product manager's responsibility to work with users, look at data, and figure out what is the product that users desire, to design the features, and sometimes also the marketing and the pricing and so on. But let me just say, design the features and figure out what the product is supposed to do. For example, should you have a like button or not? Do you let users vote things up or down? Do you have a speech recognition feature or not? So they really design the product, and then you give the product spec to engineering, which is then responsible for building it. That's a common division of labor in technology companies between product managers and engineers. So the product managers working with us were struggling to understand what AI can do. There's this rule of thumb that I gave many product managers, which is: anything that a typical human can do with at most one second of thought, we can probably now or soon automate with AI. Okay? And this is an imperfect rule. There are false positives and false negatives; this heuristic is imperfect. But we found this rule to be quite helpful. So today, actually, at Baidu I have some product managers running around looking for tasks that a human could do in less than a second and thinking about how to automate them. I have to say, before we came up with this rule, they were given a different rule by someone else. Before I gave this heuristic, someone else told the product managers: assume AI can do anything. And that actually turned out to be useful; some progress was made with that heuristic, but I think this one is a bit better. A lot of these things are things you could do with less than a second of thought, right? One of the patterns we see is that there are a lot of things AI can do, but AI progress tends to be fastest when you're trying to do something that a human can do. So, for example, build a self-driving car: humans can drive pretty well, so AI is making actually pretty decent progress on that. Or diagnosing medical images: if a human radiologist can read an image, the odds of AI being able to do that in the next several years are actually pretty good. There are some examples of tasks that humans cannot do. For example, very few humans can predict how the stock market will change; possibly no human can. And so it's much harder to get an AI to do that as well. There are a few reasons for that. First, if a human can do it, then you're at least guaranteed that it's feasible, right?
If even a human can't do it, like predicting the stock market, maybe it's just impossible, I don't know. A second reason is that if a human can do it, you can usually get data from humans. So we have doctors that are pretty good at reading radiological images, and so if A is an image and B is a diagnosis, you can get these doctors to give you a lot of data, a lot of examples of both A and B, right? So for things that humans can do, you can usually pay people, hire people, and get them to provide a lot of data, most of the time. And then finally, if a human can do it, you can use human insight to drive a lot of progress. So if the AI makes a mistake diagnosing a certain radiology image, like an X-ray, and a human can diagnose that type of disease, you can usually talk to the human and get some insight about why they think this patient has lung cancer or whatever, and try to code that insight into your AI. So one of the patterns you see across the AI industry is that progress tends to be faster when we're trying to automate tasks that humans can do. There are definitely many exceptions, but having seen many dozens of AI projects, I'm trying to summarize the trends I see; this isn't 100% true, but maybe 80 or 90% true. So, for a lot of projects, you find that if the horizontal axis is time, and this line is human performance, in terms of how accurately you can diagnose X-ray scans or how accurately you can classify spam emails or whatever, the AI will tend to make rapid progress until it gets up to human-level performance, and if it ever surpasses it, very often progress slows down, because of these reasons. So this is great, because it gives AI a lot of space to automate a lot of things. The downside of this is the implication for jobs. If AI is especially good at doing whatever humans can do, then I think AI software will be in direct competition with a lot of people for a lot of jobs, probably already a little bit now, but even more so in the future. I'll say a little bit about that later as well. The fact that we're very good at automating things people can do, and actually less good at doing things people can't do either, makes the competition between AI and people for jobs even worse. So, all right, let me come back to the AI trends. One reason I want to go a bit deeper into AI trends is that I bet some of you will be asked by your friends afterwards: what's going on in AI? And I hope to give you some answers that let you speak intelligently to others about AI. It turns out a lot of the ideas in AI have been around for many years, frankly several decades, but it's only in the last several years, maybe the last five years, that AI has really taken off. So why is this? When I'm asked why AI is only now taking off, there's one picture that I always draw, and I'm going to draw it for you now. On the horizontal axis I plot the amount of data, and on the vertical axis I plot the performance of an AI system. It turns out that several years ago, maybe ten years ago, we were using earlier generations of AI software, earlier generations of what are called machine learning algorithms, to learn these A-to-B mappings.
These earlier generations are what are called traditional machine learning algorithms. It turns out that for those earlier generations of machine learning algorithms, even as we fed them more data, performance did not keep on getting better. It was as if, beyond a certain point, they just didn't know what to do with all the additional data you were giving them. And here by data I mean the amount of A, B data, with both the input A and the target B that you want it to output. What happened over the last several years is that because of Moore's Law, and also GPUs, maybe especially GPU computing, we've finally been able to build machine learning software that is big enough to absorb these huge data sets that we have. So what we saw was that if you feed your data to a small neural network, and I'll say a little bit later what a neural network is, but it's an example of machine learning technology, and if you've heard the term deep learning, which is working really well but is also a bit overhyped, neural network and deep learning are roughly synonyms, then with a small neural network your performance looks like this. If you build a slightly larger neural network, the performance looks like that. And it's only if you have the computational power to build a very large neural network that your performance keeps on going up. Sorry, I think this line should be strictly above the others, something like that, right? So what this means is that in today's world, to get the best possible performance, in order to get up here, you need two things. First, you need a ton of data. And second, you need the ability to build a very large neural network. And large is relative, but because of this, I think the bleeding edge of AI research, the bleeding edge of neural network research, is today shifting to supercomputers, or HPC, high-performance computing. So in fact, today the leading AI teams tend to have an org structure where you have an AI team with machine learning researchers, abbreviated ML, and HPC, high-performance computing or supercomputing researchers, working together to build the AI and to build the really giant computers that you need in order to hit today's levels of performance. I'm seeing more and more teams have an org structure like this. And the org structure is organized like this because, frankly, a lot of the things we do at Baidu, for example, require such specialized expertise in machine learning and such specialized expertise in HPC that there's no one human on this planet who knows both subjects at the levels of expertise needed, frankly.
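As a rough, runnable illustration of the picture just described, and an editorial sketch rather than anything from the talk, the snippet below trains a small and a larger neural network on growing amounts of synthetic data; the idea is that the small model plateaus while the larger one keeps benefiting from more data, though the exact numbers will vary with the data.

```python
# Sketch: performance vs. amount of training data for a small and a larger
# neural network, on synthetic data. Illustrates the curves described above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20000, n_features=40, n_informative=30,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=5000,
                                                    random_state=0)

for n in [200, 1000, 5000, 15000]:            # amount of training data
    for hidden in [(4,), (128, 128)]:         # small vs. larger network
        net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                            random_state=0)
        net.fit(X_train[:n], y_train[:n])
        acc = net.score(X_test, y_test)
        print(f"n={n:5d}  hidden={hidden}:  test accuracy={acc:.3f}")
```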
So let me go even further; bear with me. In the questions some of you asked on the website when you signed up for this event, some of you asked about evil AI, killer robots taking over humanity, and so on. I know people do worry about that. So to address that, I actually want to get just slightly technical and tell you what a neural network is. A neural network is loosely inspired by the human brain, so a neural network is a little bit like a human brain, right? That analogy is so easy for people like me to make to the media that it tends to make people think we're building artificial brains, just like the human brain. The reality is that today, frankly, we have almost no idea how the human brain works, so we have even less idea of how to build a computer that works just like the human brain. And even though we like to say neural networks are a little bit like the brain, they are so different that I think we've gone past the point where that analogy is still useful; it's just that maybe we don't have a better analogy right now. So let me actually tell you what a neural network is, and I think you'd be surprised at how simple it is. Let me show you an example of the simplest machine learning problem: let's say you want to predict the price of a house. You have a data set where the horizontal axis is the size of the house and the vertical axis is the price of the house, square feet and dollars. So you have some data set like this, right? So what do you do? You fit a straight line to this, right? And this can be represented by a simple neural network where you input the size and you output the price. That straight-line function is represented by a neuron, which I'm going to draw as a little circle like that, okay? And if you want a really fancy neuron, maybe it's not just fitting a straight line, maybe it's a bit smarter and realizes the price should never be negative or something. But to a first approximation, let's just say it's fitting a straight line, maybe one that never goes negative. So this is maybe the simplest possible neural network: one input, one output, with a single neuron. So what is a neural network? Well, it's just taking a bunch of these things and stringing them together. So instead of predicting the price of the house just based on the size, maybe you think the price of a house actually depends on several things. First there's the size, and then there's the number of bedrooms. And depending on the square footage and the number of bedrooms, this tells you what family size the house can support, right? It can support a family of two, a family of four, a family of six, whatever. And then, based on the zip code of the house, as well as the average wealth of the neighborhood, maybe this tells you about the school quality. So with two little neurons, one that tells us the family size the house can support and one that tells us the school quality. And maybe the zip code also tells us how walkable this neighborhood is. And if I'm buying a house, maybe ultimately I care about the family size it can support, whether this is a walkable region, and what the school quality is. So let's take these things and string them into another little neuron, another linear function or something like that, that then outputs the price, okay? So this is a neural network. And one of the magical things about a neural network is that I gave this example as if, when we're building this neural network, we have to figure out that family size, walkability, and school quality are the three most important things that determine the price of a house.
As I drew this neural network, I talked about those three concepts. Part of the magic of a neural network is that when you're training one of these things, you don't need to figure out what the important factors are. All you need to do is give it the input A and the response B, and it figures out by itself what the intermediate things are that really matter for predicting the price of a house. And part of the magic is that when you have a ton of data, enough A and B, it can figure out an awful lot of things by itself. I have taught machine learning for a long time; I was a full-time faculty member here for over a decade, and I'm still adjunct faculty in the CS department. And whenever I teach people the mathematical details of a neural network, often I get from the students almost a slight sense of disappointment: is it really this simple? You've got to be kidding. But then you implement it, and it actually works when you feed it a lot of data. Because a lot of the complexity, a lot of the smarts of the neural network, comes from us giving it tons of data, maybe tens of thousands or hundreds of thousands or more of houses and their prices, and only a little bit of it comes from the software. The software, while non-trivial, I mean the software is not that easy, is only a piece of what the neural network knows. The data is a vastly larger source of the smarts of the neural network than the software we have to write.
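Here is a minimal sketch of the house-price network just described, an editorial illustration with synthetic, made-up data rather than code from the talk. Note that we only supply the inputs A and the prices B; the three hidden neurons are learned, and nothing in the code names "family size", "walkability", or "school quality".

```python
# Sketch of the house-price neural network: inputs A = (size, bedrooms,
# walkability proxy, neighborhood wealth) -> output B = price.
# The intermediate concepts are learned from (A, B) data, not hand-coded.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
size = rng.uniform(500, 4000, n)        # square feet
bedrooms = rng.integers(1, 6, n)
walkability = rng.uniform(0, 1, n)      # stand-in for zip-code derived features
wealth = rng.uniform(30, 200, n)        # neighborhood wealth, in $1000s

# Synthetic "true" prices, only so the sketch runs end to end.
price = (100 * size + 20000 * bedrooms + 50000 * walkability
         + 1000 * wealth + rng.normal(0, 10000, n))

A = np.column_stack([size, bedrooms, walkability, wealth])   # inputs
B = price                                                    # targets

# Three hidden neurons, echoing the three intermediate concepts in the talk.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0),
)
model.fit(A, B)
print(model.predict([[2000, 3, 0.8, 120]]))   # predicted price for one house
```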
So one of the implications of this is, when you think about building products or businesses, what is the scarce resource? If you want to build a defensible business that deeply incorporates AI, what are the moats? How do you build a defensible business in AI? Today we're fortunate that the AI research community is quite open. Almost all, maybe all, of the leading groups tend to publish their results quite freely and openly, and if you read our papers at Baidu, we don't hold anything back. You read our state-of-the-art speech recognition paper, or the face recognition paper, and we really try to share all the details; we're not trying to hide anything. Many leading research groups in AI do that, so it's difficult to keep algorithms secret anyway. So how do you build a defensible business using AI? I think today there are two scarce resources. One is data. It's actually very difficult to acquire huge amounts of data, pairs of A and B. To give an example from one of our projects, speech recognition: I mentioned just now that we've been training on 50,000 hours of data. This year we expect to train on about 100,000 hours of data. That's over ten years of audio, so literally, if I pulled out my laptop and started playing audio to you, to go through all the data our system listens to, we'd still be here listening until the year 2027 or so. So this is a massive amount of data that is very expensive to obtain. Or take face recognition; we've done a lot of work on face recognition. To set some numbers, the most popular academic computer vision benchmark competitions have researchers working with about one million images, and the very largest academic papers in computer vision have been published on maybe 15 million images, for recognizing objects in pictures and so on. At Baidu, to train our really bleeding-edge, possibly best-in-the-world, I can't prove that, but definitely very, very good face recognition system, we train on 200 million images. So that scale of data is very difficult to obtain. And I would say that, honestly, if I were leading a small team of 5 or 10 people, I would have no idea, frankly, how to replicate that scale of data and build a system like we're able to in a large company like Baidu, with access to massive-scale data sets. In fact, at large companies, we sometimes launch products not for the revenue, but for the data; we actually do that quite often. Often I get asked, can you give me a few examples? And the answer is, unfortunately, no. But I frequently launch products where my motivation is not revenue but data, and we monetize the data through a different product. So I would say that today in the world of AI there are two scarce resources. The most scarce resource today is actually talent, because AI needs to be customized for your business context. You can't just download an open-source package and apply it to your problem. You need to figure out where the spam filter fits in your business, or where speech recognition fits in your business, and in what context you can fit in this AI, machine learning thing. And this is why there is a talent war for AI: for every company to exploit its data, you need AI talent that can come in, customize the AI, figure out where to get the data, and tune the algorithm to work for your business context. So maybe that's the scarcest resource today. And then second, data is proving to be a defensible barrier for a lot of AI-powered businesses. There's this concept of a virtuous circle of AI that we see in a lot of products, which is: you might build a product, for example a speech recognition system to enable voice search, which we did at Baidu; I guess some of the US search companies have done that too, I'm not sure, but anyway, a speech recognition system, whatever, some product. Because it's a great product, we get a lot of users. The users using the product naturally generate data, and then the data, through machine learning, feeds back into the product to make the product even better. So this becomes a positive feedback loop: the most successful product, the best product, often has the most users; having the most users usually means you get the most data; and with modern machine learning, having the most data usually means you can do the best AI, the best machine learning, and therefore have an even better product. And this results in a positive feedback loop for your product. So when we launch new products, we often explicitly plan out how to drive this cycle, and I'm seeing pretty sophisticated strategies in terms of deciding how to grow our products, sometimes by geography, sometimes by market segment, in order to drive this cycle.
Now, this concept has been around for a long time, but it has become a much stronger positive feedback loop just recently, for the following reason: traditional AI algorithms worked like that flatter curve, so beyond a certain point, more data didn't help, right? This is the data-versus-performance picture again. So I feel like ten years ago, data was valuable, but it created less of a defensible barrier, because beyond a certain threshold of data it just didn't really matter. But now that AI works like the steeper curve, data is becoming even more important for creating defensible barriers for AI-powered businesses. Let's see. All right, a slight digression. Several of you asked me about this; actually, Ravi was kind enough to take the audience questions from the sign-up form and summarize them into major categories. One of them was AI's societal impact, one was practical questions about AI, and one of the headings that Ravi wrote was "scared," as in, will AI take over human brains or kill humans or whatever. So, this is the virtuous circle of AI; there is also a, I'm not sure what to call it, I'm going to call it the non-virtuous circle of hype. When preparing for this talk I actually went to a thesaurus to look up antonyms of the word virtuous, and "vicious" came up, but I thought "vicious circle of hype" was a bit too provocative, I don't know. But I feel like there is, unfortunately, this evil-AI hype: will AI take over the world, wipe out the human race, whatever. Unfortunately, some of that evil-AI hype, these fears of AI, is driving funding, because if AI could wipe out the human race, then sometimes wealthy individuals or government organizations or whatever now think, well, let's fund some research, and the funding goes to anti-evil-AI work. And the results of this work drive more hype. I think this is actually a very unhealthy cycle that a small part of the AI community is getting into. And unfortunately, I see a small group of people with a clear financial incentive to drive the hype, because the hype drives funding to them. So I'm actually very unhappy about this hype, and I'm unhappy about it for a couple of reasons. First, I think that there is no clear path today to how AI could become sentient. Maybe there will be a technological breakthrough that enables AI to become sentient; I just don't see it happening. That breakthrough might happen in decades, it might happen in hundreds of years, maybe thousands of years from now, I really don't know. The timing of technology breakthroughs is very hard to predict. And so I once made this analogy that worrying about evil AI killer robots today is a little bit like worrying about overpopulation on the planet Mars. I do hope that someday we'll colonize Mars, and maybe someday Mars will be overpopulated, and someone will ask me, "Andrew, there are all these young innocent children dying of pollution on Mars, how could you not care about them?" And my answer is, we haven't even landed on the planet yet. I don't know how to work productively on that problem.
So I feel like, if you ask me, do I support doing research on X, for almost any subject, I usually want to say yes; of course doing research on anti-evil AI is a positive thing. But I do see a massive misallocation of resources. I think if there were two people, maybe ten people, in the United States working on anti-evil AI, that's fine, just as if there were ten people working on overpopulation of Mars, that's fine too: form a committee, write some papers. But I do think there is much too much investment in this right now. So, yeah, sleep easy. And maybe the other thing, and quite a lot of you asked about societal impact, which I found inspiring, the other thing I worry about is this evil-AI hype being used to whitewash a much more serious issue, which is job displacement. Frankly, I go to the academic conferences, I know a lot of the leaders in machine learning, and I talk to them about their projects, and there are so many jobs that are squarely in the crosshairs of my friends' projects. And the people doing those jobs, frankly, just don't know it. So those of us in Silicon Valley will be responsible for creating tremendous wealth, but part of me feels like we need to be responsible as well for owning up to the problems we cause, and I think job displacement is the next big one. Thank you. I'll say a little bit more about that at the end. We shouldn't whitewash this issue by pretending there's some other futuristic fear to fearmonger about and trying to solve that while ignoring the real problem. Let's see. So, the last thing I want to talk about is AI product management. AI is evolving rapidly, it's super exciting, there are opportunities left and right, but I want to share with you some of the challenges I see as well, some of the things we're working on at the bleeding edge where I feel like our own thinking is not yet mature, but that you'll run into if you try to incorporate AI into your business. So, AI product management. Many of you know what a PM is, but let me just draw for you a Venn diagram; here's my simple model of how PMs and engineering work together. Let's say this is the set of all possible things, all possible products, that users will love. And this is the set of all things that are feasible, meaning that today's technology, or technology in the near future, enables us to build them. So, for example, I would love a teleportation device, but I don't think that's technologically feasible, so a teleportation device would be here: we would all love one, but I don't think it's feasible. And there are a lot of things that are feasible but that nobody wants; we could build a lot of those things too, for no good value. And I think the secret is to try to find something in the middle. So, roughly, I think of the PM's job as figuring out what's in the set on the left, and engineering's job as figuring out what's in the set on the right.
And then the two have to work together to build something that's actually in the intersection. Now, one of the challenges is that AI is such a new thing that the workflows and processes we're used to in tech companies aren't quite working for AI products. In Silicon Valley we have pretty well-established processes for product managers and engineering to do their work. For example, for a lot of apps the product manager would draw a wireframe. So, for example, for the Baidu search app, the PM might decide: put a logo there, put a search bar there, put a microphone button there, put a camera button there, and then put a news feed here. Then actually, we moved our microphone button down here, and have this button, this button, and this button. So a product manager would draw this on a piece of paper or in some design tool, and the engineer would look at the drawing that the product manager drew and write a piece of software. And this is actually a rough wireframe for the Baidu search app, with a search bar and then tons of news here; it really combines search as well as a news feed, not really a social news feed, but a news feed, both in one. So this works: for a lot of apps, like a news app or a social feed app or whatever, this type of working together is a well-established process. But how about an AI app? You can't wireframe a self-driving car; what is my wireframe for my self-driving car? Or, if you want to build a speech recognition system, the PM draws this microphone button, but how accurate does my speech recognition system need to be? So whereas this wireframe was a way for the PM and the engineer to communicate, we are still, frankly, trying to figure out what good ways there are for a PM and an engineer to communicate a shared vision of what an AI product should be. Does that make sense? So the PM does a lot of work, goes out, figures out what's important to users, and has in their head some idea of what this product should be, but how do they communicate that to the engineer? As a concrete example, let's say you're trying to build a speech recognition system; my team and I did a lot of work on speech recognition, so I've thought about this a lot. If you're trying to build a speech recognition system, say to enable voice search, there are a lot of ways to improve a speech recognition system. Maybe you want it to work better in noisy environments, but a noisy environment could mean a car environment or a cafe environment, people talking versus car noise or highway noise. Or maybe you really need it to work on low-bandwidth audio, because sometimes users are in a bad cell phone coverage setting. Or maybe you need it to work better on accented speech, right?
Well, the US has a lot of accents, China also has a lot of accents, so what does accented speech mean? Does it mean a European accent or an Asian accent, and if European, does that mean British or Scottish or French, or what does accented really mean? Or maybe you really care about something else. So one of the practices we've come up with is that one of the good ways for a PM to communicate with an engineer is through data. What I mean is, for many of my projects we ask the PM to be responsible for coming up with a data set: for example, give me, let's say, 10,000 audio clips that really show me what you care about. And if the PM comes up with a thousand or ten thousand recordings of speech and gives this data to the engineer, this gives the engineer a clear target to aim for. So I've found that having the PM be responsible for collecting, really, a test set is one of the most effective processes for letting the PM specify what they really care about. If all 10,000 audio clips have a lot of car noise, that is a clear way to communicate to the engineer that you really care about car noise. If it's a mix of these different things, then it communicates to the engineer exactly what mix of these different phenomena the PM wants them to optimize for. I have to say this is one of those things that's obvious in hindsight, but surprisingly few AI teams do it. One of the bad practices I've seen is when the PM gives an engineer 10,000 audio clips, but actually cares about a totally different 10,000. That happens surprisingly often, in multiple companies. And I feel like we're still in the process of advancing the bleeding edge of these workflow processes for how to think about new products.
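As a sketch of the test-set-as-spec idea, going no further than what is described above: the PM hands over labeled clips tagged by the condition they care about, and the engineer reports accuracy per condition. The `transcribe` function below is a hypothetical stand-in for whatever recognizer is being built, and the clip names and transcripts are invented for illustration.

```python
# Sketch: the PM communicates priorities through a curated test set.
# The engineer evaluates the recognizer on it, broken down by condition.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def transcribe(clip_path: str) -> str:
    """Hypothetical placeholder for the speech recognizer under development."""
    return "..."

# The PM's test set: (clip, reference transcript, condition the PM cares about).
# The mix of conditions is itself the spec.
pm_test_set = [
    ("clip_001.wav", "navigate to the nearest gas station", "car_noise"),
    ("clip_002.wav", "what is the weather tomorrow", "cafe_noise"),
    ("clip_003.wav", "call my mother", "low_bandwidth"),
    # ... thousands more clips
]

scores = defaultdict(list)
for clip, reference, condition in pm_test_set:
    scores[condition].append(word_error_rate(reference, transcribe(clip)))

for condition, rates in scores.items():
    print(f"{condition:15s} word error rate = {sum(rates) / len(rates):.2f}")
```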
So here's another example. We've done a lot of work on conversational agents. With a conversational agent, I might say to the AI, please order takeout for me, and then the AI says, well, what restaurant do you want to order from? And I say, oh, I feel like a hamburger or whatever. So you go back and forth; it's a conversational agent or chatbot that you can tell to order food or whatever. Again, if you were to draw a wireframe, the wireframe would be: you say this, the chatbot says this, you say this, the chatbot says this. But this is not a good spec for the AI. The wireframe is the easy part, the visual design; you can do that. But how intelligent is this thing really supposed to be? So the process we developed at Baidu is that we ask the PM and the engineer to sit down together and write out 50 conversations that the chatbot is meant to have with you. For example, let's say the user, U for user, says: please book a restaurant for my anniversary next Monday, or something; I'm abbreviating so I can write fast. The PM then says, well, in this case I want the AI to say, okay, and do you want flowers? Do you want me to order flowers too? What we found is that this then creates a conversation between the PM and the engineer, where the engineer asks the PM, wait, do you want me to suggest an appropriate gift for all occasions, and all possible appropriate gifts? For Christmas, do I suggest something else? I wouldn't know what to buy for Christmas. Or is it only for anniversaries that you want me to suggest flowers, and no other gift for anything other than anniversaries? We found that this process of writing out 50 conversations for the conversational agent, with engineering and the PM sitting down together to work through the conversations, is a good process to enable the PM to specify what they think is in that set on the left, what the user would love, and for the engineer to tell the PM what the engineer thinks is feasible given today's chatbot technology. And this is actually a process we're using in multiple products. So, AI technology is advancing rapidly, and there are so many shiny things in AI. The things you see the most in PR are often the shiniest technology, but the shiniest technology is often not the most useful. I think we're still missing a lot of the downstream parts of the value chain: how to take the shiny AI technology that we find in research papers and think about how to develop a product or a business around it. Software engineering today has established processes like code review and agile development; some of you know what those are. These are established processes for writing code. I think we're still in the early phases of trying to figure out how on earth to organize the work of AI and the work of AI products, and this is actually a very exciting time to enter this field. Let's see. All right, I'll leave time for questions, so, real quick, I want to share with you some specific examples of short-term opportunities for AI, things that are coming in the very near future. Let's see, I mentioned FinTech, so I won't say more about that. In the near-term future, I think speech recognition will take off. It's just in the last year or two that speech recognition has reached the level of accuracy where it's becoming incredibly useful. About four or five months ago there was a Stanford University study, led by James Landay, a professor of computer science, together with us at Baidu and the University of Washington, which showed that speech input on a cell phone is about three times faster using speech recognition than typing on the cell phone keyboard. So speech recognition has passed the accuracy threshold where you are actually much faster and much more efficient using speech recognition than typing on a cell phone keyboard, and this is true for both English and Mandarin Chinese. And at Baidu over the past year, we saw 100% year-on-year growth in the use of speech recognition across all of our properties. So I think we're beyond the knee of the curve, and speech recognition will take off rapidly. And, you know, in the US there are multiple companies doing smart speakers.
Baidu has a different vision, but I think the device that you can command with your voice in your home will also take off rapidly. We have an operating system that we release to hardware makers to enable that. What else? Computer vision is coming a little bit later. You know, I see some things take off faster in China than in the US, and because all of us living in the US are familiar with the US ones, I might lean a little bit toward sharing things I see from China. One thing that's taking off very rapidly is face recognition. I think it's because China is a mobile-first society. Most of us in the US first had a laptop or a desktop, and then we got our smartphones. A lot of people in China really just have a smartphone, or first got a smartphone and then a laptop or desktop; I'm not sure who buys desktops these days, anyway. Because of that, in China you can apply for an educational loan on your cell phone, and just based on that, just based on using your cell phone, we will send you a lot of money for your education. So because these very material financial transactions are happening over your cell phone, before we send you a lot of money, we would really like to verify that you are who you say you are, rather than sending it to someone who claims to be you but isn't you. This in turn has driven a lot of pressure for progress in face recognition, and so face recognition on mobile devices, as a means of biometric identity verification, is taking off in China. We've also done things like, today at Baidu's headquarters, instead of having to swipe an RFID card to get inside the office building, do I have mine on me? No, I don't. Instead of swiping a card, I can just walk up, and there's a face recognition system that recognizes my face, and I just walk right through. Just yesterday, or the day before, I posted a video on my personal YouTube channel demoing this; you can look that up later if you want. But we now have face recognition systems that are good enough that we trust them with pretty security-critical applications. Like, if you look just like me, you can actually get inside my office at Baidu, right? So we really trust our face recognition systems. Let's see. Oh, and I think both of these trends have been obvious to us for some time, so our capital investments and data investments in them have been massive; these are well beyond the point where a small group could be competitive with us, unless there's some unexpected technological breakthrough. Let me mention some things a little bit further out. I'm personally very bullish about the impact of AI on healthcare, and I spend quite a bit of time on this myself. The obvious one that a lot of people talk about is medical imaging. I do find that challenging, but I do think that a lot of radiologists who are graduating today will be impacted by AI, definitely sometime in the course of their careers. If you're planning for a 40-year career in radiology, I would say that's not a good plan.
But beyond radiology, I think there are many other verticals, some of which we're working on, and there's a huge opportunity there. And on and on, right? I think FinTech is there. I hope education will get there, but I think education has other things to solve before it really sees a huge impact from AI. But I really think AI will be incredibly impactful in many different verticals. So, let's see. What I talked about today was AI technology as it exists today, really supervised learning. And I would say there's already a relatively clear roadmap for how to transform multiple industries using just supervised learning. There are researchers working on other forms of AI as well; you might hear words like unsupervised learning, or reinforcement learning, or transfer learning. These are other forms of AI that maybe won't need as much data, or have other advantages. Most of those are in the research phase, or are used in relatively small ways, and are not what's driving economic value today. But many of us hope there will be a breakthrough in these other areas, and if that comes to pass, they'll unlock additional waves of value. So, let's see. The field of AI has had several winters before: I think the field was overhyped, wound down, got somewhat overhyped again, wound down. So I think there were maybe two winters in AI. But many disciplines undergo a few winters and then an eternal spring, and I actually think that AI has passed into the phase of eternal spring. One of the questions someone asked was, when will AI no longer be the top technology, or something like that. And I feel like, if you look at silicon technology, I think we're in the eternal spring of silicon technology too; maybe some other material will surpass it someday, but the concept of a transistor, of a computational circuit, seems like it's going to be with the human race for a long time. And I think we have reached that point for AI, where AI, neural networks, deep learning, will be with us for a long time. I can't tell you exactly how many years out, but it could be a very long time, because it's creating so much value already, and because there is this clear roadmap for transforming several industries even with the ideas we already have; and hopefully there will be even more breakthroughs and even more of these technologies. All right, very last topic: the jobs issue. To the extent that we're causing these problems, which is the job displacement issue, I think we should own up to it. Just as AI displaces jobs, similar to earlier waves of job displacement, I think AI will create new jobs as well, maybe even ones we can't imagine yet. That's why, you know, I've worked on education at Coursera for a long time, and I think one of the biggest challenges of education is motivation. As in, it's really good for you to take these courses and study, but it's actually really difficult for an individual to find the time, the space, and the energy to do the learning that gives them these long-term benefits.
So, after automation replaced a lot of agricultural jobs, the United States built its current educational system, K-12 and university, and there was a lot of work to build the world's current educational systems. With AI displacing a lot of jobs, I'm confident that there will be new jobs, but I think we also need a new educational system to help people whose jobs are displaced reskill themselves to take on the new jobs. So one of the things that some governments, one of the things I think we should move towards, is a model of basic income, but not necessarily universal basic income, where you're paid to, quote, do nothing. I think governments should give people a safety net that pays the unemployed to study, that provides a structure that helps the unemployed study, so as to increase the odds that they gain the skills needed to re-enter the workforce and contribute back to the tax base that is paying for that basic income. So I think we'll need something like a new New Deal in order to evolve society towards this new world, where there are new jobs, but where job displacements are also happening faster than before. Maybe I'll say more about that later. Finally, and this is really the final thing: I know that here in the GSB many of you have fantastic product, business, or social-change ideas. One of the things I hope to do is, frankly, connect GSB and CS. I think GSB and CS have very complementary sets of expertise, but for various complicated reasons we won't get into, the two communities don't seem very connected. So I'm in the process of organizing some events that I hope will bring together some CS, some GSB, and maybe also some VCs, some capital investors, for those of you interested in exploring the new opportunities that AI creates. If you want to be informed of that, sign up for the mailing list at bit.ly/gsb-ai. Some of these things are already underway, but actually, instead of taking a picture of this, if you just go and sign up for this on your cell phone right now, yes, you can do that while I'm taking questions. When they're ready to be announced, they'll be announced to the mailing list, so you can come and be connected. So, with that, happy to take questions, but let me say thank you all very much.

Thanks so much, Andrew. It was a great talk, and a lot of us, I know, are engaged in product development and product management in the field of AI, and you've given us a lot of good frameworks to think about these conversations. And the mailing list is right there in case you want to note it down. So, Andrew has graciously accepted to field some questions until about 5:30. If you have any questions, there are going to be some Sloan Fellows moving around the room, so please attract their attention. But I can kick off with a question. I really wanted to ask this question because it reminded me of my GSB essay, which is: what scares you about AI and why? But I guess you already answered part of that.
So maybe you can touch on that. And another question which I felt was interesting: what is the role of non-technical leaders in the development of AI, and who is in charge of the ethical decisions being made in directing AI?

Yeah, right. So what scares me is definitely the job displacement. I'll be really honest with you: with the recent presidential election, part of me really wonders whether many of us in Silicon Valley have failed a large fraction of America. That's me being really honest. I'm not saying I agree with everything happening in politics right now, but part of me wonders whether we created tremendous wealth but also, frankly, left a lot of people behind, and I think it is past time for us to own up to that and take responsibility for addressing it.

In terms of ethical issues: I think jobs are so important that I'm tempted not to talk about anything else. But AI is really powerful and could do all sorts of things, and there are some smaller issues, such as bias. For example, with web search we want to make sure that if you search for a certain ethnic group, you don't get, say, ads suggesting you check their criminal record or something like that. We don't want AI to exhibit bias. If an AI thinks you're male versus female, we don't want to show you very different types of information that reinforce gender stereotypes. So I think there are some cultural bias issues. I also think openness matters: the AI community is very open today, and I think we must fight to keep it open. But the number one issue, by far, is jobs. Let me take some questions. How does the microphone work?

Hi, Catherine Qian here; I'm a '16, graduated last year. Thanks for the talk. I had a question about the defensibility of AI. You mentioned three things: access to data, talent scarcity, and the positive feedback loop. One and three, access to data and the positive feedback loop, seem to really benefit large companies, or companies that already have the AI technology. So I'm wondering, at what point is it going to be really tough for startups to become AI startups? And secondly, for investors, at what kind of scale do those investments need to be made for a startup to be successful?

Sure. Just to clarify: I think the scarce resources are data and talent, and the positive feedback loop is a strategy, a tactic, to drive toward data. I think that for the problems I talked about, like speech recognition or face recognition, it's going to be really difficult for a small company to acquire enough data or talent to compete effectively, unless there's an unexpected technological breakthrough that lets small groups do things that can't be done with today's technology. But I think there are lots of smaller verticals.
So for example, take medical imaging: there are some diseases with so few cases around the world that if you have a thousand images, that might be almost all the data that exists in the world. So that's one thing: there are some verticals where there just isn't that much data. But I think the other thing is that there are so many opportunities in AI today. I'll tell you honestly, my team regularly writes full-fledged business plans for a new vertical: we do the market research, size the market, work out the economics, and it all looks good, and then we decide not to do it, because even we don't have enough talent to go after all the big options, and there's something even bigger we want to do. So I think we're fortunate that there are so many opportunities today that there are plenty the large companies are frankly not pursuing, because today's world has more opportunities than talented AI researchers.

There's a question over there. Hi, Andrew. What do you think of the use of AI in the creation of inventions, something that's usually the preserve of the human mind: the use of AI to create inventions, and even to patent those inventions?

I would say we're in the very early phases. Creativity is a very funny thing. Whether AI can compose music is so subjective; even with 20-year-old technology for automated music composition, a lot of us thought the compositions sounded horrible, but there were some people who loved them. We've also seen a lot of cool work with AI doing special effects on images, synthesizing, say, a picture so that it looks like it was painted by a certain painter. It feels like a small but very interesting area right now. But making complex inventions, inventing a totally new, very complicated system with many pieces, I think that's beyond what I can see a clear path to today.

A couple of questions here. Yep, go ahead. Could people hear that, or should I repeat it? The question was: scalability drives a lot of the power of AI, but if Moore's law is coming to an end, how does that affect the scaling of AI?

So, having seen the roadmaps of multiple high-performance computing hardware companies: whereas Moore's law for a single processor doesn't seem to be working very well anymore, I have seen specific and, I think, credible roadmaps from microprocessor companies showing that for the types of computation we need for deep learning, for neural networks, they will keep scaling for the next several years. A lot of this is SIMD processing, single instruction, multiple data. It turns out to be much easier to parallelize than a lot of other workloads; your word processor is actually much harder to parallelize, while a neural network is much easier to parallelize, so I feel there is still a lot of headroom for faster computation.
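As a rough illustration of why neural-network workloads map so well onto SIMD and GPU-style hardware, here is a minimal sketch, not from the talk, assuming NumPy and a made-up two-layer network: the bulk of a batched forward pass is a few dense matrix multiplies, where the same operation is applied to many data elements at once.

```python
import numpy as np

# Hypothetical two-layer network, 784 -> 256 -> 10; sizes chosen only for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 256)) * 0.01, np.zeros(256)
W2, b2 = rng.standard_normal((256, 10)) * 0.01, np.zeros(10)

def forward(X):
    """Forward pass for a whole batch at once.

    Each line is one big dense matrix operation, so the same instruction is
    applied to many data elements in parallel; this is the structure that
    SIMD units and GPU-style accelerators exploit.
    """
    h = np.maximum(0.0, X @ W1 + b1)   # ReLU hidden layer
    logits = h @ W2 + b2
    return logits

# A batch of 512 examples is processed with the same few matrix multiplies
# as a single example, just with larger matrices.
X = rng.standard_normal((512, 784))
print(forward(X).shape)  # (512, 10)
```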
I would also say that when I look across the mix of problems, many AI problems are bottlenecked by data, but many are also bottlenecked by computational speed. There are some problems where our ability to acquire data exceeds our ability to process that data inexpensively, so further progress in HPC, which I think there is a roadmap for, should open up more of that value.

Next question, go ahead. Hello. Hi, Andrew, my name is Eric Haley; I'm a startup founder working in machine learning. I had two questions. First, you mentioned that algorithms aren't the special sauce for being successful in AI. What do you recommend for people building and working on AI in terms of IP protection, or the best ways to get around that and still build a valuable product? And second, you mentioned the relationship between the PM and the engineer, the cycle of data, and how they communicate; that's for building a product. What about people doing R&D, say research on reinforcement learning or unsupervised learning? Is there a certain cycle or strategy you would recommend for research breakthroughs, or to improve the research process?

Sure. On IP protection: that sounds like one of those topics where, whatever advice I give, I'll get into trouble with lawyers, so I don't have a strong opinion. I see a lot of companies file for patents, but how much you can rely on them for defensibility is an open question; I don't know, you should ask a lawyer. We do tend to think strategically about data as a defense; data is really the area we rely on.

In terms of processes for R&D: the academic research community tends to favor novelty; anything novel and shiny can get a paper published. But having supervised PhD students at Stanford for a long time, I would say that if you want to be a deep learning researcher and publish papers, the process I would give people is this: read a lot of papers, and then go beyond reading papers and replicate existing papers' results. This is one thing that is underappreciated. Actually pull back a little from trying to invent your own new thing and spend a lot of time replicating published results. I've found that to be a very good training process for new researchers. The human brain is this marvelous thing; it works every time, I've never seen it fail: if you read enough papers, really study and understand them, and replicate enough results, pretty soon you'll have your own ideas for pushing forward the state of the art. I've mentored enough PhD students to claim with high confidence that this is a very reliable process. And then go submit your paper and get it published.

Next? Thank you. I'm a mechanical engineering student aspiring to be a roboticist when I graduate. I was wondering, what are the best opportunities for mechanical engineers to go into as it relates to AI and robotics? Yeah.
So, I've seen a lot of ME people have very successful careers in AI. Actually, one of my PhD students was an ME PhD student who transferred to the CS PhD program and did very well. I think robotics has many opportunities, and specifically, well, you're Stanford students, so I would say take some CS AI classes and try to work with the AI faculty. I do think there are a lot of opportunities to build interesting robots in specific verticals. For example, I think precision agriculture is a very interesting vertical. There are now multiple startups using AI there. Some of my friends are running Blue River, which is using computer vision to look at specific plants, specifically heads of cabbage, and have an AI decide which heads of cabbage to kill and which ones to let live, so as to maximize crop yield. So that's one application where AI is letting you make, well, life-and-death decisions, but life and death for heads of cabbage, not humans; it is letting you make one-at-a-time life-and-death decisions for heads of cabbage. So I think precision agriculture is one vertical. I also think there's interesting work in surgical robotics, but that has a bigger FDA approval process, so that's a longer cycle. And one thing that has taken off in China is companionship robots, more social companionship robots, that are being built in southern China. It hasn't really taken off in the US yet, but there are surprisingly many of these in China.

Thank you. Who's next? Hi, I'm Phil, co-founder of Hero Baby, a Palo Alto-based startup that helps parents understand the developmental needs of their child and pairs that with baby products. I'd love to hear your take on pairing AI with humans. Do you think it's usually faster, for most applications, to focus on AI-only approaches right away, or to have a hybrid solution of AI and humans, for example in self-driving cars or chatbots and so on?

Yeah, I don't know that there's a general rule for that; it's case by case, I guess. A lot of the speech recognition work is about making humans more efficient in terms of how you communicate with, or through, a cell phone, for example. And then for self-driving cars, we know that if a car is driving and it wants you to take over, you need maybe ten, fifteen, maybe even more seconds to take over; it's incredibly difficult to wrench the attention of a distracted human back to taking over a car. That's why I think full autonomy, level-four autonomy, will be safer than trying to have a human take over at a moment's notice. So that might be one case where the mix between full and partial automation is challenging from a user-interface point of view. But I don't know that there's a general rule for that.

There are some questions at the top; let's go in a different direction. Okay, so when you talked about opportunities for AI, you mentioned online education. I just want to know more about this.
You mentioned that the motivation problem is one of the problems for online education. But do you think this is the biggest challenge online education is facing, and one that AI could probably solve, or do you think there are other challenges for online education? By motivation I mean that people don't want to spend enough time to finish the whole course.

Yeah. So AI is helping with education, and people have talked about personalized tutors for a long time. Today Coursera uses AI to give you customized course recommendations, and there's AI for auto-grading, so I would say it's definitely helping at the margins. But I would say that education still has a big digital transformation to go through, maybe even without that much involvement of AI. Maybe one pattern that's true for a lot of industries is that first comes the data, and then comes the AI. Healthcare fits this pattern: over the past few years, thanks partially to Obamacare, there has been a huge movement in the United States, and in other countries too, toward electronic health records, EHR. The rise of EHR, and the fact that your X-ray scans all went from film to digital, means that wave of digitization has now created a lot of data that AI can eat to create more value. A lot of education still feels like it is first undergoing that digital transformation, and while AI can certainly help, I think there is still a lot of work to do on just the digital transformation.

I think there's one more question at the top. Yeah, could you talk a little bit about how Baidu is using AI for managing your own cloud data centers, primarily IT operations management use cases?

Sure. I'll give one example. About two years ago we did a project showing that we can detect hardware failures, especially hard disk failures, a day ahead of time using AI. This allows us to do preemptive maintenance: hot-swap the hard disk, or copy the data off, even before it fails, thus reducing cost and increasing reliability. We've also been working to reduce the power consumption of our data centers, and some of the load balancing uses AI. I can't point to one big thing, but I feel like in many places AI has had an impact optimizing various aspects of data center performance.
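As a rough sketch of how that kind of day-ahead disk-failure prediction can be framed as supervised learning. This is not Baidu's actual system; the SMART-style telemetry file, column names, and scikit-learn model here are illustrative assumptions.

```python
# Illustrative sketch only: assumes a CSV of per-drive SMART telemetry with a
# label column marking whether the drive failed within the next 24 hours.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

df = pd.read_csv("disk_telemetry.csv")               # hypothetical file
feature_cols = [c for c in df.columns if c.startswith("smart_")]
X, y = df[feature_cols], df["failed_within_24h"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
pred = clf.predict(X_test)

# Precision matters because every predicted failure triggers a preemptive
# swap or data migration; recall matters because a missed failure costs data.
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```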
We'd like to stay for longer, but we have to leave the room for the next event, so this will probably be the last question.

Hey, how are you? I actually studied at both CS and the GSB. My question is this: you mentioned that the sweet spot for AI problems is things a human can process in less than a second, that those make a good problem set for AI to solve. Can you comment on the other side of the spectrum? In your experience, are there problems that take a human a lot more than a second, or a long time, to process, yet after careful modeling or careful planning you were able to solve with AI? Can you give some examples of that?

Yeah, sure, I'll give you a couple. First, there are things AI can do that humans can't do in less than a second. For example, I think Amazon today does a way better job recommending books to me than even my wife does, and the reason is that Amazon has a much more intimate knowledge of what books I browse and what books I read than even my wife does. Advertising is similar: honestly, the leading internet companies have seen so much data about which ads people click on and don't click on that they have become remarkably good at that task. So there are some problems where a machine can consume way more data than any human can, model the statistical patterns, and make predictions. That is a case where AI surpasses human performance because it consumes so much data, like Amazon knowing my book preferences better than my wife does.

And then, for tasks that take a human more than one second, a lot of the work of designing AI into the workflow is piecing many small AI pieces together into a much bigger system. For example, to build a self-driving car, we use AI to look at the camera image, the radar, the lidar, whatever sensor data, say a picture of what is in front of the car, and supervised learning estimates the positions of the other cars, and supervised learning estimates the positions of the pedestrians. But these are just two small pieces, well, two important pieces, of the overall AI. Then there's a separate piece that tries to estimate where this car will be in five seconds and where this pedestrian is going. There's another piece that plans: given that all of these objects are moving in this way, how do I plan my car's path so that I don't hit anything? And then after that, there's how I turn the steering wheel: do I turn it five degrees or seven degrees to follow this path? So often a complicated AI system has many small pieces, and all the ingenuity is in figuring out where to take this superpower of supervised learning and put it into a much bigger system that creates something very valuable.
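Here is a minimal sketch of that "many small pieces" structure. The function names, the toy geometry, and the stand-in logic are all assumptions for illustration, not any real self-driving stack; each stage marks where a learned component (perception, prediction) or an engineered one (planning, control) would plug in.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrackedObject:
    x: float   # meters ahead of our car
    y: float   # lateral offset, meters
    vx: float  # estimated velocity, m/s
    vy: float

def detect_objects(sensor_frame) -> List[TrackedObject]:
    # Stand-in for the supervised-learning perception models that estimate
    # where the other cars and pedestrians are right now.
    return sensor_frame["detections"]

def predict_positions(objs: List[TrackedObject], horizon_s: float) -> List[TrackedObject]:
    # Stand-in for the prediction piece: where will each object be in ~5 seconds?
    return [TrackedObject(o.x + o.vx * horizon_s, o.y + o.vy * horizon_s, o.vx, o.vy)
            for o in objs]

def plan_lateral_offset(predicted: List[TrackedObject]) -> float:
    # Stand-in for the planner: nudge away from anything predicted to sit in our lane.
    blocking = [o for o in predicted if abs(o.y) < 1.5 and 0 < o.x < 30]
    return 2.0 if blocking else 0.0

def steering_command(lateral_offset_m: float) -> float:
    # Stand-in for the controller: turn the planned offset into a wheel angle in degrees.
    return max(-7.0, min(7.0, 3.0 * lateral_offset_m))

def drive_one_tick(sensor_frame) -> float:
    # The overall system chains small pieces; only some of them are supervised
    # learning, but each has a narrow, well-defined job.
    objs = detect_objects(sensor_frame)
    predicted = predict_positions(objs, horizon_s=5.0)
    offset = plan_lateral_offset(predicted)
    return steering_command(offset)

# Toy example: a pedestrian 20 m ahead, drifting toward our lane.
frame = {"detections": [TrackedObject(x=20.0, y=3.0, vx=0.0, vy=-0.5)]}
print(drive_one_tick(frame))  # a small steering correction, in degrees
```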
I'll take one more question at the back. Hi, I'm Mahidhar, I'm a solutions architect at a company called WOLDB. You mentioned jobs and wealth distribution, and since this is a management forum, I want to ask: what sort of role do you see for product managers when interacting with, say, sociologists or the legal profession? To give an example, if you're building a self-driving car and a collision is about to happen, the developer, or the AI, has to take into consideration the person driving the car or the pedestrian it's about to hit. That's a legal question, and there will be a lot of questions like these. What do you see as the role of management in interacting with these different functional areas?

So the most famous example of that kind of question is the thing called the trolley problem, which is a philosophy-class ethical dilemma. In the classical version, you have a trolley running on rails, and the trolley is about to hit and kill five people, and you have the option of yanking a lever to divert the trolley so that it kills one person instead. The ethical dilemma is whether you yank the lever or not: if you do nothing, five people die; if you do something, one person dies, but you killed that person. So are you going to kill someone, versus not doing anything? It turns out that the trolley problem wasn't important even for trolleys. In the several hundred years of the history of trolleys, I don't know that anyone ever actually had to decide whether to yank the lever; it's just not an important problem outside of philosophy classes. And I think self-driving car teams are not debating this; philosophers are debating this. Frankly, if you're ever facing a trolley problem, chances are you made a mistake long ago, you know? When was the last time you faced a trolley problem driving your car? I expect a self-driving car to face it about as often as you have, which is probably pretty much never. Right now the problem for self-driving cars is that there's a big white truck parked across the road, and your options are to slam into the truck and kill the driver, or to brake, and we don't always make the right decision on that. So let's solve that before solving the trolley problem.

That is, I think, a good point to end this great talk. Thank you very much. Thank you.
