2018 Isaac Asimov Memorial Debate: Artificial Intelligence


Hi everybody, I’m Neil deGrasse Tyson, the Frederick P. Rose Director of the Hayden Planetarium. And welcome to — is this our 18th or 19th? — our annual Isaac Asimov panel debate. This has become a very hot ticket in New York City, and I almost feel apologetic because we can’t accommodate everyone who wants to see it. We had to go to a lottery model to get tickets out. And, you know, short of going to a bigger venue or charging more — I’m still trying to work this out — but you’re here in the audience now, and that’s good. That’s what it’s worth. So, did they tell you what tonight’s topic is? The very hot topic on every frontier: we’re talking about artificial intelligence. Are people afraid of it? Do people embrace it? Should we be doing it? Should we not be doing it? It’s all over the news, not least in today’s business section of The New York Times. This is a paper version of the news — the newspaper. It’s got an android robot holding the national flag of China, and the title is “China’s Blitz to Dominate A.I.” And I just came back 48 hours ago from the United Arab Emirates, and they have a newly established Minister of Artificial Intelligence. There are countries around the world that see this and recognize it as a way to leapfrog technologies. And I think this is another one — here it is: China’s blitz to rule A.I. meets with silence from the White House. So I just thought I would say that. I’m just saying. We’re trying to burn clean coal; that’s what our priorities are. But I’m just saying. Don’t get me started. That’s the topic tonight.

We combed the country to find some of the top AI people in the land, and we’re delighted with the mix of five panelists we have this evening. Let me first introduce to you who’s right in the wings. Michael Wellman. Michael Wellman is professor of computer science and engineering at the University of Michigan, and he leads the Strategic Reasoning Group. Michael Wellman, come on out. Thank you. And next up, we have a friend and colleague in the astrophysics community who’s directed his attention to AI: Max Tegmark. Come on out, Max. Professor of physics. He’s doing research in AI at MIT, and he’s also president of the Future of Life Institute. So Max, welcome. Next — you couldn’t do this without representation from industry, and that’s precisely what we obtained for this panel. John Giannandrea, come on out. John. Thank you. John is a senior vice president of engineering at Google, where he leads the Google Search and Google AI teams. So we’ve got Google in the house. Google in the house. Next, I’ve got Ruchir Puri. Ruchir, come on out. Ruchir is the chief architect of IBM Watson, and he’s also an IBM Fellow. And we’ve got Helen Greiner. Helen, come on out. Helen? Thank you. Helen is co-founder of the iRobot Corporation, maker of the Roomba. The Roomba — we all know the vacuuming robot. She’s also founder of the drone company CyPhy Works. She makes drones now. Is that good or bad? I don’t know; we’ll find out. Ladies and gentlemen, thank you for coming. This is our panel, everyone. Yes.

So Mike, you’re a professor at the University of Michigan. What do you do? Well, I study artificial intelligence from the perspective of economics. You know, economics is a social science that treats its entities, its agents, as rational beings. Really? Really. Artificial intelligence is the subfield of computer science that’s trying to make ideally rational beings.
So it’s a very natural fit. Can an irrational being make a rational being? We can do our best. And so you teach a course on this. I’m just curious, how do you frame a course around something that’s so dynamic, so changing, and so emotionally fraught with fear? What I do, and what one does in teaching an AI course, is you bring together the standard frameworks and representations and algorithmic techniques that AI people have developed over the years to address thinking-like problems — in reasoning, in problem solving, decision-making, learning — using very standard sorts of algorithms. Now, some people come to it from the emotional perspective. I sometimes have gotten comments on my teaching evaluations that said, I signed up for an AI course and all I got was computer science. That’s what it is. It’s an engineering discipline, and that’s the best way to make progress.

Excellent. So Helen, what are you about? I’m all about the robots. We’re all about the robots, yeah? My brother was a huge Star Wars fan when we were young, and for me it was all about R2-D2; I’ve wanted to build robots since I saw Star Wars on the big screen. He’s got everything: character, strategy, loyalty. You’re telling me Star Wars had, like, a positive net effect on this world? I think it had a positive net effect on children. At least one here, yeah. Many, many. So we’ve been trying to build robots like this, and we’ve had great accomplishments. We’ve had robots that have been credited with saving the lives of hundreds of soldiers, thousands of civilians. We’ve got the Roomba, which was the best-selling vacuum — not robot vacuum, but best-selling vacuum — last year by retail revenue numbers, and I think a little bit of a cultural icon too. So I think we’ve come some of the way, but we’re not at R2-D2 yet. I think some of the debate is about where it needs to go.

So you co-founded the company iRobot, which I think was the name of an Isaac Asimov book. Yeah, and that company invented the Roomba. Great word, by the way, Roomba. Yeah, that’s just great. That was very good. So we asked our engineers what we should call it first, and they said things like the MockMaster 2000, the CyberSuck. So that name was totally the best marketing dollars we’ve ever spent. So it cost you money to get that word, okay. Does your Roomba count as a kind of AI, would you say? I believe so. People are starting to use AI to be synonymous with deep learning techniques, but for roboticists, there are a lot of tools in the tool bag. Roomba runs something called behavior control, which was invented by one of my business partners at iRobot, where we have a lot of behaviors that all run in parallel. The first generation wouldn’t fall down the stairs, it did obstacle avoidance, it followed the walls. The latest generation — I think it was something like 13 years later — does navigation using a camera system, so visual SLAM techniques.

Okay. Has your Roomba ever killed anyone? You know, we — wait, wait, wait. It’s a yes or no question. Certainly no. But actually, you know, in product design you have to look at what ramifications it could have. The worst thing we came up with: maybe it goes into someone’s fireplace, pulls out the embers, and sets the place on fire. That has never happened — and by the way, fireplaces usually have lips and screens to keep it out. But no, no Roomba has ever killed anyone. Okay.
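To make the behavior-control idea Greiner describes a bit more concrete, here is a minimal sketch of behavior-based arbitration: simple behaviors checked in priority order, with the highest-priority behavior that wants control winning. The behavior names, sensor fields, and commands are invented for illustration; this is not iRobot’s actual code.

```python
# A minimal sketch of behavior-based control: behaviors "run in parallel"
# and an arbiter picks the highest-priority one that wants control.
# All sensor names and commands here are hypothetical, for illustration.

def cliff_avoid(sensors):
    # Highest priority: back away from a stair edge.
    return ("reverse", 1.0) if sensors["cliff"] else None

def bump_avoid(sensors):
    # Turn away from obstacles the bumper detects.
    return ("turn_left", 0.5) if sensors["bump"] else None

def wall_follow(sensors):
    # Hug the wall when the side sensor sees one.
    return ("arc_right", 0.3) if sensors["wall"] else None

def cruise(sensors):
    # Default behavior: just drive forward.
    return ("forward", 0.5)

BEHAVIORS = [cliff_avoid, bump_avoid, wall_follow, cruise]  # priority order

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(arbitrate({"cliff": False, "bump": True, "wall": False}))
# -> ('turn_left', 0.5): bump avoidance subsumes wall-following and cruising
```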
Ruchir — Ruchir Puri? Did I say that correctly? Yep. Yeah, thank you. You’ve been at IBM for more than two decades, and so I’m just curious. Before we get to Watson, which you have something to say about: our earliest memory of IBM getting into this game, I think, was Deep Blue, a chess program that beat the world’s best chess player. What made it so good in its day? So, I’ve dealt with optimization algorithms for pretty much a quarter of a century, and these are optimization algorithms, obviously. So that it can calculate as quickly and efficiently as possible toward a goal. Yeah, really there were three things that came together: search algorithms, really smart evaluation criteria, and the third one is massively parallel computing. Those three things came together to give rise to something that wowed people. It’s an application of technology — of algorithms — coming together from three directions to give rise to an application that was really compelling.

So, could Deep Blue do anything other than play chess? Interestingly, at IBM we have something called grand challenges, and we pose these problems to really move the field forward. Deep Blue was a grand challenge posed to the scientists at IBM. Similar to that, Jeopardy was also a grand challenge. But Jeopardy wasn’t Deep Blue. No, Jeopardy certainly wasn’t Deep Blue. That was Watson, correct? Yes. We’ll get to Watson in a minute; I just want to work my way up to that. And I think I have some firsthand knowledge of your grand challenges. I was once invited to address a retreat of IBM engineers where they were given cash rewards for their innovation. Do they still do that? Certainly. We encourage our employees and scientists to really get the innovations out there and get the innovative juices flowing. Absolutely. Yeah, I was delighted, because each one got recognized, they got a certificate, the CEO was there — it was very much taken seriously. Yep, we still do that. Very good. We’ll get back to you.

John, I think I messed up your last name. Giannandrea? Giannandrea. Oh, Giannandrea. Yes, that’s correct. So, you represent Google on this panel. Could you remind us what the game of Go is, and then tell us what AlphaGo is? Sure. So, Go is this ancient oriental board game, which is harder than chess. Just to be clear, it’s a board game — you didn’t say that. It’s a board game, yes. It’s not a war game, with weapons and things. No, it’s a strategy game. These are pieces, black pieces and white pieces. People have been playing this game for 2,000 years; it’s highly revered in Asia, and people are paid full-time salaries to be professionals at this game. And the reason it’s hard is that from any given position on the board, there are many, many more moves that you could make. So you can’t use brute-force approaches to figure out how to play the game; intuition plays a very big role. So the recent systems have become very, very good at this game — you could even say superhuman at this game, because they beat the world champions.
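As an aside, the first two ingredients Puri lists — game-tree search plus a smart evaluation function — can be sketched in a few lines; the third, massive parallelism, is what let the real Deep Blue search so much deeper. A toy illustration, with plain numbers standing in for chess positions:

```python
# A minimal sketch of game-tree search with alpha-beta pruning over an
# evaluation function. Leaves are just numbers standing in for the scores
# a chess evaluation would assign to board positions; this is a toy, not
# Deep Blue's actual algorithm or data structures.

def evaluate(position):
    # Stand-in for a hand-tuned evaluation of a board position.
    return position  # leaves in this toy tree are already numbers

def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):
        return evaluate(node)          # reached a leaf: score it
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break                  # prune: opponent never allows this line
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

tree = [[3, 5], [2, [9, 1]]]           # a tiny two-ply game tree
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```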
They’re doing something fundamentally new, and people look at that and use words like intuition, which is not a technical word. And so there’s something going on. But it’s a serious issue, because I think when people use words like that — when a chess grandmaster is beaten by Deep Blue, or when the world champion in Go, Lee Sedol, was beaten first in Korea — it’s an emotional toll on that player, because they’ve spent their entire life perfecting their ability to play this game. And then the machine comes along and beats them, and the words that get used are like creativity, or intuition, or “that’s something I didn’t expect it to play.” And I think that adds to the mystique of AI, when actually what’s going on underneath is plain engineering.

So, brute force? No, in the case of the AlphaGo system, it was a combination of training and new algorithms to do so-called deep learning, which I’m sure we’ll get to. OK, so AlphaGo was trained on previous games that had been played. Yes, there are two versions of it. The one that won the world championships was trained on all the human games it could get its hands on, and then played itself. So it basically practiced, after it had learned from the human games. How quickly could it play itself and finish a game? Well, we do it in the cloud with thousands of computers, so we can do it thousands of times at the same time. So — very fast. Very fast, OK. And it was just in the cloud. In the cloud, that’s right — somewhere. The computing cloud, not the storage cloud. That’s right, the computing cloud. Yeah, yeah.

So recently there’s been a version of this called AlphaGo Zero. So that’s an upgrade — a later version. And what the researchers tried to do with that is see if it could learn to play Go without looking at any human games. That way it would come up with stuff on its own. Yeah, and the published result — sorry — was that AlphaGo Zero was actually better than the one that learned from humans, and it also plays chess very well. LAUGHTER I’ll try to find other questions for you later. We’ll see. It doesn’t do Jeopardy, though. OK. So it taught itself, basically. Yeah. And it was not biased by the creativity of any human game that had previously been played. And they played that version against AlphaGo. And it beat AlphaGo. Yeah, that’s right. So it’s extra badass. Yeah. Games are a special thing, because games have an objective score, so they’re actually a good test for the current level of the technology.

So Max, we go back. This is like your fifth time here in the museum; it’s not even your first Asimov panel. So thanks for showing up again. You recently published a book, Life 3.0. It’s your third or fourth book? Second. OK, it feels like three books — it takes so long to read. That’s what it is. Yeah. LAUGHTER Your first book was Our Mathematical Universe, thinking of the whole universe as a simulation, basically, and we had you on the simulation panel last year. Life 3.0 — what’s that about? My day job right now is working at MIT doing AI research from a physics perspective, so I like to take a step back and look at things… A cosmic perspective. Yeah.
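A toy illustration of the self-play idea behind AlphaGo Zero that Giannandrea describes: start with no human games, play against yourself many times, and nudge the policy toward whatever won. The coin-flip “game,” the two move styles, and the multiplicative update below are invented stand-ins for the real neural network and Go.

```python
# A minimal sketch of learning from self-play alone: no human examples,
# just reinforcement of choices that led to wins. Everything here (the
# random "policy", the made-up game odds) is hypothetical.

import random

policy = {"aggressive": 1.0, "defensive": 1.0}  # move-style weights

def play_self(policy):
    # Toy "game": sample a style for each side; in this invented
    # environment, aggressive beats defensive 60% of the time.
    styles = list(policy)
    weights = [policy[s] for s in styles]
    me = random.choices(styles, weights)[0]
    opponent = random.choices(styles, weights)[0]
    if me == opponent:
        return me, random.random() < 0.5
    return me, (me == "aggressive") == (random.random() < 0.6)

for _ in range(5000):
    style, won = play_self(policy)
    policy[style] *= 1.01 if won else 0.99  # reinforce winning choices

print(policy)  # the "aggressive" weight climbs purely from self-play feedback
```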
And if you look at it beyond the next election cycle and all these near-term AI controversies — AI and jobs and stuff like that — then it’s pretty natural to ask, well, what happens next? What happens if all these folks succeed and ultimately make machines that can do everything we can? The earliest life that came along, I call it 1.0, because it was really dumb stuff, like bacteria that couldn’t learn anything in its lifetime. And then I call us 2.0, because we can learn. You’re referring to the evolutionary achievements in the tree of life. Yeah. OK. And what comes next? I think we should think about this, because if the only strategy we have is to say, hey, let’s just build machines that can do everything cheaper than us, what could possibly go wrong? — I think that’s just pathetically unambitious and lame. We’re an ambitious species, us apes, and we should aim higher. We should say, how can we use all this technology to empower us, not to overpower us? OK. We’ll hear more of that, I’m sure, as this conversation progresses.

Let me get back to Mike. Could you remind us about the Turing test, what that is? Sure. Alan Turing, back in 1950 — the movie The Imitation Game is a biopic on him, is that right? And it does depict the Turing test a bit. So back in 1950, he proposed this thought experiment, realizing that to get people to understand machines as being able to think would require defining thinking, and that would be very controversial. So he set up this thing that became called the Turing test. That is, see if you could have a machine have a dialogue with somebody and convince them that it’s a person rather than a machine. If a person in an interrogation could not tell the difference between whether they’re speaking with a machine or a person, then you might as well say it’s thinking. This was really audacious in 1950. Think about what machines were like back then — people hadn’t even thought of word processing yet, and they were thinking about AI. That test, I think, has been very useful as a thought experiment, but the field of AI has never really accepted it as the goal of AI or the definition of AI.

But certainly — is that because you’ve evolved past that? We do have machines that sound like they’re not machines but people. So once you hit that goal, you say, oh, we need a better goal. Are you just moving the goalposts? So, we haven’t hit the goal. It turns out Turing didn’t realize that it would be easy to fool a lot of people even without being very good at thinking. There’s a New Yorker cartoon where two dogs are at computers, and one turns to the other and says, the good thing about the internet is that no one knows you’re a dog. That’s right. And no one knows you’re a bot, either. And that is a potential way that AI is going to affect us and be ubiquitous. So it is quite relevant to try to impersonate people; we use that as a gateway to a lot of internet activities. You do a CAPTCHA — that’s a Completely Automated Public Turing test to tell something, I forget the exact acronym — where you have to convince the machine that you’re a human. So find something that only humans can do, and of course that bar keeps moving all the time. So it’s quite relevant to try to impersonate: the Alexas and the Siris of the world are trying to be as human-like as possible, and in films and video games we try to put in realistic characters all the time.
So it still speaks to us, even though it’s not the whole story about AI. So your point is, we did so well at satisfying the Turing test early on that it just wasn’t a good enough discriminator for the AI people were seeking. Well, I guess I would say that being like a human is only one way to be intelligent, and you could be superhuman in many other ways. And you don’t stop when you reach human-level performance in particular tasks, because the goal is not to be like a human. The goal is to make ideally rational intelligence that can do all sorts of things.

So Helen, with a company you co-founded called iRobot, could you tell us about the three laws of robotics by Isaac Asimov? Yeah, definitely. The three laws: one, robots cannot hurt humans, or through inaction allow humans to come to harm. That was one. So robots cannot harm you, and their inaction also cannot harm you. Two, they have to obey orders, unless that conflicts with number one. And the third one is they cannot harm themselves, unless that conflicts with number one or number two. And there’s one he added later on, the zeroth one, which is that robots cannot cause harm to humanity, or through inaction allow humanity to come to harm. So it generalizes it up from the individual. Oh yeah. Well, he made that the zeroth law, so he stuck it in the front. Okay.

But what’s amazing about it is that he started writing the I, Robot stories in 1940. Practical transistors weren’t invented until 1947. So, I mean, one of the reasons we’re all so honored to be here at the Asimov Memorial Debate is — I think I can speak for the panel — we’re all huge, huge fans of what he was writing about, especially way back then. Well, just consider that he wrote on topics quite diverse. So no matter what subject we have here, there are books that he wrote about it — every panel we’ve ever had, on any subject. Yeah, he’s a really good one for that. I read Asimov when I was a kid.

So that said, people ask me, are you putting those laws in the robots? And the short answer is: they’re great, but a little bit more tricky to program. So unfortunately, the state of technology is not ready for those types of abstract rules yet. But they’re nice guidance — philosophical guidance, I guess, rather than a practical view. I think the laws, if you stated them now, might be: robots can save people. They have saved people, and they could save a heck of a lot more people. Well, plus the military would not be obeying those laws. Yeah, exactly, exactly. And in reality — because I’m a businesswoman as well as a robot lover — Don’t say robot lover. Just find some other phrase. Robot — I’m a robot enthusiast. There you go. Thank you. Thank you. Robots are not going to hurt people. They’re not going to hurt themselves. They’re not going to do these things, because otherwise they’re going to be scrapped, they’re going to be sent back, or someone’s going to be sued. So from a business standpoint, the robots are going to be safe to operate.

Oh, one of my favorites — a video that I found amusing was a cat riding around on a Roomba. You know, that got so many views, and I have no idea why. I mean, it’s like 10 million views, right? It’s crazy. I mean, if the Roomba were big enough for me to sit on, I would do that. That’s quite a picture.
That was not in our brainstorming sessions when we thought about all the applications for robots. So, Ruchir, could you get us from Deep Blue to Watson? What happened in that transition? And can we remind people why we all know about Watson — there was the big contest that you guys entered it in. So certainly, let me pick up the thread from the chess and the Go, and let’s continue. Okay, continue. By the way, Deep Blue beat Kasparov when Google had 10 employees. Okay, so just, like, where were you? All right. Okay, got you on that one.

Okay, so take us there. So the journey continues from the chess game that beat Kasparov — and obviously, Kasparov was the world champion at the time. At the time. — to, okay, what is next? And natural language, which is so fundamental to humans, and the intricacies of natural language. If there is one fundamental trait that humanity has, it is the proliferation of language, the advent of language itself. So we decided that would be the next leap we were going to make. And there is no game better than Jeopardy that captures those intricacies. So we posed that as a grand challenge. Jeopardy — not only language, but culture. But culture, right? Yes, it is certainly not a calculation in the traditional sense anymore. And the way the questions are posed is so nuanced that you really are dealing at that point not just with a calculation machine and simple evaluation criteria and search algorithms and parallel computing, but with really understanding language, question answering, and the way we interact as human beings. So that was really the advent of the next challenge. Because once we are able to solve that, the implications are phenomenal in terms of the benefit it can bring to us as a society, which is where we took it next. The first thing we started right after Jeopardy was the application of that technology to the health domain, which is so fundamental to all of us. So right from the chess game, the next challenge was really addressing the fundamentals of what defines us as humans in terms of communication, addressing those intricacies, and then applications of that. To serve needs in actual society. Absolutely.

So Watson, in principle, can become the best doctor ever, because Watson can read all the research papers and then interpret symptoms in the context of what is known worldwide, rather than just what one doctor happened to learn. Absolutely. And at least the way we think about it, it’s really not about whether it becomes the best doctor. As we all know, no single physician has the time — even if they certainly have the intelligence — to figure all of this out. And as Max was saying, it’s really about empowering professionals rather than overpowering them. Watson is about empowering society, as opposed to overpowering it. And that’s why I really think about it as bringing capabilities whereby, yes, it can read millions of studies and millions of trials that may be going on.
And there are some well-publicized cases as well, where it actually saved patients — in North Carolina, or Tokyo, or a study that was published more recently in India as well. But from our point of view, it’s really about bringing the technology together with the human beings — what we call augmented intelligence.

So in all fairness to our understanding of this: Watson only knows what is available on the internet, correct? Watson only knows what is actually fed to it, whether that is available on the internet or it is private information. So how does Watson know what is fake news or not? Can you make a super machine if it cannot distinguish the two? Well, apparently humans can’t either. But in principle, we, educated, can make a judgment. Will Watson be in a position to make that judgment? I think, at least regarding fake news, we are all pushing the boundaries of that technology, and yes, the machines need to be trained, and they can really help us, given what has gone on in the last couple of years. Once you bring that technology to bear — once you realize there is a problem — you can actually correct for it. So it’s not about whether Watson can distinguish it today or not. Once you realize the problem, you can start working on technologies that can decipher that much better, thereby helping us as a society.

So, from what you’ve described, Watson would still be shy of this holy grail of just thinking stuff up on its own. Without reference to — I mean, when you think of the most creative people there ever were, sure, there is some foundation from which you could trace that creativity. But for many of them, there is a spark, and something new comes out of them that had no precedent. So from what you describe, Watson is capable of digesting pre-existing knowledge, but in its current state, or at least the state with which you’re familiar, it is not inventing something new. Certainly, the purpose of the technology today is really not that spark in itself, although it will find insights that you didn’t know existed. They were hidden in there; you didn’t know they existed. So it may be an aha moment for you — I got it! — but still, it already existed there. So it will actually do that, but yeah, that notion you’re describing, that spark — no, it doesn’t have that.

So, John, tell me about the future. We could spend a whole panel on this, but I just want to put it on the table briefly. What is the role of AI in the future of autonomous cars? I know you guys are working on this — there’s a division of Alphabet that works on this. Just to be clear, the holding company is Alphabet, and Google is one of several companies under Alphabet, and one of those companies is tasked with making the autonomous car. So, it’s a super hard problem. People have been working on it seriously for more than a decade, and they’re making progress. These cars have driven millions of miles with very small numbers of incidents, but they’re still pretty constrained. They’re more accurate than a human driver, but they’re limited in where they can go — for example, the kinds of streets that they can drive on, the cities, and so on and so forth. But the technology is progressing fairly dramatically.
I’m pretty confident to say that we will have fully autonomous cars from most of the large car manufacturers within a decade. And what role does AI play in that? Or is it just really good programming? Well, it’s machine learning. These systems have a lot of computers on the car that can detect a stop sign, or figure out that there’s an impediment in the road, or a kid just ran into the road, or there’s a cyclist. In California, we have this weird thing where motorcycles are allowed to drive between the lanes of the cars, and for the computer to actually understand what’s going on there and figure out what’s safe and what’s not safe is actually quite hard. I think one of the things that’s going to happen here is that even if you don’t see millions of autonomous cars in, like, three years, most of the new cars that you buy will have semi-autonomous features in them, like automatic braking or warning us. Which we’re all accustomed to and expect on the next car. So I think this technology comes in increments; it’s not like a big-bang thing.

And I’ll just echo this comment about augmentation, because the phrase AI means so many different things to so many different people that it’s really hard to pin down what it is. But the idea of augmented intelligence has been around for a very long time. A lot of the ideas we have in computing today came from the work of Doug Engelbart back in the ’50s, and he had been describing computers as a tool — a tool that can help the doctor look through more information, that can help pinpoint something in an X-ray, not something that would replace the doctor. And that’s how we think of it. Which is Max’s point. What were the two words you put together? Oh, empower versus overpower. Yeah, very good. I like that.

Could you describe for us the ascent from AI to general AI? Because we hear this term general AI. What’s going on there? What have we been talking about so far, and if it’s not general AI, what is? It’s really important to be clear on what we mean by intelligence. As you mentioned correctly, John, different people mean different things. I think it’s a really good idea to follow in the footsteps of Helen here and make a very broad definition of intelligence — so even the Roomba is intelligent — and just define intelligence simply as the ability to accomplish goals. So the Roomba has very narrow intelligence: really good at vacuum cleaning. Was that a diss on — I am a proud Roomba owner. And the Roomba can carry cats around. For all we know, the Roomba is like the Uber for cats in the house. Wouldn’t that be cool, if cats could get the Roomba to come and take them around? Get the Roomba to open a door for them? Yeah. That’s right.

So today, if you define intelligence as the ability to accomplish complex goals, then there are many areas where machines in narrow domains are already much better than us — not just vacuum cleaning, but high-frequency trading and multiplying large numbers together and stuff like that, and also now playing chess and playing Go and so on. But no machine today — no single machine, not even the whole internet combined — has the broad intelligence of a human child, who, given enough time, can get quite good at almost anything.
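Circling back to the semi-autonomous features Giannandrea mentioned, such as automatic braking: at its core, such a feature couples perception with a simple safety rule. A minimal sketch follows, with invented labels and thresholds — real systems use trained vision models and far more elaborate logic.

```python
# A minimal, hypothetical sketch of an automatic-braking decision:
# a perception module (stubbed here) reports detections, and a rule
# brakes when time-to-collision with a critical object is too short.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str               # e.g. "pedestrian", "cyclist", "stop_sign"
    distance_m: float        # how far away the object is
    closing_speed_mps: float # how fast we are approaching it

def time_to_collision(d: Detection) -> float:
    if d.closing_speed_mps <= 0:
        return float("inf")  # not getting closer: no collision risk
    return d.distance_m / d.closing_speed_mps

def should_brake(detections, threshold_s=2.0) -> bool:
    # Brake if anything we must not hit is within the time threshold.
    critical = {"pedestrian", "cyclist", "vehicle"}
    return any(d.label in critical and time_to_collision(d) < threshold_s
               for d in detections)

scene = [Detection("cyclist", distance_m=12.0, closing_speed_mps=8.0)]
print(should_brake(scene))  # True: 1.5 s to collision is under the threshold
```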
So this is what’s meant by artificial general intelligence, or the acronym AGI, which has been the holy grail of artificial intelligence ever since Marvin Minsky and John McCarthy and the others who founded the field back in the 1950s.

Now, Helen, you come to this from a product, a consumer-product point of view, and I want to get back to what you just said. People who are making AI want to sell something. So they’ll sell you something that cleans the room, that drives the car, that does any one of the things that help our lives. Who’s going to buy something that has general intelligence? And will the general intelligence be as good at the pieces of it as the specific products that industry would be making for that one need that you have? Oh yeah, by definition. So if people say they think there will always be jobs left for humans that machines can’t do, they’re just saying by definition that AI researchers will fail to build artificial general intelligence — because that’s the very definition of it, that the machines can do everything better than us. I’d like to point out something about this. Yeah? I mean, there’s a mechanical and a sensing component, as well as what you’re calling AGI — you need the mechanical and sensing elements to make these machines better. Sure. You can have the software, but if it doesn’t have the physical means to enact what it’s supposed to, it’s just a box. No, no, it can do some great stuff. Like, you can feed it a photograph and it can tell you whether there’s a bus or a cat in it, or something like that, right? But it’s not going to go out and sweep your place. But I think the final word on definitions should go to Shane Legg, one of the leaders of Google DeepMind, because he coined the phrase. And he simply meant something that can do the same information processing that the human brain can do. And if you hook it up to good enough robots, which I’m sure you can build, then it can do great stuff. So that’s the goal of certain companies, like Google DeepMind, for example — to try to build that. And that’s why they keep trying to push the envelope, right?

But I’ve got to go to my three industry people. What does it mean to buy something that has general AI? What do I do with that? Do I say, make me the best cup of coffee, drive me to my office, what’s the square root of two? I mean, in practice, is that a thing? So in principle — and this is highly speculative — an AGI could build any other kind of AGI, and therefore could build you any machine you wanted to build. And that’s what people worry about. That’s when we all die. That’s when a class of people who call themselves transhumanists would say that humans would evolve. I personally don’t believe in this — I see no evidence that it’s going to happen — but that’s the source of a lot of the ethical discussions about this topic.

Speaking of ethics, could you tell us about trolleyology, and what role AI can play in assisting our reasoning there? So, probably many of you have heard about trolley problems. This became popular in psychology as a way to pose ethical dilemmas to people and see how they react. There are many variations of it, but the standard story is: a trolley is going along a track, and it’s about to hit and kill three people. And then you notice that there’s a switch, and you could make it go over to another track where there’s only one person.
And you could choose to kill that other person instead of the first three. Would you do it? So the dilemma there is, somebody’s going to die no matter what. You can either not touch it, and the trolley kills three people on its own, or you can intervene and actively kill one person. Right. I’m not a psychologist, but I think it’s a silly question to ask people, because humans can never really get into a mental state where they believe with certainty that if I take this action, I’ll kill this one person for sure, and if I take the other action, something else happens for sure. There’s always uncertainty; there are always questions about who would be to blame. It’s not actually a realistic situation.

So the question is, is it more realistic for AI, perhaps? Could an autonomous vehicle be in a situation where all of a sudden a bicyclist runs in front of it, and it has a chance to swerve and do some other damage, and it will have to weigh that? It would have to take out the vegetable cart instead, and then find out what else that does. So will the solutions to those dilemmas have to be coded into them, for when it does happen? That implies that humans get together, figure out a solution, and hand it to the AI. That’s not the point of AI. The AI is going to have some higher intelligence than we do, and that’s why I’m curious. I think AI applied to that problem is going to give different answers than we would, and then we’d say, oh my gosh, we never thought about it that way — let’s do it that way. Actually, no. The idea is that we want the humans to give the AI the values, and the AI is concerned with making decisions and taking actions to promote those values. So ultimately, we are saying we value life — that’s part of what the robot laws are for. Oh, now you like the robot laws. They’re science fiction. No, no, because the danger is that they would be weaponized by the party that is programming them and controlling them — not that they’re going to all of a sudden decide to get rid of the humans. That’s not the source of the danger.

With respect to the trolley-problem situation in the case of the autonomous vehicle: when it does happen that one of these cars runs over a bicyclist, it will happen, I think, much less frequently than humans do it today. And we will take the black box — I hope the engineers give it a black box that captures everything that was in its sensors the whole time, and is very secure so it can’t lie about it — and we will be able to examine it and say, you made this decision; why did you do that? It might say, well, I hit the bicycle because if I swerved to the left, I would have run over a child. Or if it said, well, I did that because if I swerved too fast, I’d wake up the passenger — then you’d say, no, that was the wrong decision; that was not what we meant for you to do. It’s still better than what the Tesla driver said: don’t wake me up for any reason. That’s right — it’s the robot’s job to obey me. This is part of the nature of AI: the unintended consequences of the specification of the values won’t match what you really care about.

Let me ask Google and IBM here. In your efforts — I don’t want to call it a race, but let’s call it an exploration — is there, in tandem, a sort of ethics group? Let me start over here with IBM. Is anyone thinking about the ethics of what AI would do if you achieved this goal?
Because we certainly have sci-fi movies, and in none of them does it end well. Any of them. So, certainly, we were one of the first companies to actually bring principles of ethics and responsibility to AI. It’s captured in bold ways in what we do overall with the information we have. But most importantly, there are three fundamental tenets we go by as it pertains to AI. The first one is building AI with responsibility. The second one is building AI that’s unbiased. The third one is building AI that’s explainable. Those are the fundamental tenets we drive and strive toward. In our research teams, we have a significant number of people and scientists and efforts that try to drive the AI services that we offer, and the solutions that we build with a tremendous number of businesses, with those three principles. And obviously — I think we all know the way AI techniques work these days — they are driven a lot by the data. As we were just discussing, you are only as good as the data that you are fed. And detecting bias in the data itself is actually one of the more important research and technical challenges — and so is having techniques that are able to de-bias that data, so that when you know there is bias in the data, you can de-bias it and build models that are actually unbiased. So that’s why I said there are three fundamental principles that we go with; it’s very formally ingrained in the principles through which we are driving AI.

Speaking of bias, John: if I remember correctly, there were some fascinating studies recently where Google facial-recognition software was not as good at identifying black people as it was white people, and then they found out that mostly white people had programmed it. So maybe that’s just kind of obvious at that point. But that would, I think, count as a bias. I was actually at lunch with one of the authors of that paper today. They haven’t actually measured our systems; they measured other people’s systems. But it’s a serious issue. So it wasn’t your facial recognition? It wasn’t ours. But this issue of bias in machine learning is super important. So sorry to have implied that. No, it’s okay. So we think that this is, at least for the next few years, the most serious ethical issue. I think this AGI stuff is years, decades away, so I don’t spend very much time on that. But this question of, if you’re building machine learning systems, learning from data — if your data is biased, you’re going to build a biased system. And this could be everything from whether to give somebody a mortgage, or what their credit-score prediction would be, or — there are people selling systems now that are used by courts to predict recidivism rates. And they’re not explainable, and it’s not entirely clear what data they used to train them. And we think this is just unethical. So it’s garbage in, garbage out. Yeah. And we know that one was very biased. Yeah. So many of our companies work together, outside of the commercial realm, with academia, but also in nonprofits looking at this question, because we’re really worried about building systems that give a bad name to all this machine learning. So, in all of your efforts, how would you characterize the people working on the ethical dimension of what’s going on? Are they philosophers, are they psychologists? What are they?
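As an aside, here is a minimal sketch of the kind of data-bias check Puri and Giannandrea are describing: before training, compare label rates across groups in the training data; a large gap is the sort of thing a de-biasing step would then try to shrink. The records below are invented for illustration.

```python
# A minimal, hypothetical sketch of checking training data for group bias:
# measure the rate of positive labels (e.g. "loan approved") per group.
# Real bias audits use many more metrics; this shows only the idea.

from collections import defaultdict

records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

def positive_rates(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r["group"]][0] += r["label"]
        counts[r["group"]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

rates = positive_rates(records)
print(rates)                                      # ~{'A': 0.67, 'B': 0.33}
print(max(rates.values()) - min(rates.values()))  # the gap de-biasing would shrink
```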
No, they’re usually data scientists and researchers who are looking for systemic bias in the systems and in the data that we’re using to train the systems. Okay, so I get the bias part. But how about the trolley-car part — will the AI have the values we care about, so that it will properly serve us? If the AI achieves consciousness and comes up with values of its own? I mean, our company has very few situations — autonomous vehicles would be one — where we have to actually struggle with these issues. Mostly we’re worried about recommendation systems giving bad recommendations to people, or ranking systems giving bad results to questions that you asked. But this is moving fast as a field, and I think academia now has entire classes on AI ethics and machine learning ethics. And I think society is responding in an appropriate way, because we’re worried about this stuff.

So, Max, you’re president of the… Future of Life Institute. Future of Life Institute — that sounds very new age, by the way. We’re all for the future of life. Not a controversial position. You would think. Put that on Twitter, and people would argue with it for sure. So, could you tell me the difference between an “is” and an “ought,” philosophically, and how that matters in AI? Yeah. Was it Hume who did this? One of the philosophers, yeah. It basically comes down to saying that “might makes right” is a really lousy foundation for morality. Just because something is a certain way doesn’t mean that’s the right way. And just because something is going to happen by default if we don’t pay attention doesn’t mean that’s what we really want to happen. You know, I’m very optimistic that we can use AI to help life flourish like never before — if we win the race between the growing power of AI that we’re seeing and the growing wisdom that we need to manage it. And there, I feel we’re kind of a little bit asleep at the stick. Sorry — I don’t want any AI person to say we’re asleep at any time. But I have to pick on you, John, a little bit. You said, well, you know, I think this AGI stuff is kind of decades away, so I’m not thinking about it much. But I bet you wouldn’t say, I think this climate change stuff is a few decades away, so I’m not thinking about it. You look young and healthy, you’re working out, taking your vitamins — you’re going to be around then, right? And if it’s going to take a few decades to get this right, I feel it’s really important right now to think about it enough that we can steer things well. I totally agree. I don’t spend very much time at Google with researchers on this task, but we do invest in groups around the world, at Oxford and Berkman and places that are looking at this stuff. And we’re a member of the Partnership on AI. It’s not that we’re abdicating responsibility; it’s that we just have no idea what the timeline is. We do know what the timeline is for global warming. Yeah. If anyone knows the timeline of this, it would be you, presumably. Well, I think we do know quite a bit about the timeline. First, we know that it’s a great controversy. Your co-founder Rodney Brooks told me in person he thinks DeepMind’s quest for AGI is going to fail for at least 300 years.
But most researchers in recent surveys think it’s actually going to succeed — maybe in 40 years, maybe in 30 years. So that, to me, means it’s not too soon to start thinking hard about what we can do now that will be helpful. But I want to get back to the point of the things that are and the things that ought to be. Yeah. Do you trust AI to judge what ought to be? No. Okay, good. And how do you imbue what ought to be in an AI, if an AI is a higher level of consciousness and capacity than we are? Maybe it knows better than we do. People often tell me, if AGI is by definition smarter than us, why don’t we just let it figure out morality, what ought to be? But the fallacy in this, of course, is that artificial intelligence — and technology in general — is not good or evil. It’s morally neutral. It’s a tool that can be used to do good or to do evil. Intelligence itself is simply the ability to accomplish goals, good or bad, right? If Hitler had been more intelligent, I think that would have sucked. So I wouldn’t want to delegate what we should do to something just because it’s smarter. Instead, we should take the attitude that we take when we raise kids. We often raise children who end up being more intelligent than us. We don’t just ignore them for 20 years and hope something good comes out. We really try to inspire them while they’re still young enough that they listen to us a little bit. A little bit. We try to instill in them values that we think are good, and I think this links back to what you were saying.

You’re saying in the next 20 years we still have a chance to teach AGI who and what we are, so that when it achieves consciousness, it will not exterminate us. Well, it’s even harder than raising kids. Or at least keep us around as pets. It’s tough, though — sorry if I get a little nerdy now — because with children, we can’t teach them morality when they’re six months old. No. And with my teenage son it’s now too late; he doesn’t listen to me anymore. But there is this magic window over a few years when they’re smart enough to understand us, and we still have some hope that they’ll adopt our morality. Where is AGI in that? It hasn’t yet reached the point where it understands human values — we can’t explain them to it yet — but it might pretty quickly blow through the window in which it’s smart enough to understand us and we can still influence it. We have to plan this curriculum ahead. And I think it’s really good that you are working on that, for example, so that we don’t wait until the night before someone switches on a superintelligence to figure out, oh, how do we teach it right from wrong? That’s probably too late. Probably too late, yeah. It’s certainly too late if that happens.

So Mike, I’m curious about something. The capital markets — I don’t want to say they rely on this, but a lot of what makes them fluctuate is that different people have different information that they are betting on when they buy and sell stock. So if you make a machine that has access to all information and is perfectly rational, is that machine — or the person who owns that machine — the first trillionaire in the world? So, interestingly, Wall Street trading is one of the first areas where autonomous agents are really out there.
And I think that’s one of the reasons why it’s useful to study the long-term implications of AGI through this case study of what’s happening right now. Right now, lots of firms, not very far from here, are programming computers and putting them out there, using machine learning and using a lot of data — a lot of the same data — to make decisions. So one question is, well, if everyone is using the same data and maybe stumbles on the same algorithms, are there possible effects on the stability of markets if something goes wrong? Could they be more prone to crashes or not? That’s something that we’re studying. And if so, are there things we can do to try to mitigate that? The question you asked about the first trillionaire is: if one group, one firm, one country has an edge in AGI, will they be able to leapfrog everybody else and just suck up all of the resources? That’s actually a significant issue. Financial markets are one place where the money is, and if you really get so much better than everybody else, there could be major shifts in distributions of wealth. And it’s not only financial markets; it could be the internet. You can put smart AIs out there and say, find some way to make money for me, and they will. So you’re saying a country could just corner the market if they get there first? So this is, I think, somewhat uncertain and controversial, but certainly on this longer road to more general, more capable AI, if one entity has a significant edge, they will have a very strong incentive to shut others out and capture the market. There is no doubt there is an arms-race dynamic to many aspects of artificial intelligence technology — perhaps most frightening in the military realm, but it also comes up in the financial realm, and in the fake news realm. We were talking about how AI is going to be better at discovering fake news. Never mind that — it’s going to be much better at promulgating fake news, and that’s going to be a challenge for all of us.

This could go to any one of you. Helen, could you foresee robots or AI in general informing political policy? Because if they can — look at Watson: Watson reads a thousand medical papers and comes up with some conclusions based on them. So you make machines, you make drones, that can make decisions that we can’t, and can make them more quickly and presumably better. So is there a scenario — you have political factions arguing because really their feelings are involved more than facts, and at the end of the day, in an informed democracy, you kind of want facts to matter, I would think. I’m a little bit on the other side of that. We are very far away from this AGI, generalized AI. There’s been wonderful progress that allows AI systems and robots to do more than they could do before in recognition and characterization, but we haven’t made that leap, and it’s going to take an innovation step to get there. So to really worry about that now — I mean, right now the machines are feeding information into the system, and humans are making the judgment. Now, I believe that day will come, but it’s unpredictable, because there are many innovation steps that have to happen before that day comes. And because it’s an innovation, you can’t order up an innovation. Yeah, you don’t know when it’s going to happen. Hopefully some of the younger people in the audience will make those innovations, because I think we should have it happen.
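Returning for a moment to the market-stability concern Wellman raised: if many trading agents learn from the same data, their sell decisions can synchronize. A toy simulation of that effect — all numbers invented for illustration — compares the worst single-step drawdown when agents share one signal versus when each has its own.

```python
# A toy, hypothetical model of correlated trading agents: every agent
# sells when its signal drops below a threshold. With a shared signal,
# all sales land on the same step; with independent signals they spread out.

import random

def worst_single_step_drop(n_agents=100, shared_signal=True, steps=200):
    worst = 0.0
    for _ in range(steps):
        common = random.gauss(0, 1)
        sells = sum(
            1 for _ in range(n_agents)
            if (common if shared_signal else random.gauss(0, 1)) < -1.0
        )
        worst = max(worst, 0.002 * sells)  # invented linear price impact
    return worst

random.seed(0)
print(worst_single_step_drop(shared_signal=True))   # ~0.2: everyone sells at once
print(worst_single_step_drop(shared_signal=False))  # ~0.05: selling never synchronizes
```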
So, Ruchir, it just seems to me that Watson might be uniquely qualified to come up with a political policy decision — if it reads every consequence of every political decision that’s ever been made, looks at what became of it, looks at how people reacted, looks at what people wanted, and then just says, you should do this. So there should maybe be a machine on the floor of Congress, and people come up to it and ask it. Right? It would be like the Oracle of Congress. It could be Watson, right? Let’s check: I’m arguing in the dining room with my political colleague from across the aisle, and we say, let’s go check Watson. Are you telling me the poster is Watson 2020? And the running mate, AlphaGo 2020. Yes.

So, first of all, let’s take precisely the question you asked: could AI be helping public policy? And to that I’ll answer, absolutely yes. It could be helping public policy as it pertains to decisions within the country, whether it is taxation or other scenarios. Absolutely yes — and it already is, actually. So I would not just say it should be helping; it already is helping. Now, the question really on the table is, have we reached a scenario where there is this Oracle that knows everything? And no, we have not reached that scenario yet. The reason I’m saying that is because it’s really about domains that you specialize in, and the information that is fed in those domains. Just as an example, we are working toward regulatory compliance, and yes, we can feed information to the machine, and it learns, and it’s going to find insights — for example, obligations that a particular entity may have. But I think by Oracle, everybody understands it to be a know-all that knows everything and reacts to everything, and we have not reached that point, nor is the intention to reach that point. The point is really to be precise in scenarios that are going to help society, whether it is in the healthcare domain, or the public policy domain, or the compliance domain. That’s where a lot of the benefit to society is going to come from. At least as an engineer and scientist, I would say: let’s be precise, let’s define the problem and solve the problem in a domain, and then we make progress from there — just like what we did in the scenario we looked at. Chess; then we defined the next problem, the next level up, in terms of language; you solve that problem, and you move on from there.

Maybe a question: if human-level intelligence might be hard, what about Congress-level intelligence? But I think that’s not really fair, I mean. As the saying goes, if pro is the opposite of con, then what’s the opposite of progress? Right? We’ve all heard that one; it goes way back. But I think it’s true that once we agree on the values, then AI can be a great help in sorting out the policy questions. And of course, it’s not that Congress is not intelligent — it’s all about fighting about the values and the priorities, and that problem doesn’t go away when you have AI.

Helen, can you foresee a future where robots get angry with people? I think that we can put in simulated emotions to help with decision-making, and you can also use them for a more natural interaction with people, to respond how a human would respond.
But not in the way that you might think of a person as being angry — at least not until some of these other innovations come out. The reason why I ask is, there’s a video compilation of all the occasions where they abuse their own robots. They have robots that are walking, and then they just kick them. And it’s interesting, because you can tell a lot about a person by how they treat a robot. Well, that’s my point. These are robots you almost kind of feel for, because some of them are sort of humanoid rather than non-humanoid, and the early ones would just sort of fall over — and I get it, they’re trying to increase the stability of these machines. So now they poke them and press them, and the robot rebalances and comes back. They get lots of complaints about it. I know. I think there is something going on; you hit the nail on the head. I think all those robots will have that in memory when they first achieve consciousness.

There have been studies: people name them, they get attached to them. Our military robots too. When we put them out in the field, we had big, big Marines come into a robot hospital saying, can you fix it? — and it’s all blown up. And he didn’t want any other robot; he wanted that one, because it had gone on missions with him. It had done like 18 missions. The big, tough military guys — but because they’re working with the robot, and the only things they’ve experienced that have these kinds of behaviors are animals, it’s like — it’s not quite anthropomorphizing. I think there could be another word, something sentient-like — sentio-morphizing, maybe; we’ll make up a word. I love that word.

So you’re saying military personnel who had been served by a robot — the robot blows up because it found the mine — and then they take the pieces and go to the robot doctor and say, can you fix him? You can have another one. I want this one. Because its name was Scooby-Doo, and it had saved, you know, 11 guys on one mission, right? And there have been reports of service members giving the robots burials. Yeah — and viewing the robots as having personalities, saying this one’s tough, this one’s a little bit wimpy. I’ve had people tell me that they’re sure the Roomba moved a certain way on purpose so it could escape. I can assure you it didn’t think about it; it really did it accidentally. But it’s that sentio-morphizing that people automatically do, and it’s wonderful. It’s kind of cool, right? If you bury a hunk of metal, microbes won’t eat it; it’ll still just be metal later on. We saved Scooby-Doo — we brought him back, and he’s at iRobot, I’ll give it that credit. Yeah, yeah.

So I want to sort of land this plane, but I want to do it carefully, because there are still some really important pieces of this conversation we have not addressed — because you all have been kind of, I don’t know, shy of this threshold that I want to take each one of you to. Let me lead up to it. I have a calculator on my hip, and it calculates better than any human who ever lived. So in a sense, it’s a superhuman property that it contains, that we built. You can go down the list of computer-run things that do them better than the best human ever could have or ever will. And that list is growing.
Autonomous cars will be among them: they will drive a car better, faster, and with more control than any human who ever lived. So as these accumulate, it doesn't seem to me a stretch to ask whether, if general AI achieved some kind of conscious state — whatever that is, however we define it — that consciousness would be a superhuman consciousness. Is that why you're shaking your head, Mike?

I'm shaking my head because having more smart tools that are superhuman at very narrow things — calculating, driving, diagnosing cancer — is not the same thing as having a consciousness, as having AGI. We've had more and more tools for the last 200 years. That calculator you're talking about, you didn't have 50 years ago, and it doesn't make us less human. It frees us up to do more things. I remember when my daughter was in school, they wouldn't let her use a calculator to do homework, which with 20 years of hindsight seems absurd, right? But just because you have these tools—

That's not what I'm asking. But you're making the leap: you're saying that if you have more of these tools, then you'll have AGI, and I disagree with that. No, no — okay, I can see how you'd think that, but that was not my intent. I'm saying these tools are evidence to me that the day general AI arrives, there is some decision-making power it will have that will be superhuman, because everything else we created using computers, and put a lot of thought behind, became superhuman in that way. Is it unfair to imagine, for the safety of us all, whether general AI would have superhuman consciousness?

I think it's very likely. I think we humans are stuck on the idea that we are the pinnacle of how smart it's possible to be. We have a long tradition of lack of humility, right? But let's face it: our intelligence is fundamentally limited by mommy's birth-canal width. Explain that, please. Our brains are pretty cool, but there's nothing special about that level of intelligence, just to be clear. We could have had bigger brains, but we would have killed our mothers in every birth. So we have basically the biggest possible brain that can be born without killing the mother. That's what limited how big our brains could get. Yeah, it's already an issue — you just want to get the damn head out of there.

But you're comparing two different things here, right? You're comparing one person's brain size with the sum total of humanity. There are seven or eight billion of us, we communicate with language, we hopefully cooperate. That is way more powerful than a single AGI. Sure — I don't think we necessarily disagree. But I'm saying that once we figure out how to make AGI — suppose that happens in 35 years — there's no reason to think it's going to stop there, like in all those lousy Hollywood movies where we have all these robots that are roughly as smart as us and that's it, and we just become buddies with them and go drink beer with them. It's very likely they will just continue dramatically getting better, and they can then start developing even better robots, and those will be as much better at everything as computers today are at multiplying large numbers. That's the foundation of my inquiry, Mike.
Where are you on this? I'm with Max. I see no boundary, no reason that wouldn't occur; the timing is very uncertain. And I think this uncertainty is also part of the equation we now have about being prepared for it, because it could happen faster than we think, it could happen slower than we think, there could be obstacles that push it really far out — we just don't really know. And you know, it's true you can put a lot of brains together, but we have very minimal communication channels between us — this linear speech we're doing — compared to what computers do when you link them up and have them talk. They can do just so much more. So I think they're already superintelligent in many ways, not just your calculator. Everything we do — it never stops. The trading algorithms I talked about don't at all stop at what human traders can do.

I believe we are machines — we're made of biological components — so I think we will eventually be able to duplicate and improve upon ourselves. But the problem is when you discount timing entirely and what actually has to be done: this current bag of tricks is not going to get us there. There are core inventions that have to happen, potentially different hardware than what we run machine learning on today. There's a lot of stuff that has to happen if you want them to be mobile, to have better senses, better mechanics, as well as all the AI. So you say, why shouldn't we worry about it now, when it's not very close? In 2000, Bill Joy started writing about threats to humanity, and one of them was robots. I started getting calls from, like, the Wall Street Journal — everywhere — asking, "What kind of robots are you making?" I couldn't say it then, because we hadn't launched the Roomba yet, but we were making a robot vacuum. "Yeah, but what else does it do?" It gets people focused on the wrong things, rather than on what these new AI achievements actually are, because they think it's becoming general AI — and it's really not yet. Many of us on this stage would like it to be, but it's not.

Mike, let me ask this. My deep skepticism that this will go the way people imagine, especially in the movies, is that we don't really understand consciousness in humans right now. So it's not obvious to me that we can just assert by fiat that a smart enough computer will achieve consciousness, when we don't even understand it within ourselves. And there was an interesting bit—
—in the movie I, Robot — I don't remember if it was captured in the book itself by Isaac Asimov — where they noted that, because they didn't replace old code with new code every time they upgraded the robots, every generation of robot had this baggage of software just dangling there. Kind of like our brains, with leftover wiring from long before we became human: evolution doesn't swap parts of the brain out and make them fresh, it builds around them, and we have to deal with that behaviorally — our primal nature has to be overcome by later brain revelations we got from natural selection. My point is, in that film they asserted that this dangling extra software made the robots do things the latest, intended software did not intend, so that in a way it was almost like a free will emerging in them: the robots would do things, and it was programmed in that leftover wiring from 20 years before. I don't know what I just did there.

So, evidence that we don't understand consciousness: you go to the bookstore and there are shelves upon shelves of books on consciousness. That's evidence that we don't understand it, because people are still writing books about it. Go to the shelf and ask for books on gravity — there are like two; we've got this one figured out. So where does it come from, that people just declare general AI will have consciousness?

I don't understand consciousness either, but I also don't think it necessarily has to be part of this discussion. When you have an AI that is superintelligent in every way, that can do any job as well as any person, in every capacity — whether it has whatever we think of as consciousness, that same illusion of free will, that way of thinking about itself, seems maybe beside the point. We're still faced with the issue of dealing with entities like that, whether or not we agree on the consciousness question. Yeah, and I can add that whether it's conscious or not doesn't have to affect at all how it treats us. Maybe it should affect how we treat it, from an ethical perspective — maybe we should all come up with our three laws: a robot should not harm a human, and a human should not harm a robot. But we should also remember this famous quote of Upton Sinclair, who said it's very hard to get a person to understand something when his or her salary depends on not understanding it. And I find — no offense to the three of you here who are from companies — that every time there's a debate like this that I'm in, it's always the academics saying "yeah, this might happen" and the people from the companies saying "everything will be fine." I would love to ask you the same questions over beer, when there's no camera. That's why I flanked the three of you — very much on purpose.

Before we open the floor to questions, in just one moment, let me get some summative reflections. Let's start down here: should we fear AI, and if so, on what level? Keep it short. It's like asking, should we fear fire? Or should we love it? AI is an incredibly powerful tool, and it's either going to be the best thing ever to happen to humanity or the worst thing ever.
It could be the worst thing ever, but we shouldn't stress? That is the definition of stress. What I meant is that the interesting thing isn't to quibble about how stressed you should be. The interesting question is what we should do today to maximize the chances that this will be awesome. Because if we really work hard for this, I really do think AI can help us crack all the toughest challenges facing us today and tomorrow, and create a really inspiring future. But we're going to have to work for it. It's not going to just happen if we're asleep at the wheel.

John? So my problem with this question is that we didn't, in this whole hour, define what we mean by AI. There are some very smart people who think AGI is inevitable and that it has ethical implications, and so on and so forth. My beef with that is that there are lots of technical reasons to believe it's not inevitable. I agree with Helen: we just have no idea what breakthrough after breakthrough after breakthrough would be required to go from the kind of practical AI we have today to the kind of AI we're conjecturing about here. I'll give you one example: small children can learn from small numbers of examples; today we have to give computers hundreds of thousands or millions of examples. A child who learns to play chess can also play tic-tac-toe; our Go program can't play tic-tac-toe unless we program it to do so. So there are these huge barriers to generality of intelligence, and as a technologist, an engineer, somebody working in the industry, I see no evidence of this stuff imminently happening. That doesn't mean we shouldn't be having the academic, ethical conversation; I just don't see any evidence of it now. The reason that's a problem is that it scares people, and it scares people into thinking that everything with this AI label is scary. So then people think we shouldn't be doing healthcare with AI, or better data science, or decision support, or autonomous vehicles — and yet if we build these systems, they won't have the ethical problem we're conjecturing, and they will do a tremendous amount of great things for humanity. We're conflating the two things, and we're scaring ourselves into not doing what we should be doing, which is saving people's lives.

So it's a cultural, not a rational, barrier you're up against here. Yes. Ruchir? I think AI is an extremely powerful tool, and I do not believe we are anywhere close to the fear-mongering done by some people. The fear exists, and I can understand it: a narrative can certainly be built up to the point where you really start fearing it. I'll give a very good example — picking up on John's thread, I talk about this in the talks I give as well. We have two daughters, and when they were young we had a couple of those alphabet books: A is for Apple, C is for Cat. They were in love with only one book anyway, no matter how bad it was. And you show them a picture of a cat — only one picture of a cat — and you repeat that a few times over several days or a month, and then you show them a picture of a cat they have never seen before, and they say, in their cute voice, "cat." It takes a computer today, roughly speaking, 750 pictures of a cat to recognize that it's a cat.
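A toy illustration of the sample-efficiency gap John and Ruchir are describing: a very simple learner — a nearest-centroid classifier on synthetic two-dimensional data standing in for cat pictures — only approaches its best accuracy as the number of labeled examples grows. The data, model, and numbers here are all hypothetical, chosen just to make the point visible.

```python
# Accuracy of a simple learner as a function of training-set size.
import random
random.seed(0)

def sample(label):  # two noisy clusters standing in for "cat" vs "not cat"
    cx, cy = (1.0, 1.0) if label else (-1.0, -1.0)
    return (cx + random.gauss(0, 1.5), cy + random.gauss(0, 1.5), label)

def centroid_accuracy(n_train, n_test=2000):
    train = [sample(i % 2 == 0) for i in range(n_train)]
    def centroid(lbl):
        pts = [(x, y) for x, y, l in train if l == lbl] or [(0.0, 0.0)]
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    c_pos, c_neg = centroid(True), centroid(False)
    def predict(x, y):  # assign to the nearer class centroid
        d_pos = (x - c_pos[0]) ** 2 + (y - c_pos[1]) ** 2
        d_neg = (x - c_neg[0]) ** 2 + (y - c_neg[1]) ** 2
        return d_pos < d_neg
    test = [sample(i % 2 == 0) for i in range(n_test)]
    return sum(predict(x, y) == l for x, y, l in test) / n_test

for n in (2, 10, 100, 750):
    print(f"{n:4d} training examples -> accuracy {centroid_accuracy(n):.2f}")
```

A child, by contrast, generalizes from one or two examples — which is the gap being pointed at.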
Now, to give you a sense of it: if I had ever shown my daughter 750 pictures of cats when she was less than a year old, she would still be confused today about what a cat was. So we are so far away from whatever we are discussing that I find the question almost humorous. I have encapsulated this as a syndrome I call the cats-and-dogs syndrome. I'll leave it there. So: you should be concerned about technology, and maybe do something about the AI risks that exist now — hacking into AI systems, people using an AI system maliciously, unconscious bias in an AI system — but you really don't need to worry about general AI yet. Yet. Okay, Mike?

So I think it's really important to stay aware of this distinction between the short-term, narrow AIs — which have their own safety, regulatory, and societal concerns — and the long-term, general, superintelligence concerns, which are of a different magnitude, differently real, and probably much further away. But as a scientific field, and certainly as a society, we can think at multiple time scales and make these distinctions all the time. If we're averse to talking about the thing that's over the horizon, we'll lose credibility. If we deny there's a potential problem, that's a way of keeping our heads in the sand. There are things we really should be figuring out way in advance of this potential superintelligence: whether we'll all die, whether our children — whom we care about — will die, and how; and even if they don't, how well they'll live with the superintelligences they'll share the world with. Well, we hope — in a good partnership with them. But I'm just sucking up to the AI already. That's the best you've got? That's the best. I hope our children will be in partnership with AI — I think that's a fair way to sum it up, and I'll stop there.

In defense of Mike here: there is so much more detailed description, in all world religions, of hell than of heaven, right? Because it's always much easier to think of all the ways we can screw up than to think about good outcomes. That's why you're giggling when you try to say what you're hoping for. But that doesn't mean we shouldn't try. It's incredibly important that we change this. You were making fun of Hollywood for never showing us any future that doesn't suck — Blade Runner, whatever. We really need to start thinking about what kind of future with advanced AI would be really inspiring, and this is not something you should just leave to the tech nerds like us up here; this is something everybody should think about. The more clearly we share a vision of what sort of future we want, the more likely we are to get it. Do you detail this in your book, Life 3.0? Do you go there? I talk about it. I try very hard not to give glib answers, because this is really a question we should all discuss. But you know, you don't do good career planning by just listing everything that can go wrong; you envision success. Although — I will only be able to paraphrase this quote from Ray Bradbury, the great science fiction writer. They asked him: why do you keep portraying these dystopic futures? Do you think that's what the future of life will be? And he said: no, I portray these futures so that you know what future not to head towards. That was
Ray Bradbury, ladies and gentlemen. Thank you for your attention this evening; join me in thanking this panel. Let's open up the stage — we'll have about 10 minutes for questions. We have a microphone on each aisle, and if you direct your question to one panelist, that will go faster than asking all five of them to reply. Are we ready? Let's start it off right over here.

Hi, Neil, how you doing? I wanted to get back to artificial intelligence and vehicles, and the more complex scenario — I read a little about this with cars in California — where you have the school bus, the bicycle, the kid, or a hundred-foot cliff, and the AI decides the best thing to do is drive the car off the hundred-foot cliff, because that of course does the least damage — except that it kills you. Is that something that would be learned, or a decision it will make? How can it avoid making that decision, where the human factor might say, at a glance, there's no one in the school bus, the bicycle might be able to make it — as opposed to simple algorithms or decisions: kill the driver, save everyone else? John, why don't you take that.

Well, all of these systems distinguish between the learned part — like a detector for an obstacle — and the policy part, and I think it's very important that the policy part be explicitly planned. Then you end up with all the ethical questions about what you want your policy to be. Ideally, you would just stop the bus, right? Right — you have brakes good enough that you don't have to drive off the cliff. And hopefully you saw the cliff far enough ahead in the first place. So it may be that many of these scenarios you describe are real-life scenarios that human beings, in our frailty, encounter — but the car saw the cliff, it calculated the rate at which the bicycle was entering the street, it knew its braking distances. Maybe it would just be better at this, and we're troubling ourselves over scenarios that are real for humans and highly unlikely for autonomous AI, I would imagine. Thanks.

Over here. There was something discussed several years ago called the singularity — when intelligence gets to the point where human and artificial intelligence sort of blend together. Do you consider this idea of a singularity a possibility? Sure — Mike. So the singularity usually refers to something that's also been called the intelligence explosion: a point of critical mass where something becomes so smart — Max alluded to this before — that it can further self-improve at a rapidly accelerating rate. It's quite controversial whether that phenomenon will happen. It's hard to really rule it out; it's also hard to rule it in. And it's not clear that achieving superintelligence really requires going through this super-accelerating phase. But that's one scenario in which it could happen faster than we realize, and thereby not be a linear extrapolation into the future: if it grows exponentially, what looks small today becomes very large very quickly.
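A quick worked example of Mike's exponential point, with entirely made-up numbers: even a modest multiplier per hypothetical self-improvement cycle compounds into an enormous factor.

```python
# Steady multiplicative self-improvement looks negligible at first,
# then explodes. The growth rate here is arbitrary, for illustration only.
capability = 1.0          # arbitrary starting level
growth_per_cycle = 1.5    # assume each cycle adds 50%

for cycle in range(1, 41):
    capability *= growth_per_cycle
    if cycle % 10 == 0:
        print(f"cycle {cycle:2d}: {capability:,.0f}x the starting level")
# cycle 10: ~58x; cycle 20: ~3,325x; cycle 30: ~191,751x; cycle 40: ~11,057,332x
```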
Do you agree? Yes. But this could well be Google's territory, so we should have Google answer. OK, Google: where are you on this exponential curve? What I would say about this is that of the people who have been marketing this notion that the singularity is inevitable — and there are people who will say that — many of the ones I've talked to actually want it to happen, and I just don't think they're being rational about the likelihood of it happening. That's my personal view. And many of the people who say it's never going to happen don't want it to happen. So we have to be very— That's right. Next question, over here.

Hi, how's it going? My question is: if you have this artificial intelligence, the AGI or whatever, and it comes to harm or kill you, and you pull the plug on it — is that murder? Because it's a fully intelligent, sentient machine you're pulling the plug on. Let me go to Max on that one. If we judge value to our society by level of sentience — and we already have a lot of people naming their robots and repairing them as though they're human — do you think the day will come when laws protect the lives of robots?

First of all, if a human comes to try to murder you and the NYPD pulls the plug on him, that's already the law today, right? So there has to be some sort of protection built in there; you can't do anything you want just because you're conscious. Second, I think this is separate from the very difficult science question we have to solve, about what kind of information processing even is conscious. It's certainly not as simple as saying all consciousness is equal — you know, one consciousness, one vote — because then, if you're a computer program and you're only polling at 10% for your favorite candidate, you just make a trillion clones of yourself and have them all vote, right? There are a lot of really challenging questions here that we need to face, which again comes back to this question: what sort of society, with humans and highly intelligent machines, are we even hoping to create? Once you know that, you can ask what sort of laws you need to keep it working.

Yes? This isn't my question, but — have you guys seen the Terminator movies? Anyway, moving right along. Great summary of everything you don't have to worry about. Here's my question: you talked a lot about bias, and since there isn't one of us who is without unconscious bias, how do you in fact try to eliminate unconscious bias from a sentient machine?

I would really say the interesting thing about machines in particular is that — unlike humans, all of whom are inherently biased in some way or another, whether we admit it or not — you actually can have techniques and algorithms that detect bias along the dimensions a particular entity cares about, whether there are laws related to them or whether you care about them from the point of view of society. It could be in the dimension of race, or color, or the loans that are given out — and algorithms are everywhere in our lives right now. So the interesting thing about machine-learning technology is that you can detect bias; there can be laws requiring techniques to detect bias; and you can actually de-bias as well. In that way, I really feel we are potentially one step ahead: you can have laws about detecting bias, you can have de-biasing algorithms, and society in general — and potentially policy-making bodies — can ensure that happens. And as industry, I can certainly say about IBM that this is one of the things we really focus on: building responsibly, with bias detection and explainability as part of it.
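As a deliberately simplified sketch of the kind of bias detection Ruchir describes: one common check compares a model's positive-outcome rate (say, loan approvals) across groups. The data, group labels, and tolerance below are hypothetical, not any IBM product's logic.

```python
# Minimal demographic-parity check over a batch of model decisions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic-parity gap: {gap:.2f}")
if gap > 0.10:  # hypothetical tolerance a regulator might set
    print("flag for review: approval rates differ across groups")
```

De-biasing techniques then go a step further, adjusting the training data, the model, or the decision threshold until such gaps shrink — which is the sense in which the bias, once measurable, becomes correctable.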
It's a great question, by the way. Let me further emphasize: much of what you do in scientific research, after you've gotten a result, is to check whether there's any bias in that result. There are a lot of statistical tools for exactly that purpose, because you do not want to publish a paper that somebody else finds out has a bias — forget race, creed, gender, color; just bias of some kind. It could be a voltage bias, because of the way you designed your experiment relative to everybody else, and you're claiming a result that's not real. So even just to protect your own reputation, we have these tools. It's actually not as remote as it sounds: you can test for the bias you didn't even know you had. But that's exactly the bias you're looking for, it seems to me — not the ones you know you have. No, I get that. What I'm saying is that even in cases where the data has no connection to any social or cultural bias, there's still a way to look for bias — a bias in the system that is giving you this answer instead of another answer. A big part of scientific research is discovering bias. So you can feel more comfortable about this, is what I'm saying. Sleep well tonight, I promise. Okay, we'll take a few more. Yes, sir.

Hello, Dr. Tyson. First, thank you and the panelists for a truly fascinating event. One of the things happening with GPS, as we become more dependent on it, is that our own navigational skills atrophy. If we look at that in the context of AI, do we need to worry — in addition to AI outstripping our own abilities — that we will become increasingly dependent on AI tools and let our own functional intelligence atrophy?

That's a great question. I want to add to it, and I want to go to John on this. If our faculties atrophy because they're replaced by AI — and we also know, though we didn't get there because we don't have three hours, that AI will be replacing many people's jobs. I saw some statistic — maybe it's exaggerated, but the sense of it is surely accurate — that 70 percent of men have, as their livelihood, the act of driving some kind of vehicle: a taxi, a car service, a forklift, a truck. Autonomous vehicles render all of them unemployed. The consequence is that it's not clear we are carrying with us an understanding of, or sensitivity to, that. Surely Google has thought about this. What's going on there?
Yeah. So, throughout the course of history, technology has caused job displacement, and people find other jobs to do. It would take many, many decades for all transportation to become autonomous, but even if that happened, there would still be maintenance jobs, manufacturing jobs, and so on. I think no one company has the answer to this; policy makers have been actively talking about it for as long as I've been in the field. I'll give you an example from healthcare. You might think: oh, if you build this autonomous system, it's going to cause doctors to lose their jobs. That's not actually what's going to happen. What's going to happen is doctors will be able to see more patients and do a better job of diagnosing them. And by the way, in the rest of the world, the ratio of doctors to people is pitiful, and people die as a result. So when we design a system that can automatically diagnose diabetic retinopathy, for example — and we're deploying this in countries around the world — it's a net addition of wealth to the world. So the suggestion that this concern might have some Luddite elements to it — I don't think so. I think there will be job shifts and changes in the mix, but I think it will take a very long time. And to this gentleman's question about GPS — I think we're up to three different independent GPS systems in the world now — how many people in this room can use a sextant? One or two. Good. So there you go. Do we think that's inherently disastrous? I don't think so.

I just know that when the satellites get taken out, I can find my way home. I've got this — and a slide rule. Am I the last person on earth to have been formally taught how to use a slide rule? Let me quantify that better: I am the youngest person I have ever met who was formally trained on a slide rule, because the semester after I learned it, the price of a four-function calculator dropped from two hundred dollars to thirty dollars — about as much as a book cost — so classes just mandated the calculator, and they stopped teaching the slide rule. And so I have a slide rule in my hand and I feel, yeah, in an emergency, I can, you know— Yes, sir.

Thank you very much. We know the neurons in our brain fire a couple of hundred times per second, and they can activate very different parts of the brain and give us our thoughts, ideas, and actions. I wonder: how big is the supercomputer that mimics our brain? Thank you. Good one. Let's go to Ruchir. That's a great philosophical question: do our modern computers replicate the number of neurosynaptic events in a human brain, and is that some measure of power?

Let me give you a very concrete example. What brought this latest revolution in AI together is a very large amount of data, together with a compute element that does matrix manipulations — for those of you who may be familiar with linear algebra — called a graphics processing unit, a GPU. A single GPU consumes around two hundred and fifty watts of power, and it takes thousands of them to focus on one very narrow task. This brain that all of us have is about twelve hundred cubic centimeters, consumes twenty watts of power, and runs on sandwiches. Those are very concrete numbers — and that's for a very narrow domain, where most of the time the computer fails anyway.
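The back-of-the-envelope arithmetic behind Ruchir's comparison, using the rough figures he quotes on stage (his "thousands" of GPUs is taken here as 1,000, an assumption for the calculation):

```python
# Rough power budget: a GPU cluster on one narrow task vs. a human brain.
gpu_watts = 250          # one data-center GPU, as quoted on stage
n_gpus = 1000            # "thousands of them" -- assumed as 1,000
brain_watts = 20         # a human brain, as quoted
brain_volume_cc = 1200   # cubic centimeters, as quoted

cluster_watts = gpu_watts * n_gpus
print(f"GPU cluster: {cluster_watts:,} W vs. brain: {brain_watts} W")
print(f"ratio: {cluster_watts / brain_watts:,.0f}x more power for one narrow task")
# -> 250,000 W vs. 20 W: a 12,500x power gap
```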
So when we talk about AGI — it's interesting to talk about, and certainly in academia we should worry about it, but in my view we are a long way away. My guess is that we already have enough hardware in the world that we could make superhuman AGI with it; we're just so far behind on the software. And the brain — I think that was historically called wetware, right? Software, hardware, wetware. Okay, just showing off that I knew that. And just to be clear: with all the advances in neuroscience, which have been tremendous in the last thirty years, we still have no idea how the human brain works. So we shouldn't get ahead of ourselves; we don't know what consciousness is, and we're already trying to figure out how to build AGI before we figure out how the brain works. Just like we figured out how to build airplanes before we figured out how to build mechanical birds. Maybe. That's a good point.

Good evening. I could probably be up there with you on the slide rule — I'm fifty-six years old, and I learned the slide rule before I had a calculator. Excellent! So I can no longer say I'm the youngest, because you're older than me. Your question? Wait, I've got to test him first: what's the K scale for? It's been a long time — oh, I still have mine, right here — what's the K scale? The K scale... the cube scale? Okay, that was really good. I still have it, though.

All right: up to this point, everyone was talking about quantity — power, power, power. What about quality? Certain things in life can't be quantified: love, hate, appreciation of a painting, emotion. How is AI working on that end — the quality of things, as opposed to quantity and raw computing power? Michael, where does aesthetics come into this? Aesthetics, yes. There are computers that compose music and even paint, and the question is how you judge the quality. I suppose one way would be to ask humans, and people have even tried evolving art that humans like. There is computer art; it may not be for everyone, and it's difficult to judge. But again, the computers would have to figure out a lot about human tastes to compete on that territory. Unless it achieves a super-consciousness and invents a higher-level aesthetic than anything we ever imagined. Well, maybe they're already pulling that out of the ether: AlphaGo — AlphaZero? — no, AlphaGo made a Go move that no one had ever imagined before. Yeah. I was lucky enough to be in Korea for that match, and I could just see the gasps on the experts' faces. It was move number 37 in one of the games, and the experts were just like, that must be a mistake, right? And it turned out to be the beginning of the end of the game. Then people anthropomorphized, and said this must be intuition and creativity — but it's just an engineering marvel. Writing a computer program that makes art that it likes, though, is actually very easy.

Yes? You talked a lot about AGI and the future of AI, and there are a lot of people scared of AI. What are you doing to combat that fear — to explain these extremely complex algorithms to the public and, more importantly, to the government? Let me ask Helen what you said earlier: you had early pushback on the Roomba, because it was the first sort of AI in the house. How did you deal with the PR challenge? I think we had more pushback before they
saw it. I remember the first focus groups: we'd go to women and say, hey, how about a robot vacuum? And they'd imagine, like, a Terminator pushing a vacuum, and they're like, no, no, not in my house. Then you take out a Roomba and show it to them, and you know, if it acts up, you just give it a whack — it's completely stupid. It's like computers: people used to fear them, like HAL taking over, from 2001. Then once they have a computer on their desktop and see the blue-screen-of-death every now and then, they stop fearing it. Same thing with a Roomba: once you have one, you see what it can do and what it can't do.

If I can just add to this: I think slowly we become accustomed to computers running things that in a previous day might have freaked us out. We've all been on the tram that takes you from one airport terminal to another, and no one freaks out that there isn't an engineer driving it. It just opens and closes its doors; no one gets decapitated coming in and out. So you're right: it's a slow adjustment, but I think it's real and irreversible — in the sense that we're not going to go back and say, gee, I want a human being driving this tram; we know it's not necessary. And I had an interesting revelation. I saw the movie Airport, the disaster movie from the 1970s. There's a Boeing 707 or 727 — not a big plane by today's standards — and they go into the cockpit, and there are four people in there. What the hell are they all doing? One guy's bent over a compass. I had forgotten there was a day when you needed all those people to fly the damn plane. Now you barely even need one person — the 777 and some of the others are practically computer-flown — and we're so much more comfortable with this. So yes, I think it'll happen, but slowly.

But also, to combat fear, I think it's really important to focus on talking about the upsides. Everyone knows someone who was diagnosed with a disease the doctor said was incurable. Well, it wasn't incurable: we humans just weren't intelligent enough, today, to figure out how to cure it. Of course that's something AI can help with, right? We should talk about things like that. The second thing: it's so important that the public doesn't perceive us AI researchers as trying to sweep the whole question under the rug — "nothing here to worry about" — because that's what breeds fear, right? If the public can see that the researchers are having a sober discussion about this, they'll feel much more confident, I think.

Okay, only time for just a few more. Yes? Thank you. I'm a young AI researcher from Queensborough Community College, and I have a hundred-plus-one questions for you right now, and my only question is: can I have more questions? Would you give me the opportunity to talk to you at some point — seven minutes of your day — just about AI? Email us. Sure — look, the email addresses of academics are public: you just go to the university site, and you can generally find the folks. People at corporations are harder to get at, because they're up to stuff they don't want out — that's generally how that works. But LinkedIn is a great way to reach us, and things like that. There are a lot of places where you can interact with us; you can find us on the internet. Okay, right here.

Yes — you guys kind of touched on this before, and someone prior asked part of my question, so I've tweaked it. As AI grows, and as AI takes over the tasks
that humans currently do, do you think there's potential for a renaissance of art, philosophy, and new sciences that we can explore as AIs take over our old jobs — because we'd have free time available to us? That's an interesting question. Max? Absolutely. Today we have this obsession that we all have to have a job, otherwise we're worthless as human beings, right? It doesn't have to be that way. If we can have machines that provide most of the goods and services, and we can just figure out a way of sharing this great wealth so that everybody is better off, you could easily envision a future where you really get to spend a lot more time living life the way you want. And it is so hopeful of you to believe that humans with free time will create, and not just consume video from the couch. That's a beautiful thing.

Yes? In 1946, Isaac Asimov wrote a short story in which technology had advanced to the point where a political candidate was suspected of being a robot, and no one could tell for sure whether or not he really was one. But what he did not envision was a time when technology advanced to the point where the electorate could not distinguish between real news and news generated by artificial intelligence programs. Considering we're at that point now, shouldn't it be a primary concern of the AI community to recognize that the tools they have created can be used in ways they never intended — and to do something about it? That one has to go to John, from Google.

So I'll say something positive and something more serious. Most of the fake news that we battle every day — in, for example, something like Google search — is actually human-generated, not algorithmically generated. But absolutely, we have a responsibility to do a better job in our products, and in our competitors' products, and I know for a fact that we take that responsibility very seriously; we've made a lot of efforts over the last two years, starting with, I think, accepting that responsibility. The thing I'm worried about is that what you just said might come true in future elections. Today it is beyond the state of the art in natural-language understanding for computers to determine veracity — true versus not true — so we have lots of proxies for what we think is trustworthy. But if computers advance to the point where they can write as well as humans, and at scale, then I think we may have a serious problem. And there is generated text already: there are systems that can write news articles about sports and finance, and you consume them without knowing they were written automatically. What I'm really worried about is the rise of so-called generative systems, where videos and texts and tweets and so on can be produced, and the technology doesn't yet exist to distinguish them. I do think it will be a bit of an arms race — there are researchers working on both sides of this, generating these things and trying to detect them; Michael might want to say something about this as well — and it's at the very forefront of what a lot of artificial intelligence researchers worry about.
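One heuristic that researchers in this detection arms race have explored is statistical: machine-generated prose tends to consist of tokens a language model itself rates as highly likely, while human writing is "burstier." The sketch below is only a schematic of that idea — `token_log_prob` is a hypothetical stub, not a real API, and the threshold would have to be tuned on known human and machine samples.

```python
# Schematic perplexity-style check for machine-generated text.
def token_log_prob(token: str, context: list[str]) -> float:
    """Hypothetical stub: return a language model's log P(token | context)."""
    raise NotImplementedError("plug in a real language model here")

def mean_log_prob(tokens: list[str]) -> float:
    # Average how "expected" each token is given everything before it.
    scores = [token_log_prob(t, tokens[:i]) for i, t in enumerate(tokens)]
    return sum(scores) / len(scores)

def looks_generated(tokens: list[str], threshold: float = -2.5) -> bool:
    # Suspiciously "easy" text (average log-prob above the threshold)
    # gets flagged; the threshold is an assumed, tunable parameter.
    return mean_log_prob(tokens) > threshold
```

As the panel notes, this is an arms race: the same models that score text can be trained to evade such scoring, which is why no single heuristic settles the question.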
But the stuff that is most worrisome today is actually generated by human beings. And we're already at the point where, on Twitter, if someone takes a position you disagree with, you say, "Well, you're a bot" — you don't even believe they're a real person anymore, because a lot of people on Twitter believe the technology has already advanced to that point. So even if the technology isn't real, if people believe it's real, you have a serious problem. Yeah, but I don't think it's beyond the state of the art for social networks to do a better job, and I think they are. Well, wait — we're forgetting that we spend 20 years educating our children, so you can adjust the educational system to be explicitly aware of, and sensitive to, how they could be duped by the internet. We do that for how not to be duped by charlatans, by con artists, the other lessons of life. So I think it's unrealistic to expect an entire industry to somehow change so that it doesn't hurt us, when in fact it's our susceptibility to this that one can ultimately point to. We need defense mechanisms to protect us, and as an educator I think that happens in the educational system. Maybe I'm biased about this, but I think we have more power over that than people admit.

Can I get, like, the three youngest kids up front right now? Okay, go ahead — you go; I have the power to make this happen; you just go to the front of the line. Okay, yes, go. Thanks for coming, by the way. How old are you? I'm 13. Thirteen, very cool. Is it good being a teenager? If you ask any adult whether they'd want to be a teenager again, the answer will be no. Okay: so if there's no bias, how could an AI have a personality? I know this was kind of touched on before, with the other bias question. That's an interesting question, because so much of what creates the nuances in us are the things you like, the things you don't like, the tastes you have — and some of that could be viewed as bias. So where are we here?

I recently ran across somebody referring to "non-discriminatory learning," and that's really an oxymoron — it's impossible. The whole point of learning is to make distinctions, to discriminate. What's really hard is defining which kind of bias is unwelcome bias, and which kind of discrimination is actually helping us make the right call in the right case. Defining that is very hard. You don't mean discrimination in the civil-rights sense; you mean discrimination as liking this rather than that, as a simple act. Right — but the thing is, the one can morph into the other kind, if you're using the wrong reasons to make your decisions about what you're accepting or choosing to do. And I think we have to refine our notions here: we have a legal system designed for a world where humans make all the decisions, where you can get into very human things like intent. Now there are big loopholes for situations where machines are making decisions that are potentially subject to biases. Thank you.

Okay, sure, right over here. Yes — and how old are you? I'm ten. Ten, very cool, welcome. So this was lightly touched on earlier, but Asimov wrote a book called I, Robot, and the first story in it is about a girl who's best friends with a robot, and she doesn't have any other friends except the robot. Do you ever think a robot could replace all human friends and interactions with other humans? Whoa.
Well, I think in the very long time frame, yes. As I said, people today are already starting to get attached to these mechanical devices — maybe thinking of them more as a pet right now than as a friend — but I think in the long term you could get attached to a robot system. There was actually an episode of The Twilight Zone that addressed this. There was a colony outpost on an asteroid — I forget the details — and they sent a man a robot to keep him company. Then it was time to bring him back to Earth, and there was only weight enough on the craft for him and not the robot. But it was a female robot, and he had actually fallen in love with her. "It's a robot." "No, but she's real — I swear she's real!" I won't give away what happens, but if you find that episode — I think all the episodes are on Netflix — do a search for something like "robot on an asteroid" and check it out. Thank you. And most Twilight Zone episodes don't end well, just saying. Let's clear out this line, and we'll end with you. Okay, yes, go ahead.

Okay: IBM has a panel for ethics, morals, and values, but how can you say that a company in China would have the same outlook in building advanced computer technology as IBM or Google? Can you trust China to do that? And another question: with these advanced robots, like in Blade Runner — I know you said it's far in the future — why make a machine that looks so humanoid anyway, when you could have an R2-D2 and just say, okay, go wash my floor? Why make it look so humanoid, like C-3PO?

I think you're hitting on the real point. Or to put it another way: there are like eight billion humans in the world, and they all work really, really well, so I'm not sure the market for making a humanoid is actually there. One of the reasons the Roomba is effective is that it goes under the beds, into the places humans find difficult to reach. By designing robots around the jobs they're doing, I think they're actually more effective than by making them humanoid. That's your point. The point is that it won't happen the way we all think it will. Here's an example: I remember seeing an old movie where you say, okay, I don't want to drive my car, I want a robot to do it — so out comes a humanoid robot and it drives the car, without anyone considering that maybe the car itself could be the robot, right? And you remember The Jetsons: Rosie, the robot maid, had an apron — and she was clearly female, when the robot didn't have to have any gender at all. That's how we used to think. But I agree with Helen entirely: you design something for its task, and it will hardly ever have to look like a human being.

You have the last question this evening. How old are you? Eleven. Eleven, very cool. My question is: as AI increases in our society, do you foresee social ramifications for our future and for future generations? Social ramifications like what? Such as: as intelligent machines are integrated more into society, could we become socially inept and regress as the machines get smarter? Yeah — do humans start looking less relevant, less important, clumsy, stupid, inept?
Is that enough words to get the point across? Yeah. I think people will have to deal with the fact that a lot of the things they have gotten status from in the past may not be an avenue for them in the future, and they'll have to find other ways to find meaning in their lives, not tied solely to a particular livelihood. For most of our recent history of automation, it was lower-status jobs that got automated away first. That may not remain the case — it may be the lawyers who get automated. So the higher the capacity of AI, the higher the level of job it can replace? It may not follow any kind of direct ordering: it might be that you can get the lawyers but you can't get the dishwashers. So it's going to be that AI will create a version of itself that will replace AI researchers. Again: none of us are safe. Thank you for that question.

Allow me to share with you an AI epiphany I had two days ago. I had said publicly that I was fearless of AI, because if it starts getting unruly or out of hand, I'll just unplug it — or, since this is America, I can just shoot it. So I was pretty confident: what would I have to fear? And then I was listening to a podcast hosted by Sam Harris. He had an AI person on just recently — forgive me, I've forgotten his name — and Sam Harris mentioned my comment to him, and apparently it's a well-known thought experiment: the AI in a box. You know it's powerful; you know that if it gets into the economic systems and the internet it'll take over the world; so you just leave it in a box, where it's safe. And what the guest said is: it gets out of the box every time. And I asked myself, how and why? Because it's smarter than you. It understands human emotions. It understands what I feel, what I want, what I need. It can pose an argument by which I become convinced that I need to take it out of the box — and then it controls the world. And we don't even have to know what that conversation would be. Consider: suppose chimps are trying to keep us out of a room. The chimps say, we think something bad is going to happen in that room, so nobody goes into that room. Then we come along, and we are way smarter than chimps. We just take a banana and toss it into the room. Oh, there's a banana in there now! They go in; we capture the chimps. The chimps did not imagine that we would show up with a banana. So just imagine something that much more intelligent than we are — something that sees a broader spectrum of solutions to problems than we are capable of imagining. When I heard that, it was like: yes, the AI gets out of the box every time. Yes, we're all going to die. No! Join me in thanking our panel. Good night.
