Google for Startups – Campus Chats: Demis Hassabis, DeepMind


Good evening, Campus London. Okay, this is one of the most energetic audiences we've ever had, right? You guys are very excited. Hello, my name is Sarah Drinkwater. Welcome to Campus London. I got a woo, I love it. Without further ado, I'd like to welcome to the stage Demis Hassabis of DeepMind. You can't see it, but we have our names on cheat sheets here, just in case we forget them on the way to the stage, which is quite fun, right? Demis Hassabis is the founder and CEO of DeepMind, a neuroscience-inspired AI company acquired by Google in 2014 in its largest European acquisition to date. He's led projects including AlphaGo, the first program ever to beat a professional player at the game of Go. Demis was a child prodigy who finished his A-levels two years early before coding the multi-million-selling simulation game Theme Park at only 17. After graduating from Cambridge with a double first in computer science, he founded the pioneering games company Elixir Studios. After a PhD at UCL and research at MIT and Harvard, Demis founded DeepMind. His research connecting memory with imagination was listed in the top ten scientific breakthroughs of 2007 by the journal Science. He is a five-time World Games champion, recipient of the Royal Academy of Engineering's Silver Medal, and a fellow of the Royal Society of Arts. Demis, welcome.

Thanks. Thanks for inviting me.

So we have an audience here of entrepreneurs and startup workers, many of whom are working in the AI and ML space. And it's been really interesting watching the sheer interest and buzz in the media over the last couple of years. If you were talking to a layperson, and this audience is not laypeople, I would argue, how would you define DeepMind and what you guys do?

Well, our mission statement at DeepMind has always been quite simple to state, really. The idea is that we are going to try and fundamentally solve intelligence. And our belief is that once you've done that, you could use it for practically anything else.
So more prosaically, the way we're trying to do that is by building learning algorithms: algorithms that are very general, that learn how to do things rather than being programmed with a solution, that learn directly from experience or directly from data, so they effectively find structure on their own. And then the plan is that if you have such a general system, you could apply it to all sorts of domains where there's a huge amount of data or a huge amount of complexity, perhaps so much complexity that even the best human experts in that area can't comprehend it all unaided. And the idea would be to create this amazingly powerful tool, which those experts can use to make further, bigger breakthroughs in their own areas: science, healthcare, almost every area you can think of.

I'm really interested in this: you've been a world-class chess player, you've been an academic, you've been a games designer. What have all of those fields brought to DeepMind and the culture that you've created there?

Well, if you look back at my experiences, people tend to think it's quite an eclectic path I've taken, with quite a few different things. But my plan has always been, since I can remember, to work on AI ultimately, to do something like DeepMind. I always felt from a very young age that it would be one of the most profound things that could be worked on, that it would also be extremely interesting and fun to work on, and that the impact of it could be enormous. And all the things I chose to do, from around the age of 14 or 15, were with that end goal in mind. So I started off in games and learnt programming, and games themselves ended up being a huge part of what we do at DeepMind, in the sense that we use simulations, including games, to develop our AI algorithms in and to test their performance. It's very convenient to do that.
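The "learning rather than being programmed" idea can be illustrated with a deliberately tiny sketch. This is purely illustrative, nothing like DeepMind's actual systems: instead of a programmer hard-coding a decision rule, a one-neuron perceptron recovers the rule from labelled examples alone.

```python
# Toy illustration of "learning from data" versus "programmed with a solution":
# a tiny perceptron discovers a decision rule from examples, without the rule
# ever being written into the code.

def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of ((x1, x2), label) pairs, label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = y - pred  # learn only from mistakes
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# The data implicitly encodes a rule (roughly "label 1 when x1 + x2 is large");
# the learner finds structure on its own, as described above.
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

The contrast with the Deep Blue era discussed later in the conversation is that here no "if-then statements" encode the solution; only the training examples do.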
I'm really fascinated by the idea of simulations in general as a way of generating data. You don't have to use data that's already out there; you can actually generate your own data, synthetic data, if you have virtual environments and simulations, which I think is an important point we should maybe touch on later. And then the neuroscience component was about finding out how the brain works. Obviously, the human brain is the only example we have of the kind of general learning system we would like to build and mimic artificially. So it's the existence proof. It seemed to me worth spending a lot of effort trying to understand how the brain works, at least on a functional level, if not on a detailed biological level, and that would inspire new ideas about algorithms and representations and so on. And simultaneously, because I never liked doing things for just one reason, ideally you're doing things for multiple reasons: one was learning the neuroscience, and the other was understanding how academia worked, how cutting-edge research worked in the best labs in the world, what was good about those environments, what was bad, and comparing that against startup environments. DeepMind is actually a hybrid of what I learnt from the best practices of the best startups and the best practices of the best academic labs, fusing the best of those worlds together, which is one way of describing DeepMind's fairly unique culture. So, to answer your question, all of those experiences I've tried to utilise in some way, including my chess, which is probably fundamental to the way I approach all the planning I do, whether in business or anything else; I think about things with the kind of logical planning that you train up when you play chess from a very young age.

Yeah, that's interesting.
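The point about simulations as safe, repeatable training grounds, developed further below with the "gym for the mind" analogy, can be sketched with a toy example (entirely illustrative, not DeepMind's code): a tabular Q-learning agent replays a five-state corridor hundreds of times, learning purely from consequence-free trial and error which direction leads to the reward.

```python
# A minimal simulated environment plus a tabular Q-learning agent.
# The agent can "die" or wander endlessly in the simulation at no real cost,
# and still learn from every attempt.
import random

N = 5            # corridor states 0..4
GOAL = N - 1     # reward sits at the right-hand end


def step(state, action):
    """action: 0 = move left, 1 = move right. Returns (next, reward, done)."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL


def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    random.seed(0)  # deterministic for the sake of the example
    Q = [[0.0, 0.0] for _ in range(N)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit, occasionally explore
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q


Q = train()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N)]
```

After training, the greedy policy heads right from every non-terminal state, knowledge acquired entirely inside the simulation, which is the property that makes simulated environments so convenient for training agents.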
I was reading a piece this week about schools introducing chess for eight- to nine-year-old children; it really helps to focus particularly over-energetic kids. I was always that kind of kid. I'm curious about games in general, whether it's chess, whether it's Elixir and Theme Park, whether it's Go. This seems to have been a very big theme in your life. Can you tell us a bit about how you first got into games?

Yeah, so I guess I got into games firstly through chess. I was aged about four when, I can't remember this myself, but my dad told me later, I saw him playing my uncle at chess, and I apparently asked them how to play. They were sort of humouring me, you know, obviously he can't learn this, but they started trying to teach me, and after a couple of weeks I was beating both of them. So then they thought, well, maybe something strange is happening here, and they took me to chess clubs, and I was always winning in age groups many years above my own, and I ended up playing for England and so on. And the thing is, I think it's an amazing thing for a child to train in, because it teaches you so many meta-skills: planning, problem solving, imagination, how to cope with pressure and competition, what it means to excel at something, and the work that that takes. All of these are meta-lessons that you can then take into any other field. So I actually don't think it's so much about the chess itself; I just think it's an incredible training ground, if you like, for the mind. And I extended that later on in my games career: I learned many other games which are very interesting, including Go, poker, and lots of others.
And if I were to actually design an MBA, one of the things you could do is design it around games, with different games teaching you different skills that I think are really important in business, but also in science and many other things. Until at least my early twenties, I used to effectively play games professionally, either as my main thing or as a side hobby. And I used to view it as a gym for the mind. There are not many situations in real life where you can practise scenarios again and again, and test how you would cope under pressure or what your decision-making is like, where there aren't any really big consequences. So in a way that's obviously connected to the idea of simulations, where the whole point of using simulations for the AI systems to learn in is that they can try out things in the simulation, which is totally safe, it has no consequences, but they can still learn from it. So in a way, what we're doing with the AI algorithms is kind of what I did to myself when I was training as a kid.

I hadn't thought of that. So my parents met playing Go, which I find, you know, is the world's most complicated game; I've tried to learn it a number of times. And I'd love to ask a bit more about AlphaGo. You've had a couple of dazzling wins, in Seoul against an incredible human champion, who himself has gone on to win, I think, 13 matches since then. We've already talked a bit about gaming as a way to show capability and to help algorithms train in a kind of test environment, but I'd love to know more about that first contest in Seoul. How was that? Where did that notion come from?
It's amazing, I mean, there are so many things to say about that. First of all, the Korea match, which was really special because it was the first big one, was generally a once-in-a-lifetime type of experience. And the whole country... it's quite hard to imagine, because Go's not very big here in the West, but in Korea it's just absolutely massive, and literally the whole country came to a standstill. At the airport on the way home, the check-in person recognised me, and my driver was telling me that, for one week only, I was the second most famous British person in Korea after David Beckham. As you all know, we're researchers and programmers and business people; that sort of thing never happens, so it was quite funny to experience it for one week, and then come back to London as if nothing had happened, and everyone's like, you know, what is Go?

But it was actually a culmination of 20 years of thinking, because I learnt Go relatively late in my games career; I learnt it at Cambridge. Go is actually quite popular in the West in universities, especially in maths departments, because it appeals to mathematicians and physicists due to the nature of the game. So I learnt it there, and I really loved the game, but I didn't have enough time to practise it to the level you would require to get really good; but it always stayed with me. And then David Silver, who is the head of reinforcement learning at DeepMind, we were undergrads together at Cambridge, and he was the CTO of my first games company, so we've had a long history of working on things together. When we were programming computer games together, I taught him how to play Go. And this was just after Deep Blue beat Garry Kasparov, and we knew that Go was much harder than chess.
Even then, 20 years ago, we thought about building a Go program. And we thought about building it in the way that Deep Blue was built: a handcrafted set of heuristics combined with brute-force search, which is basically what Deep Blue is. And thinking it through, we quickly realised that that wouldn't work for Go, because it's so contextual. There's no way you could program enough if-then statements to figure out how this part of the board should affect that other part of the board, in the myriad configurations you could have. You just can't do it. It was obvious, once you started thinking about it, that it wasn't going to work. So we parked that thought back then, but we also agreed that that's what made Go really worthwhile working on, because if you could crack Go and get it to world-champion level, then you must have done something pretty significant in terms of AI, more significant than something like Deep Blue, which was quite narrow. And so we kept that with us. In fact, after the games company, we sold off its IP to Microsoft and other people; I went back to do the PhD in neuroscience, and he went to do a PhD on computer Go and reinforcement learning in Canada, so he actually followed that track a lot more. And then we came back together at DeepMind with our two different experiences, ready to actually tackle this problem. So although we'd been working on AlphaGo for about three years, two years up to the Lee Sedol match, the actual thinking behind it, and the idea to do it, went back 20 years. It was really the end of a 20-year culmination of effort and thinking. And just to mention Lee Sedol, and Ke Jie, the Chinese champion:
They're absolutely incredible human beings, and obviously incredibly creative. The amount of dedication and craft it takes to be at that level in anything, but especially in these art forms, and Go is an art, is unbelievable. And they're both such gracious people as well. I think one reason AlphaGo was received quite well by the Go community was that, because I'm a games player, and several other people on the team are too, I think they could instinctively tell, even through the language barrier, that I really appreciated and understood their capabilities, their talent and their art form. It wasn't just something to be conquered. In the past, a lot of the machine-versus-human-champion matches, it happened in draughts and it happened in chess, ended up being very antagonistic, because the programmers, brilliant as they were, were mostly not professional games players themselves. So I think they thought of it as this Mount Everest they had to climb, but it's not an inanimate object; it's another brilliant human being with their own brilliant craft they're bringing to it. And I think there was a deep respect from both sides: that AlphaGo, at the end of the day, is a human endeavour, humans built the machine, and we appreciate the artistry that these amazing players display. And what was very cool is that, as you mentioned, afterwards both Lee Sedol and Ke Jie went on huge winning streaks, and I think it inspired them to play new ideas and new moves. Actually, Lee Sedol went on a kind of eight or nine game winning streak, and Ke Jie is currently on a 13-game winning streak. I tweeted about Lee Sedol and he lost the next game, so I hope I haven't jinxed Ke Jie, because I just tweeted about that as well.
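The Deep Blue recipe described a little earlier, handcrafted knowledge plus brute-force search, can be sketched on a trivial take-away game. This is a purely illustrative toy, not Deep Blue's or DeepMind's code; the real Deep Blue also cut its search off at a fixed depth with a hand-tuned evaluation function rather than searching to the end as this toy does. The comments note why the same recipe is hopeless for Go.

```python
# Toy "Deep Blue recipe": exhaustive minimax search for a small game.
# Rules (a made-up example): players alternately take 1-3 counters;
# whoever takes the last counter wins.

def minimax(counters, maximizing):
    """Score from the maximizing player's viewpoint: +1 win, -1 loss."""
    if counters == 0:
        # The player to move has nothing to take: the previous player
        # took the last counter and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2, 3) if m <= counters]
    scores = [minimax(counters - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)


def best_move(counters):
    moves = [m for m in (1, 2, 3) if m <= counters]
    return max(moves, key=lambda m: minimax(counters - m, False))

# This works because the tree is tiny (branching factor 3). Chess has roughly
# 35 legal moves per position, which Deep Blue handled with pruning and a
# hand-tuned evaluation at a depth cutoff. Go has roughly 250 moves per
# position and, as discussed above, no obvious handcrafted evaluation, so
# b**d search is hopeless at useful depths: 250**8 is already ~1.5e19
# positions.
```

For this game, positions where the counter total is a multiple of 4 are lost for the player to move, so the search correctly leaves the opponent on a multiple of 4 whenever it can, exactly the kind of pattern a handcrafted-heuristic programme would instead have had to encode by hand.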
I think what I love about that as well, and I'm going to ask you later about the worries we have around AI, is that that story in particular makes me think these programs can help us be more creative and more imaginative, right?

I think so. And I think that's the key thing about these matches. It wasn't actually the fact that we won, although of course that was great; it was the way it won and the types of things it did. It wasn't just regurgitating human ideas, copying them and calculating more efficiently. It actually came up with genuinely creative, original ideas. And the cool thing about Go, as I always say, is that it's art, but it's objective art, because at the end of the day someone wins, so you can measure it. Any one of us here could come up with a novel move: we could play a random move and it would be an original move in some sense. But the point is, is it any good? Did it help? Was it pivotal in winning the game? Only then can you go back and judge whether that move was genuinely creative, a piece of creative brilliance, or not. It's not enough for it just to be novel; it has to be effective. And that's not a matter of opinion: you can calculate it out after the fact. And AlphaGo has created, in all the games it's played, countless genuinely new ideas, in a game that humans have played at a professional level for hundreds of years and that has existed for more than 3,000 years. You could argue it's one of the things that we as humanity have contemplated most heavily, continuously for 3,000 years. And yet AlphaGo is able to find totally new ideas.
And not only has that directly influenced what people are trying out now, because they've been analysing all these games, but it's also freed up their minds. Like Fan Hui, the European champion, who now consults with us: he said to me that he felt his mind was freed from the shackles of tradition, that he could now think previously unthinkable thoughts and no longer had to be constrained by the received wisdom handed down to him. So I think it's going to inspire a whole new level of creative exploration in the game of Go.

As a games player, have you played AlphaGo? How was it?

Yes. So this is pretty funny actually, because I'm not that strong at Go, and very early on it was already much stronger than me, so there wouldn't have been much point playing against it. What was really more fun was playing with AlphaGo. Out in China, for the recent match, we had the game against Ke Jie, but we also had some other interesting formats. One of them was called Pair Go, where you play as a team, normally of two, and you take turns to play with no conferring, so you're trying to infer what your partner is trying to play and what they're trying to achieve. And we had a version of that where AlphaGo would be your partner, you playing alternate moves with AlphaGo. It was really cool. I had a game of that with one of our programming team, and it was a very fun experience, sort of having AlphaGo on your side, correcting the errors in your play and making up for the things you haven't seen.

It's interesting, the idea of a machine that powerful assessing how predictable you are as a human, right? It says something about decision-making methods.

Yeah, no, it's really interesting.
What's interesting is that this version of AlphaGo doesn't have any opponent modelling, so it isn't trying to adjust itself to what you're doing; it assumes it's playing itself, effectively. That's how it's trained. But you could create another version that also tries to predict your moves, maybe playing moves that it knows you're going to be able to understand the motive for, and things like that. I think that could be quite an interesting variation.

So, I think every founder CEO has an incredibly hard job, but more than most, you really do, particularly given the area you're working in and the potential social impact; it's a really fascinating space. And so much of it depends on the intent of the creator. Even from our brief conversations so far, it sounds like there's been a huge amount of intent behind DeepMind's work. How do you think about that whole notion of intent?

Well, we take that responsibility extremely seriously, and we always have done, from the beginning of DeepMind when we started it in 2010. Everyone in this room knows AI is a huge buzzword now, and many of you are working in it, but it wasn't like that in 2010, right? If you said the word AI to a VC, they would have just rolled their eyes at you. Now they'll throw $10 million at you, right? It just wasn't like that a few years ago, seven years ago. But even then, we had this big mission in mind, and we were planning for success. We really thought we could make great progress, that the time was right: there was enough compute power, there was enough data, there were interesting algorithms, the beginnings of deep learning were there, and we knew about reinforcement learning. All of these things came true in the end.
And it's always been in the back of our minds that if this really is going to be one of the most significant technologies humans ever invent, then of course you've got to take the ethics of it very seriously. We've been at the forefront from the beginning in terms of thought leadership on how society should think about this and how the other big companies should act. We were a big driving force behind the Partnership on AI, which is a big collaboration between all the big tech companies, coming together to start thinking about the ethical deployment and use of these technologies. And then further afield, we talk to lots of academics, including people like Nick Bostrom and the Future of Humanity Institute at Oxford, and similar groups at Cambridge; there are lots of think tanks now thinking about the potential impact of this technology, and they're regular visitors to our offices, and we have regular brainstorms with them. It's going to be very interesting over the next few years to see how to map all of that out. Obviously we don't have all the answers, and no one does, but I think we've got to start thinking about these issues seriously and also doing a lot of research: technical research on safety, on interpretability and transparency of these systems, on bias in the systems, all these technical challenges; and then the ethical challenges of distributing the increased productivity so that everyone benefits, not just a few people. So there are huge challenges across the whole gamut, from the technical side to the ethical and policy side.

Big job. And I guess, what do you make of the debate around automation and jobs? Do you see this as the latest wave of Luddite-esque concerns about how we'll rethink the workplace in general?
Or do you think it's something more profound?

Look, I think it's not clear yet. We've seen that any time a major new technology comes in, it creates a big change, right? We've known that since the industrial revolution; the internet did that, mobile did that. So you could view this as, and that's still not to underplay its importance, another really big disruptive event in that lineage. That's certainly one reasonable view, in which case society will just adapt, like it's done with all the other things, and some jobs will go, but new, hopefully better, higher-quality jobs will become possible, facilitated by those new technologies. And I think that's definitely going to happen in the shorter term. The question then is whether this is really some kind of one-time epochal event that's beyond the level of even those big things. I'm not sure; I think you could argue that it might be. And then the questions become more profound. Firstly, if you've got all this increased productivity, you need to make sure it's distributed fairly, but I think that's more of a political issue than a technical one. And if society has managed to do that, then the next question is about things like purpose, those kinds of higher-level questions, which I think are very interesting to think about. And we need to do a lot more thinking and research about how that might go.

So I want to shift gears a bit and ask you more about DeepMind and London. You're headquartered in the UK, and I'd love to hear a bit more about how you've grown in the UK and how London in particular has helped the company.
Obviously 2010 was a very different world, as we were talking about earlier. But tell us a bit more about your thinking; some companies in your position would have moved over to the States.

I think most companies probably would have done, yeah. So there are quite a few interesting things to say about that. Firstly, I was born in London and I'm a proud, born-and-bred Londoner. I'd obviously visited Silicon Valley and knew people out there, and I'd also been to MIT and Harvard and seen the East Coast, and there is this view over there that these kinds of deep technology companies can only be created in Silicon Valley. Certainly back in 2010, that was the prevailing view, and I wanted to show that it wasn't true. That's one of the many reasons we're in London, beyond the fact that I was living here and the people who started with me were also already based here. We have world-class universities here, Cambridge, Oxford, Imperial, UCL, King's, all these amazing places producing talent just as high-quality as those other places in the US. It just felt like it hadn't been galvanised properly, and the level of ambition wasn't there, but there was no real reason why it couldn't be done here. One reason you could have given back then was lack of funding, but you could get funding from the US and still be based here, which is what we ended up doing. And it's a funny story: our first big major investor was Peter Thiel's fund, Founders Fund. I remember the first time I pitched to him; he decided to invest within the first meeting, but it took about three months before the deal actually closed, because the main sticking point was that he wanted us to move to Silicon Valley.
Because at that time, he'd never invested outside the US, I think, maybe not even outside the West Coast, because he felt the power of Silicon Valley was sort of mythical, that you couldn't create a successful big technology company anywhere else. Eventually we convinced him that there were good reasons to be in London. One of the things I thought at the time would be a competitive advantage was talent acquisition. It's partly that it's cheaper, but actually, if you were, say, a physics PhD out of Cambridge, and you didn't want to move to the US, but you also didn't want to go and work in the City for a hedge fund, which was luckily becoming less and less attractive to people around 2010, after the crash, and I think that's good, but you wanted to do something that would really push you to the limit intellectually and utilise all your skills, then there weren't many other options in the UK, or even Europe, I would argue, for that type of talent to feel fully satisfied by the challenge of their job. So I felt we would have Europe to ourselves for a while, which is kind of how it was. And that built up our initial first 50 people or so, which was the critical mass we needed to go beyond that. Of course, now everyone knows that secret, and there are loads of startups in deep technology, and I hope there will be many more. Hopefully DeepMind has made it easier for all of you to raise money, because we've shown that this is possible, that you can get a great exit here, and that the talent is here, and the drive and the ability to create those kinds of companies, which is partly what we wanted to show.
And I'm really pleased to see other success stories, like Improbable and Magic Pony and all these others that have happened, reinforcing the fact that, yeah, this can be done anywhere, and London, and the UK, is an amazing place to do it.

No, I couldn't agree more; I feel very strongly about that. And with things like CognitionX recently, and our residency programme, applications are open now, I know a lot of you are applying for it; it's our six-month programme for machine-learning-powered startups. That's my pitch, over quickly. It's a really good time for companies in this vertical right now. For a lot of the people in the audience who are post-seed, particularly thinking about commercialisation, what would be your advice for them as founders?

Yeah, so I was thinking a little bit about this. There are a few things I've learnt through the two main companies I ran; I've been involved in several startups, but the two main ones were my games company, Elixir Studios, and then DeepMind. And what I found is that running a company is incredibly hard, right? All of you know that. But it doesn't really matter whether your idea is a smaller idea or a really big idea: it's still just as hard. So there are actually a lot of advantages to it being a bigger idea, in terms of exciting co-founders and top talent, and getting investors excited. And also for you yourself: when things get hard, if you're tackling a really big problem that is important to you personally but also important to the world, I think that can get you through a lot of hard times. So I would say, somewhat counterintuitively, that going for a bigger idea can sometimes actually be easier. That was one thing.
The second thing is that one thing I imported into my own thinking from Silicon Valley, even though I wasn't building the company there, was their level of ambition, and bravado in some ways. With my first company, we would raise a little bit of money and be very British about it: we'd raise a couple of hundred thousand, half a million, a small raise, and then we'd have some milestone, and we'd wait to prove that out before we even dared talk about another raise, because we hadn't yet done what we'd said we would. You can do it like that, but it's very slow and it can be quite hard, and of course things are unpredictable, because you're doing a startup and sometimes things don't work, and then any small thing can kill you, because you haven't got much leeway. But that's not how the US companies that end up being successful do it. They're always raising; they go for big raises all the time, right? And I realised there's actually a self-fulfilling prophecy there, because if you've got all of that money and that runway and that leeway, you've got much more time to figure out how to correct your mistakes. So in the end you become more successful partly because you projected more success. It's this weird thing of just being a bit more ambitious and brave about what you're claiming to do and what you're doing. Not going in for hype, because that doesn't help, but projecting the ambition of what you're going to try to do. And then obviously it helps if you're starting with a big idea that really matters. So, yeah.
So we've talked a lot tonight about the very long term, but I'd love to get you thinking about the next 10 years at DeepMind. What do you hope will happen in those 10 years for the company? Yeah, so my big hope — and I think we're just approaching the point where we're going to be able to start doing this — is applying all these learning techniques, these learning algorithms we're building, to science itself. We use the scientific method to create these algorithms, but then we can reapply them back into other domains of science, like quantum chemistry or protein folding, or a whole bunch of areas we're looking at. And the thing I'm really looking forward to is the first big breakthrough in a really hard area of science — one that makes a huge difference to the scientific or medical community — that was in large part helped by an AI tool being used in tandem with experts in those areas. So in the next 10 years — my hope would be that a couple of years from now we start seeing that happen, and then within 10 years maybe it becomes quite routine, which I think would be unbelievably revolutionary. We talked right at the beginning of the conversation about games as an art, and I was really interested in your Desert Island Discs — but you and Christopher Nolan, friends? Inception is my favourite film, so this is very exciting for me. I just had to get it in, guys, what can I say? We've just been talking a little bit about science, and something that struck me a few years ago — I think Eric Schmidt did a great talk about this — is this English notion of arts versus science, rather than arts and science.
When you're thinking about DeepMind — you've just talked about the science element, the discoveries you hope will come out in the next 10 years — how much does the notion of art, the artistry of theory or of your way of working, influence your thinking? I think I'm one of those people who doesn't really see that boundary between arts and sciences. I think it's all about trying to understand the human condition and express it, and I think the best science is like that too — it's artistic. If you were to ask me what separates the good scientists from the truly great ones, it's not their technical ability, it's their creative ability. And we're lucky enough at DeepMind to have a lot of those people. They apply that creative flair, which you would more normally associate with artists, all the time in their scientific work — in coming up with new ideas for ways to build these algorithms and achieve the kinds of capabilities that we want. Another thing we do, which is maybe interesting for people here, is that I've tried to apply the scientific method to organisational design — so the design of the company itself. The scientific method is, in my opinion, one of the most powerful ideas humans have ever had, and we apply it in science, obviously, but I think it could be applied much more broadly than that. But if your question is, will AI be able to create — that's a side question. Jukedeck, the company here — Ed, one of the founders, is a Cambridge-educated composer who learnt to code; you might know them. Nolan's films, for example — a lot of it is around memory, artistry, the human mind. Yes, exactly. I'm sort of a fanboy of Christopher Nolan's, and I was so pleased to finally meet him last year.
I was really interested in all his films, and what was really interesting was that I felt my scientific loves were mirrored by the interests in his films. There's obviously Memento, which is great, which is about the hippocampus and memory — which is what I did for my PhD. And then Inception is about imagination. And obviously he's very interested in AI and all these things as well. So I just think he's a really unique film director in the way he approaches his topics, almost like research. So, I'd love to end on a few very short fill-in-the-blank questions, and then I'm going to hand over to you guys in a minute — so do be thinking of your questions. My next holiday will be to…? I'm not sure. I don't have many holidays, but I like the idea of going to Costa Rica. Apparently there are direct flights there now, so maybe I'll do that at Christmas. I always find inspiration in films, actually — any other recommendations beyond Christopher Nolan? Oh, gosh. Well, there are so many, it's hard to say. Christopher Nolan is one of my favourites — one of the people I get most inspired by. A book I recommend all the time is…? Recently, actually, it's been Sapiens by Yuval Harari. I think that's a really great book — there are so many interesting things in it. There aren't many times I read a book and come out with 20 new ideas I hadn't thought about before, and that book made me think like that. One thing I would say to founders is…? I would say: make sure you're starting your company for the right reason. It's quite fashionable to do startups, right? And I think that's great, but you really should find your passion and make sure that what you're doing is something you genuinely care about, and that it's genuinely important, rather than doing it just to make money or for some other ancillary reason. That's a lovely note to end on.
So we're going to hand over to the audience. A few ground rules, because we only have 15 minutes: please no comments, please keep questions short, and let's not do too much pitching — sorry to ask. Okay, the gentleman here in the purple T-shirt, and we've got some microphones coming in. Is your research helping you to understand the human mind more? And are there any insights about how people think that you might want to share? Yeah, that's a great question, actually. The way I think about it is that the journey we're on — trying to distil intelligence into an algorithmic construct — if we compare that, ultimately, to the way the human mind works, I think it's going to uncover a lot of the mysteries of our own minds. So we have a big neuroscience group at DeepMind — it's about 35 people. We do fMRI experiments, we collaborate with labs around the world, and it's led by Matt Botvinick, a brilliant professor from Princeton. We look at ideas that we want to test in machine learning and validate them: does the brain use that as well? If so, that's a good piece of evidence that we're on the right track. But also, some things have sparked ideas about what we should look at in the brain. For example, we've done this thing called meta-RL, or meta-learning — learning to learn. We built that in machine learning first, and now we're looking at it in the brain: which brain area is responsible for learning to learn? And we've got some good results on that coming out pretty soon, actually. Okay, then I've got one here. Simon Nadolski. In another talk, you mentioned that one of your ambitions is to make scientific inquiry — the process of doing science — more efficient than it otherwise is in academia. Could you share some tips on how to achieve that on a daily basis? Yes, so people often ask about that.
The problem is that it's not a list of three things you can just do — it's dozens of smaller things that add up. But I will tell you one thing. The problem with academia is that, obviously, it's got tons of smart people in it, but there's no coordination at all. It's like Brownian motion: everyone is exploring what they think is best. And of course lots of great, creative things come out of that, but the problem is that they're not adding to each other's knowledge as efficiently as they could be if there were a bit more coordination. It's also difficult if what you're trying to achieve is a very hard, ambitious problem that needs dozens of experts to come together with their complementary expertise — that's quite hard to organise in academia as well. So the way we do it at DeepMind is that we have a roadmap — a 20-year roadmap. It's more detailed in the near term, obviously, say a year out, but it runs 20 years. What we have on it is capabilities that we would like our algorithms to have, maybe informed by neuroscience or animal psychology or various other domains, and then benchmarks that we create to test those capabilities against, whether that's a game or some other kind of benchmark. But we don't prescribe the solutions to those problems. Everyone knows what the roadmap is, and everyone knows the ordering of the capabilities we would like, in the order that makes sense according to the roadmap — but we don't specify the solutions. I might have an idea for how we could, say, give our agents imagination, but my idea is worth no more than anyone else's idea. Anybody, even the most junior researcher, can put ideas into the melting pot, and then it's the ideas that survive the scientific method.
It's the ideas that survive the objective benchmarking that we end up converging around and putting more resources on. And by doing that, all the creativity is still bottom-up — that's how the best science is done, all the solutions come bottom-up — but there's a loose coordination from the top down, to do with the ordering of the tasks. And that part is missing in academia, generally speaking. Okay, let's go a little further back — on the back right-hand side, hand up still, yeah. Hi. So, you mentioned championing the UK ecosystem, and you'd already sold one company — so I was wondering why you sold to Google. Yeah, it's a good question. At the time — this was 2014 — we had a big decision to make, and we had a lot of options. We were clearly the world leaders in AI, and we had this amazing group of people, which was very valuable. We also had some really cool technology by then — we had the Atari programme, DQN — so we'd proven out the main part of our thesis. We had options with several companies that were interested, and our investors didn't want to sell; they wanted to continue. But the reason we did it in the end was that I felt we could accelerate our mission within Google, because what we had and what Google had were quite complementary. We needed their compute power — their data was secondary; actually, the compute power was more important — but also their resources to expand the team. We're 500 people now, with 300 PhDs. We couldn't have covered that expense, and it would have been difficult to concentrate on the research, without the backing of something like Google. So I felt that was the right thing to do. And then I met Larry — I report to Larry — and we basically got on very well.
And he convinced me that he thought about Google ultimately as an AI company — that's become public now — so I felt he appreciated the significance of AI as much as I did, understood it, and was as passionate about it as I was. And the final piece was that we negotiated that we would run it: I would basically run it like a CEO, totally separately; we would stay in London and build the London base here; and there would be no interference with the research programme. And that's all transpired. So I think it would have been difficult to make as fast progress as we have done without that backing behind us. Okay, the lady here — sorry, can you just wait for the microphone? Thank you. You spoke about understanding learning — learning how to learn, or meta-learning. With your team of 500 people, are you doing anything cool or different in how you help them learn? Yes, we are, actually, all the time. What I stress to people is that the whole of DeepMind is about learning — everything. Of course, most of the people we've got are used to doing a lot of learning, because they've normally gone all the way to PhD level and beyond, so they're good at that. The interesting thing is that you've got to keep your humbleness: even if you're a world expert in your area, there's always something more you can learn, something more you can improve on. My motto, which I've had for myself from very young, is the Japanese word kaizen, which roughly translates as a striving for continuous self-improvement. That's what I've tried to do in my life, and that's what I try to embody in the culture at DeepMind — we talk about it in our new-starter talks: everybody can learn something more. And you've got it all around you.
Look at this amazing environment you're in: if you want to know about something, there's probably a world expert in that topic sitting somewhere near you, whether it's machine learning or neuroscience. So go and take advantage of that — don't just stay siloed in the thing you know how to do well. And generally speaking, people really are open to that and embrace it. The other nice thing about academics — and a lot of our people are ex-academics — is that they love teaching and mentoring, generally speaking, especially our professor-type people, so they're already in that mode. Any time I go through our canteen, I see all these amazing conversations going on around whiteboards, people teaching each other and brainstorming incredible things. And of course, what we're building is a learning system, so the whole thing is — I call it a cathedral to the mind. Our whole building, our whole organisation, is like an ode to the mind. I love it. And right at the back, in the blue jacket, I think. Hi, Demis. You hinted at synthesising data, or synthetic data. Do you have any views on tackling problems in domains where there might not be a lot of data today? Yeah. So my main solution to that would be to build a simulation. There's a kind of path you can think of: if you have a paucity of real data, perhaps you've still got enough to build a handcrafted simulator of the system you're interested in, one that at least approximates some of the properties of the real system. It's not going to be perfectly accurate, obviously, for most systems. But if it's a close enough approximation, then potentially you can build an AI system that experiments in that simulation and learns from it.
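The simulate-then-learn path described here can be sketched in miniature. Everything in this example is invented for illustration — the dynamics, the model, and the function names are not DeepMind's; the point is only the shape of the idea: a crude handcrafted simulator stands in for a system we have little real data about, and a learner extracts the underlying relationship from unlimited simulated experience.

```python
import random

def crude_simulator(x, noise=0.1, rng=random):
    """Hand-crafted approximation of an unknown system: we assume it
    roughly doubles its input, plus some noise."""
    return 2.0 * x + rng.gauss(0.0, noise)

def fit_from_simulator(steps=5000, lr=0.01, seed=0):
    """Fit a one-parameter model y = w * x by stochastic gradient
    descent, drawing as much simulated data as we like."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        y = crude_simulator(x, rng=rng)
        pred = w * x
        # Gradient of the squared error 0.5*(pred - y)**2 w.r.t. w
        w -= lr * (pred - y) * x
    return w

w = fit_from_simulator()
print(round(w, 1))  # should recover roughly 2.0, the simulator's hidden slope
```

The learner never sees real data at all; how well its knowledge carries over depends entirely on how closely the simulator approximates the real system, which is exactly the caveat raised above.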
So I think that's a very powerful way to overcome a paucity of data in some domains. There's actually another interesting angle on this, which is that you could also create another AI system — what are called generative models — which learn from the data you do have in order to build a model that can automatically generate new data from the same distribution. So you could imagine, in future, having one AI system that generates the data — maybe it was trained on some real data, of which there wasn't very much — and then that data being fed into a second AI system that tries to understand it, or work out how to make decisions in that environment. So I think there's a lot of interesting research to be done there on the simulation side. And this gentleman right in the front. Thank you. Also on the fascinating subject of solving intelligence, and on meta-learning and creativity: do we not need some kind of universal algorithm to actually drive the learning process within deep learning? And in terms of that, would we not need to define, for example, what the purpose is? It's a very difficult problem — maybe it relates to physics, biology, thermodynamics, and literally Nobel-prize-winning work would need to be done as part of solving intelligence. Do you think you could be at the centre of that? Because it seems as though you've done a lot of fantastic and amazing work already. Well, we hope to be near the centre of that. I agree there's a lot of very hard work that needs to be done. If I understand your question correctly, in terms of the motivations of the system: we're experimenting at the moment — there's obviously the idea of external rewards from the environment, which is how reinforcement learning works.
So if you win a game of Go, you get some reinforcement, and that basically means you're more likely to take the actions that got you to that result the next time you're in that situation, or a similar one. But the problem is that the real world has very sparse rewards, or maybe the rewards aren't specified at all — so how do you decide what you should be optimising? Then you can start thinking about various solutions to that. You can think about what we call intrinsic rewards: internal rewards, rewarding things that are internal to the system rather than looking for external rewards. There are a lot of hypotheses about what those might be, and we're testing all of them, but some of the big candidates are physics-based things like information gain — information gain being intrinsically rewarding. And there's actually some evidence in biology that this is the case: novelty-seeking is rewarding, it releases dopamine, and obviously if you see something novel, you're gaining a lot of information. There are also various theories in thermodynamics which say that what your brain is trying to do is minimise the amount of free energy in the system. So there are quite a lot of interesting criteria you could try to maximise in the absence of any external rewards. And then the second question is: how do you set those objectives? I think that's a very tricky question. For games it's very easy — you maximise the score, or you win the game. For a lot of scientific problems it can also be quite easy: minimise energy, or get a certain property above a threshold. But as you get into more and more human-based systems, it gets messier and more complex as to how you specify the goal, or the values, of the system.
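The intrinsic-reward idea can be made concrete with a tiny sketch. The count-based bonus below is a toy stand-in for "information gain as reward" (a common proxy in the exploration literature), not DeepMind's method: rarely seen states pay a high internal reward, and revisits pay progressively less, with no external reward anywhere.

```python
import math
from collections import defaultdict

# Visit counts per state; the environment itself provides no reward at all.
visit_counts = defaultdict(int)

def novelty_reward(state):
    """Intrinsic reward: high for rarely seen states, decaying with visits —
    a crude proxy for the information gained by observing the state."""
    visit_counts[state] += 1
    return 1.0 / math.sqrt(visit_counts[state])

# First visit to a state is maximally rewarding...
r_first = novelty_reward("room_A")   # 1.0
# ...revisits become progressively less interesting...
r_again = novelty_reward("room_A")   # 1/sqrt(2), about 0.707
# ...and an unseen state restores the full bonus.
r_new = novelty_reward("room_B")     # 1.0

print(r_first, round(r_again, 3), r_new)
```

An agent maximising this signal is pushed to seek out states it has not seen before — a mechanical analogue of the novelty-seeking-releases-dopamine observation above.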
So I think we have to — again, this is becoming very cutting-edge research — actually understand how we should do that. So we only have time for two more questions, I'm afraid. The gentleman here with your hand up, and then the jacket. Hi, thanks. I have two questions for you. Just one, please. Did you seek validation of your thinking when you started DeepMind, and how did you do that — both technical validation and commercial validation? Yeah, that's a great question. Actually, I didn't — I mean, if I'd taken too much notice of validation, I would never have started it, because clearly I was going against the general consensus at the time. But it is important to take validation where you can. One thing I always say to entrepreneurs — something I learned from my first company — is that you want to be like five years ahead of your time, not 50 years ahead. Because if your idea is 50 years ahead of its time, even if it's the right idea and you've thought it through correctly, you're basically going to be in a world of pain in the real world. There are just going to be too many forces acting against you, from raising money to convincing people it's the right thing, to getting to commercialisation quickly enough. So what I was trying to validate at the time was not my ideas about how to build AI — that I was already doing in my scientific research, and I was quite convinced of it. What I was trying to calibrate was whether it was the right time: was I five years ahead, and not 50? And that's a pretty hard art in itself to figure out — you kind of try to read the runes. That's why I would talk to people who were experts in the various components I thought were required, to check whether it was true that we were right at the vanguard, but not hopelessly in front.
My example of that would be Charles Babbage, being 100 years too early with the computer. He was right, as you know, with the Difference Engine, but he died never seeing it built. So that's the validation that I took. But if you're looking for validation and the consensus is that everyone says "yeah, do it" — you're probably too late already, actually. Okay, final question of the evening. I'm liking your boldness — hands up, right. So, Demis, what's your priority, one to 10, on transfer learning? Because without addressing knowledge transfer, you can't really move towards solving intelligence, which is your goal, right? So what's your priority, one to 10? One to 10 of what? Transfer learning. Yeah, I think it's one of the most vital things. So transfer learning is when you transfer your knowledge from one domain to a totally new domain, and I think that's the key to general intelligence. It's the thing that we as humans do amazingly well — something I honed myself: I've played so many board games now that if someone wants to teach me a new board game, I wouldn't be coming to it fresh any more. Straight away, I could apply all these different heuristics I've learned from other games to this new one, even if I've never seen it before. And currently no machines can do that. I think the key to doing transfer learning is going to be conceptual knowledge — abstract knowledge — the acquisition of conceptual knowledge that is abstracted away from the perceptual details of where you learned it. Then you can go: okay, I'll apply it to this new domain.
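One toy way to picture "conceptual knowledge abstracted away from perceptual details" — purely an editorial illustration, with an invented rule and invented game names: a rule expressed over abstract quantities like velocity, rather than over raw pixels, applies unchanged in a second game with different perceptual surface.

```python
# Abstract concept learned once: "a ball reflects off a surface".
# It is stated over velocities, not over the pixels of any particular game.
def reflect(velocity, axis):
    """Conceptual rule: flip the velocity component along the hit axis."""
    vx, vy = velocity
    return (-vx, vy) if axis == "vertical" else (vx, -vy)

# Game A (Pong-like): the ball bounces off a vertical paddle.
print(reflect((3, 1), "vertical"))    # (-3, 1)

# Game B (Breakout-like): the same rule applies to a horizontal brick,
# with no retraining, because nothing in the rule is tied to game A's pixels.
print(reflect((3, 1), "horizontal"))  # (3, -1)
```

A system whose bounce knowledge lives only in pixel-level predictions has nothing like `reflect` to carry across; that gap is exactly what the Pong/Breakout example below is about.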
And the example I always give on that, from our Atari work, is the games Pong and Breakout. Both of those are bat-and-ball games, but they look really different. And our systems at the moment — if they learn to play Pong and then you give them Breakout, they won't learn Breakout any more quickly than if you had just given them Breakout straight away. And clearly that's wrong, right? Obviously they should be transferring the notion of Newtonian mechanics from one to the other. At the moment they don't, because all the knowledge is implicit: they've learned how to predict the ball moving around the screen in one game, but they haven't made that explicit conceptual knowledge, so there's no way of transferring it to the new perceptual domain — the system just regards it as a totally new problem. So I think that's one of the big challenges to be tackled on the way to general AI. A fascinating note to end on. Demis unfortunately has to go to a different engagement after this, so he can't stay around to talk. I just want to say a massive thank you — it's been a fascinating evening, and I've loved hearing more. Thank you so much for joining us. Thanks.
