MIT AGI: Artificial General Intelligence
Welcome to course 6.S099, Artificial General Intelligence. We will explore the nature of intelligence from, as much as possible, an engineering perspective. You will hear many voices; my voice will be that of an engineer. Our mission is to engineer intelligence. The MIT motto is mind and hand. What that means is we want to explore the fundamental science of what makes an intelligent system, the core concepts behind our understanding of what intelligence is, but we always want to ground it in the creation of intelligent systems. We always want to be in the now, in today, in understanding how we can build artificial intelligence systems today that make for a better world. That is the core for us here at MIT. First and foremost, we are scientists and engineers. Our goal is to engineer intelligence.

We want this approach to provide a balance to the very important but over-represented view of artificial intelligence that treats it as a black box: the view that asks, once we know how to create a human-level intelligence system, how will society be impacted? Will robots take over and kill everyone? Will we achieve a utopia that removes the need to do any of the messy jobs and makes us all extremely happy? Those kinds of beautiful philosophical questions are interesting to explore, but that's not what we're interested in doing. I believe that from an engineering perspective, we want to focus on the black box of AGI itself: start to build insights and intuitions about how we create systems that approach human-level intelligence.

I believe we're very far away from creating anything resembling human-level intelligence. However, the dimension of the metric behind the word "far" may not be time. In time, perhaps through a few breakthroughs, maybe even one breakthrough, everything can change. But as we stand now, our current methods, as we will explore through the various ideas, approaches, and guest speakers coming here over the next two weeks and beyond, our best intuitions and insights are not yet at the level of reaching human-level intelligence without a major leap, a breakthrough, a paradigm shift. So it's not constructive to consider the impact of artificial intelligence, to consider questions of safety and ethics, which are fundamental, extremely important questions, without also deeply considering the black box of the actual methods of human-level artificial intelligence. And that's what I hope this course can be: a first iteration, a first exploratory attempt to look at different approaches for how we can engineer intelligence. That's the role of MIT, its tradition of mind and hand: to consider the big picture, the future impact on society 10, 20, 30, 40 years out, but fundamentally grounded in what kinds of methods we have today, and what the limitations and possibilities are of achieving the black box of AGI.

In the future impact on society of creating artificial intelligence systems that become increasingly more intelligent, the fundamental disagreement lies at the very core of that black box: how hard is it to build an AGI system? How hard is it to create a human-level artificial intelligence system? That's the open question for all of us, from Josh Tenenbaum, to Andrej Karpathy, to folks from OpenAI, to Boston Dynamics, to the brilliant leaders in various fields of artificial intelligence who will come here.
That's the open question: how hard is it? There have been a lot of incredibly impressive results in deep learning, in neuroscience, in computational cognitive science, in robotics. But how far do we still have to go to AGI? That's the fundamental question we need to explore before we consider the future impact on society. And the goal for this class is to build intuition, one talk at a time, one project at a time: intuition about where we stand, about what the limitations of current approaches are, and about how we can close the gap.

There's a nice meme I caught on Twitter recently about the difference between the engineering approach, at its very simplest a Google intern typing a for loop that just does a grid search over parameters for a neural network (see the sketch at the end of this passage), and, on the right, the way the media would report that for loop: "Google AI created its own baby AI." I think it's easy for us to go one way or the other, but we'd like to do both. Our first goal is to avoid the pitfalls of black-box thinking, of futurism thinking, which results in hype that's detached from a scientific, engineering understanding of what the actual systems are doing. That's what the media often reports. Some of our speakers will explore that future in a rigorous way; it's still an important topic. Ray Kurzweil on Wednesday will explore this topic. Next week, the talks on AI safety and autonomous weapon systems will explore it: the future impact, 10, 20 years out. How do we design systems today that lead to safe systems tomorrow? It's still very important. But the reality is, a lot of us need to put a lot more emphasis on the left, on the for loops, on creating these systems.

At the same time, the second goal of what we're trying to do here is not to over-emphasize the silliness, the simplicity, the naive, basic nature of this for loop, in the same way as happened in the process of creating nuclear weapons before and during World War II. The idea that, as an engineer, as a scientist, "I am just a scientist," is also a flawed way of thinking. We have to consider the big-picture impact, the near-term negative consequences that are preventable, the low-hanging fruit that can be addressed through that very engineering process. We have to do both. And in this engineering approach, we always have to be cautious: just because our intuition, our best understanding of the capabilities of the modern systems that learn and act in this world, suggests they are limited, far from human-level intelligence, with a limited ability to learn and represent common-sense reasoning, the potentially exponential growth of technology and of these ideas (it could be argued, and Kurzweil will argue it) means that just around the corner may be a singularity, a breakthrough idea that will change everything. We have to be cautious of that. Moreover, we have to be cautious of the fact that every decade over the past century, our adoption of new technologies has gotten faster and faster. The time from a new technology's birth to its wide mass adoption has shortened and shortened. That means a new idea, the moment it drops into the world, can have widespread effects overnight. So even though the engineering approach is fundamentally cynical about artificial general intelligence, because every aspect of it is so difficult, we have to always remember that overnight everything can change.
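To make the left side of that meme concrete, here is a minimal sketch of the kind of for loop in question: a plain grid search over neural-network hyperparameters. The train_and_evaluate function is a hypothetical stub, not course code; plug in any model and dataset.

```python
# A minimal sketch of the "for loop" from the meme: exhaustive grid search
# over hyperparameters. train_and_evaluate is a hypothetical stub.
import itertools

def train_and_evaluate(lr, hidden_units, dropout):
    # Stand-in: train a network with these settings and
    # return validation accuracy.
    return 0.0

learning_rates = [1e-4, 1e-3, 1e-2]
hidden_sizes = [64, 128, 256]
dropouts = [0.0, 0.3, 0.5]

best_score, best_config = float("-inf"), None
for lr, h, p in itertools.product(learning_rates, hidden_sizes, dropouts):
    score = train_and_evaluate(lr, h, p)
    if score > best_score:
        best_score, best_config = score, (lr, h, p)

print("best config:", best_config, "best score:", best_score)
```

That is the entire "baby AI" of the headline: three nested loops and an argmax.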
We begin to approach this question from a deep learning perspective, from deep reinforcement learning, from brain simulation, computational cognitive science, computational neuroscience, cognitive architectures, robotics, legal perspectives, and autonomous weapon systems. As we begin to approach these questions, we need to start to build intuition: how far away are we from creating intelligent systems? The singularity here is that spark, that moment when we're truly surprised by the intelligence of the systems we create. I'd like to visualize it with an analogy: we're in a dark room, looking for a light switch, with no knowledge of where the light switch is. There are going to be people who say the room is small, we're right there, we'll find it in no time. The reality is we have very little knowledge, so we have to stumble around, feel our way around, to build the intuition of how far away we really are. And many of the speakers here will talk about how we define intelligence, how we can begin to see intelligence, and what the fundamental impacts of creating intelligent systems are.

I'd like to offer a positive reason for this little class and for these efforts, which have fascinated people throughout the centuries, of trying to create intelligent systems. There's something about human beings that wants, that craves, to explore, to uncover the mysteries of the universe; the desire to uncover the mysteries of the universe is fundamental in itself. Not for a purpose (though there's often an underlying purpose of money, of greed, of craving for power, and so on), but there seems to be an underlying desire to explore. In a nice little book, Exploration: A Very Short Introduction, Stewart Weaver writes: "For all the different forms it takes in different historical periods, for all the worthy and unworthy motives that lie behind it, exploration, travel for the sake of discovery and adventure, is a human compulsion, a human obsession even. It is a defining element of a distinctly human identity, and it will never rest at any frontier, whether terrestrial or extraterrestrial."

From 325 BCE, with a long 7,500-mile journey on the ocean toward the Arctic, to Christopher Columbus and his flawed trip, harshly criticized by modern scholarship, that ultimately paved the way (didn't discover, but paved the way) to the colonization of the Americas. To Darwin's voyage of the Beagle: "whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved." To the first venture into space by Yuri Gagarin, the first human in space in 1961, who said over the radio: "The Earth is blue. It is amazing." These are the words that I think drive our exploration in the sciences, in engineering, and today in AI. And the first walk on the moon, and now the desire to colonize Mars and beyond: that is where I see this desire to create intelligent systems. We'll talk about the positive and negative impact of AI on society, about the business case, the jobs lost and jobs gained, the diseases cured, the autonomous vehicles, the ethical questions, the safety of autonomous weapons, the misuse of AI in the financial markets.
Underneath it all, and many people have spoken about this, what drives myself and many in the community is the desire to explore, to uncover the mysteries of the universe. And I hope that you join me in that effort with the speakers who come here in the next two weeks and beyond.

The website for the course is agi.mit.edu. I am part of an amazing team, many of whom you know. agi@mit.edu is the email, and we're on Slack at deep-mit.slack. Registered MIT students: create an account on the website; submit five new links and vote on ten on Vote AI, which is an aggregator of information and material we've put together on the topic of AGI; and submit an entry to one of the three competition projects we have in this course. The projects, which I'll go over in a little bit, are DreamVision, Angel, Ethical Car, and the aggregator of material, Vote AI. We have incredible guest speakers; I'll go over them today. And as before, with the deep learning for self-driving cars course, we have shirts: free for people who attend in person, most likely at the last lecture, or you can order them online.

Okay. DreamVision: we take the Google DeepDream idea and explore the idea of creativity, since in some views of intelligence, the mark of intelligence is creativity. We explore this by using neural networks in interesting ways to visualize what the networks see, and in so doing create beautiful visualizations through time, through video: taking the ideas of DeepDream and combining them with multiple video streams to mix dream and reality. The competition, run through Mechanical Turk, is over who produces the most beautiful visualization. We provide code to generate these visualizations, ideas for how you can make them more and more beautiful, and instructions for how to submit to the competition.

Angel, the Artificial Neural Generator of Emotion and Language, is a different twist on the Turing test, where we don't use words; we use only emotions, the expression of those emotions. We use an agent, a face, customizable with 26 muscles, all of which can be controlled with an LSTM; we use a neural network to train the generation of emotion. In the competition, when you submit your code, you get 10 seconds to impress the viewer with these expressions of emotion. It's A/B testing: your goal is to impress the viewer enough that they choose your agent over another agent. The agents most loved will be declared the winners. In a twist, we will add human beings into the mix: we've created a system that maps our own human faces, myself and the TAs, so that we ourselves enter the competition and try to convince you to keep us as your friend. That's the Turing test.

Ethical Car builds on the ideas of the trolley problem and the Moral Machine work done here in the Media Lab, incredible, interesting work. We take a machine learning approach to it: we take what we developed for the deep reinforcement learning competition for 6.S094, DeepTraffic, and we add pedestrians into it. Stochastic, irrational, unpredictable pedestrians. And we add human life to the loss function, where there's a trade-off with getting from point A to point B. In DeepTraffic, the deep reinforcement learning competition, the goal was to go as fast as possible. Here, it's up to you to decide what your agent's goal is. There's a Pareto frontier trade-off between getting from point A to point B as fast as possible and hurting pedestrians. This is not an ethical question; it's an engineering question. And it's a serious one, because fundamentally, in creating autonomous vehicles that function in this world, we want them to get from point A to point B as quickly as possible. The United States government and insurance companies put a price tag on human life. We put that power in your hands in designing these agents, to ask the question: how can we create machine learning systems where the objective function, the loss function, has human life as part of it?
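Here is a minimal sketch, assuming hypothetical names and numbers rather than the actual DeepTraffic or Ethical Car code, of what an objective with human life in it might look like. The point is that value_of_life is an explicit, tunable design parameter.

```python
# A minimal sketch (hypothetical, not the course's competition code) of a
# loss that trades off travel time against expected harm to pedestrians.
def episode_loss(travel_time_s, collision_probs, value_of_life=1e6):
    """travel_time_s: seconds to get from point A to point B.
    collision_probs: per-timestep probability of striking a pedestrian."""
    expected_harm = sum(collision_probs)  # expected pedestrians hit
    return travel_time_s + value_of_life * expected_harm

# Two candidate driving policies on the same route:
fast_reckless = episode_loss(90.0, [0.001, 0.002])   # quick, some risk
slow_careful = episode_loss(140.0, [0.0, 0.0])       # slower, no risk
print(fast_reckless, slow_careful)
```

With value_of_life set high, the careful policy wins; set it near zero and the reckless one does. Choosing that number is exactly the power, and the question, that the project puts in your hands.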
And Vote AI is an aggregator of links, articles, papers, and videos on the topic of artificial general intelligence, where people vote articles up and down on quality and label their sentiment as positive or negative. We'd like to explore the different arguments for and against artificial general intelligence.

There is an incredible list of speakers, the best in their disciplines: from Josh Tenenbaum here at MIT, to Ray Kurzweil at Google, to Lisa Feldman Barrett and Nate Derbinsky from Northeastern University, Andrej Karpathy, Stephen Wolfram, Richard Moyes, Marc Raibert, Ilya Sutskever, and myself. I'd like to go through each of these speakers and talk about the perspectives they bring, to try to see the approaches and ideas they bring to the table. In most cases, they're not interested in discussing the future impact on society without grounding it in their expertise, in the actual engineering, in creating these intelligent systems.

Josh Tenenbaum, tomorrow: Josh is a computational cognitive science expert, a professor here at MIT. He will talk about how we can create common-sense understanding: systems that see a world of physical objects and their interactions, our own possibilities to act, and interaction with others. Intuitive physics: how do we build into systems the intuitive physics of the world, more than just deep learning memorization engines that take patterns and learn, in a supervised way, to map those patterns to classifications? How do systems actually begin to understand the intuitive, common-sense physics of the world? And rapid model-based learning: learning from nothing, learning from very little, just as we do as children, just as we do as human beings, who often need only one example to learn a concept. How do we create systems that learn from very few examples, sometimes a single example, and integrate ideas from various disciplines: neural networks, of course, but also probabilistic generative models and symbolic processing architectures? It's going to be incredible.

From a different area of the field, another incredible thinker and speaker is Ray Kurzweil. He'll be here on Wednesday at 1 p.m., and he will give a whirlwind discussion of where we stand in creating intelligent systems: how we see natural intelligence, our own human intelligence, how we define it, how we understand it, and how that transfers to the increasing, exponential growth of the development of artificial general intelligence.

Something I'm very excited about is Lisa Feldman Barrett coming here on Thursday. She's written a book, How Emotions Are Made.
She argues that emotions are constructed: that there is a distinction, a detachment, between what we feel in our bodies, the physical state of our bodies, and the expression of emotion, from the body to the contextually grounded expression of that emotion on the face. Now, why is a psychologist speaking in a fundamentally engineering, computer science course on AGI? Because if emotions are constructed in the way she argues, and she'll systematically break it down, that means that as human beings we're learning societal norms of how to express emotion; emotional intelligence is learned. Which means we can have machines learn it too. Just as it's a human learning problem, it's a machine learning problem. In a little bit of a twist, she asked that instead of giving a talk, I have a conversation with her. So it's going to be a little challenging and fun; she's great, and I'm looking forward to it.

We'll explore different ways that emotion can be expressed, through video, through audio, through the Angel project that I mentioned. There's been work in reenactment: mapping face to face, mapping different emotions onto video that was previously recorded. If you can imagine, that means we can take emotions that we've created, the kind of emotion generation we've been discussing, and remap them onto previous video. That's one way to do it: taking raw human data that we already have, where the underlying fundamentals are human, but the surface appearance, the representation of emotion, visual or auditory, is generated by computer. It could be in embodied form, like with Sophia. [Sophia, in a video clip:] "I know I'm not like humans. I think I will be similar in a lot of ways, but different in a few others. It will take a long time for robots to develop complex emotions, and possibly robots can be built without the more problematic emotions, like rage, jealousy, hatred, and so on. It might be possible to make them more ethical than humans, so I think it will be a good partnership, where one mind complements the other."

Very important to note, for those captivated by Sophia in the press or who have seen these videos: Sophia is an art exhibit. She's not a strong natural language processing system. This is not an AGI system. But it's a beautiful visualization of embodiment, of how easy it is to trick us human beings into believing there's intelligence underlying something: the physical embodiment and the emotional expression, with some degree of humor, some degree of wit and intelligence, is enough to captivate us. So that's an argument for not creating intelligence from scratch, but having machines produce only the very surface, the display of emotion, the generation, the mapping of the visual and auditory elements, where underneath it is really trivial technology that fundamentally relies on humans, as in Sophia's case.

In its simplest form, with Angel, we remove all elements of, let's say, attractive appearance from the agent. We keep it to the simplest muscle characteristics of the face, and we see, with 26 muscles controlled by a neural network through time, a recurrent neural network, an LSTM, how we can explore the generation of emotion. Can we get this thing to work? This is an open question for us too; we just created the system, and we don't know if we can. Can we get it to make us feel something, by watching it express its feelings? Can it become human before our eyes? Can it learn, by competing against other agents in A/B testing on Mechanical Turk? Can the winners be convincing enough to make us feel entertained? Pity? Love? Maybe some of you will fall in love with Angel here.
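As a minimal sketch of the Angel setup, with hypothetical shapes and a random driving signal (the real project code is on the course site), here is a recurrent network that emits activations for 26 facial muscles at every timestep.

```python
# A minimal sketch, assuming hypothetical input/output shapes, of an LSTM
# that generates 26 facial-muscle activations per timestep, as in Angel.
import torch
import torch.nn as nn

class EmotionGenerator(nn.Module):
    def __init__(self, latent_dim=16, hidden_dim=64, n_muscles=26):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_muscles)

    def forward(self, z):
        # z: (batch, timesteps, latent_dim) driving signal
        h, _ = self.lstm(z)
        # sigmoid keeps each muscle activation in [0, 1]
        return torch.sigmoid(self.head(h))

gen = EmotionGenerator()
z = torch.randn(1, 100, 16)   # 100 timesteps of animation
muscles = gen(z)              # (1, 100, 26) muscle activations over time
print(muscles.shape)
```

The generation is the easy part; training it against human preferences through A/B testing on Mechanical Turk is the open part.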
Nate Derbinsky, on Friday, will talk about cognitive modeling architectures: can we model cognition in some kind of systematic way, to build intuition about how complicated cognition is?

Andrej Karpathy, famous for, among other things, being the state-of-the-art human baseline on the ImageNet challenge, representing roughly 95% accuracy performance. He's now at Tesla, and he will talk about the role, the limitations, and the possibilities of deep learning. He'll talk, as I have in the past few weeks and throughout, about our misunderstanding, our flawed intuition, of which problems in deep learning are difficult and which are easy. And about the power of representation learning: the ability of neural networks to form deeper and deeper representations of the underlying raw data, taking complex information that's hard to make sense of and converting it into useful, actionable knowledge. From a certain lens, in a certain problem space, that can be defined as understanding of complex information: understanding is ultimately taking complex information and reducing it to its simple, essential elements.

Representation learning, in the trivial case here: drawing a straight line to separate the blue and the red curves is impossible in the original input space on the left. The learning task for deep neural networks, in this formulation, is to construct a topology, a transformed space, in which there exists a straight line that accurately classifies blue versus red. That's the problem. For a simple blue and red curve it seems trivial, but this works in the general case, for arbitrary nonlinear, high-dimensional input spaces. The ability to automatically learn features, to learn hierarchical representations of raw sensory data, means you can do a lot more with data, which means you can expand further and further to create intelligent systems that operate successfully on real-world data. Because an arbitrary number of features can be automatically determined, deep learning lets you learn a lot of things about a pretty complex world. Unfortunately, there needs to be a lot of supervised data; there still needs to be a lot of human input.
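A minimal sketch of that idea on the smallest possible example, assuming nothing from the course code: XOR is not linearly separable in the raw input space, but a tiny network trained with backpropagation learns a hidden representation in which the final layer, a straight line, separates the classes.

```python
# A minimal sketch of representation learning: learn a hidden space in which
# XOR becomes linearly separable, trained with plain backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR labels

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)      # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)      # hidden -> output (a line)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for _ in range(20000):
    H = np.tanh(X @ W1 + b1)                         # learned representation
    p = sigmoid(H @ W2 + b2)                         # linear classifier on H
    # backpropagation: cross-entropy gradient through both layers
    d_out = (p - y) / len(X)
    dW2 = H.T @ d_out; db2 = d_out.sum(0)
    d_hid = (d_out @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ d_hid; db1 = d_hid.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

print(np.round(p, 3))  # should approach [0, 1, 1, 0]
```

The output layer never stops being linear; all the work happens in reshaping the space beneath it.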
Andrej and others, and Josh, will talk about the differences between our human brain, our biological neural network, and artificial neural networks. The full human brain has on the order of 100 billion neurons and 1,000 trillion synapses; the biggest artificial neural networks out there are much smaller, around 60 million synapses (parameters) for ResNet-152. The biggest differences: the human brain has several orders of magnitude more synapses, its topology is much more complex and chaotic, and it operates asynchronously. The learning algorithm of artificial neural networks, backpropagation, is trivial and constrained: essentially optimization over a clearly defined loss function, using backpropagation from the output to adjust the weights of the network. The learning algorithm of the human brain is mostly unknown, but it is certainly much more complicated than backpropagation. On power consumption, the human brain is far more efficient than artificial neural networks. And the supervised learning process for training artificial neural networks is artificial and constrained: you have a training stage and an evaluation stage, and once the network is trained, there's no clean way to continue training it, or the ways that exist are inefficient. It's not designed to do online learning naturally, to always be learning; it's designed to learn and then be applied. Obviously, our human brains are always learning. But the beautiful, fascinating thing is that both are large-scale distributed computation systems. Neither boils down to a single compute unit: the computation is distributed, and the backpropagation learning process can be massively parallelized on GPUs. The underlying computational unit, a neuron, is trivial, but neurons can be stacked together to form feedforward or recurrent neural networks, representing both spatial information, as with images, and temporal information, as with audio, speech, text, and sequences of images and video. Mapping one-to-one, one-to-many, many-to-one: mapping any kind of structured vector or time-series data as input to any kind of classification, regression, sequence, captioning, video, or audio output. Learning in a general sense, but in a domain that's precisely defined for the supervised training process.

In the deep learning case, we can think of the supervised methods, where humans have to annotate the data, as memorization of the data. We can think of the exciting, growing field of semi-supervised learning, where most of the annotation is done automatically, through generative adversarial networks, through significant, clever data augmentation, or through simulation. Then there's reinforcement learning, where the labels, the rewards, are extremely sparse and come rarely, and so the system has to figure out how to operate in the world with very little human input, very little human data. We can think of that as reasoning, because you take very little information from our teachers, the humans, and generalize it to reason about the world. And finally, unsupervised learning, the excitement, the promise, the hope of the community: you could think of that as understanding, because it is ultimately taking data with very little or no human input and forming representations of that data, which is how we think of understanding: making sense of the world without strict instructions on how to make sense of it. A process of discovering information, maybe discovering new ideas, new ways to simplify the world, to represent the world, so that you can do new things with it. The "new" is the key element there: understanding.
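A minimal sketch of that sparse-label regime, on a hypothetical toy corridor where the only feedback is a single reward at the far end, learned with tabular Q-learning:

```python
# A minimal sketch (hypothetical environment) of learning from sparse reward:
# tabular Q-learning on a corridor where only the last state pays anything.
import random

N_STATES, GOAL = 10, 9                       # states 0..9, reward only at 9
Q = [[0.0, 0.0] for _ in range(N_STATES)]    # actions: 0 = left, 1 = right

def step(s, a):
    s2 = max(0, min(GOAL, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for episode in range(300):
    s, done = 0, False
    while not done:
        a = random.randrange(2)              # explore at random (off-policy)
        s2, r, done = step(s, a)
        # temporal-difference update: propagate the rare reward backward
        Q[s][a] += 0.1 * (r + 0.95 * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q[:GOAL]])  # values rise toward the goal
```

Almost every step carries zero feedback, yet the value estimates organize themselves toward the one state that pays: the "very little human input" framing in miniature.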
And Andrej and Ilya and others will talk about not just the past but the future of deep learning. Where is it going? Is it overhyped or underhyped? What is the future? Will the compute of CPUs, GPUs, and ASICs continue to grow? Will the breakthroughs of Moore's law, in its various forms of massive parallelization, continue? Will the large datasets, now with tens of millions of images, grow to billions and trillions? Will the algorithms improve? Is there a groundbreaking idea still coming, like Geoff Hinton's capsule networks? Are there fundamental architectural changes to neural networks that we can come up with that will change everything: that will ease the learning process, make it more efficient, or represent higher and higher orders of information so that knowledge can transfer between domains? And the software architectures that support all this, from TensorFlow to PyTorch: I would say last year and this year are the years of deep learning frameworks. Those will certainly keep coming in their various forms. And the financial backing is growing and growing.

The open challenges for deep learning: really, a lot of this course is connected to deep learning, because that's where a lot of the recent breakthroughs that inspire us to think about intelligent systems come from. But the challenges are many. The need for the ability to transfer between different domains, as in reinforcement learning and robotics. The need for huge data, and for more efficient learning: we still need supervised data, and the ability to learn in an unsupervised way remains a huge problem. Learning is not fully automated: a significant degree of hyperparameter tuning is still necessary. The reward functions, the loss functions, are ultimately defined by humans, and are therefore deeply flawed when we release these systems into the real world, where there is no ground truth for the test set, and the goal isn't achieving high accuracy on a trivial image classification, localization, or detection problem, but rather having an autonomous vehicle that doesn't kill pedestrians, or an industrial robot that operates jointly with other human beings. And all the edge cases that come up: how do deep learning methods, how do machine learning methods, generalize over the edge cases, the weird stuff that happens in the real world? Those are the open problems.

Stephen Wolfram will be here on Monday evening at 7 p.m. He's done a lot of amazing things. From his recent interest in knowledge-based programming: Wolfram Alpha, I think, is the fuel for most middle school and high school students now taking calculus for the first time, who probably go to Wolfram Alpha to answer their homework questions. But more seriously, there is a deeply connected graph of knowledge being built there with Wolfram Alpha and the Wolfram Language that he will explore in terms of language. An interesting thing: he was part of the team on Arrival that worked on the language. For those of you familiar with the movie, in Arrival an alien species speaks with us humans through a very interesting, beautiful, complicated language, and he was brought in, as a representative human, to interpret that language, just as happens in the movie, representing that role in real life. He and his son Christopher used their skills to analyze this language. That process is extremely interesting.
I hope he talks about it, and about his background with Mathematica and A New Kind of Science. Another set of ideas that has inspired people in terms of creating intelligent systems is the idea that from very simple things, very simple rules, extremely complex patterns can emerge. His work with cellular automata did just that, taking extremely simple mathematical constructs. Cellular automata are grids of computational units that switch on and off according to some predefined rule, operating only locally, based on their local neighborhood. And somehow, based on different kinds of rules, different patterns emerge. Here's a three-dimensional cellular automaton with a simple rule: starting from nothing, from a single cell, it grows in really interesting, complex ways. This emergent complexity is inspiring. It's the same kind of thing that inspires us about neural networks: you can take a simple computational unit, and when such units are combined in arbitrary ways, they can form complex representations. You can see knowledge formation in the same kind of way: simplicity, at a massive distributed scale, resulting in complexity.
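A minimal one-dimensional sketch of this, using Wolfram's elementary cellular automaton rule 30: one local rule over three neighbors, applied everywhere in parallel, grows intricate global structure from a single live cell.

```python
# A minimal sketch of emergent complexity: elementary cellular automaton
# rule 30. Each cell's next state depends only on its three-cell neighborhood.
RULE = 30
WIDTH, STEPS = 63, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1               # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # read the new state out of the rule number's bits, one neighborhood at a time
    row = [(RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```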
Next Tuesday, Richard Moyes of Article 36, coming all the way from the UK for us, will talk about his work with autonomous weapon systems. He also works on nuclear weapons, but primarily autonomous weapon systems, and the legal, policy, and technological aspects of banning these weapons. There's been a lot of agreement about the safety hazards of autonomous systems that make decisions to kill a human being.

Marc Raibert, CEO of Boston Dynamics, previously, a long time ago, faculty here at MIT, will bring robots and talk to us about his work on robots in the real world. He's doing a lot of exciting work on humanoid robotics and on any kind of robot operating on legs. It's incredible, extremely exciting work, and it gets to explore how difficult it is to build robot systems that operate in the real world, from both the control aspect and the way the final result is perceived by our society. It's very interesting to see what it inspires in us when intelligence in robotics is embodied: fear, excitement, hope, concern, and all of the above.

Ilya Sutskever is an expert in many aspects of machine learning and a co-founder of OpenAI. He'll talk about the different aspects of game playing they've been exploring, using deep reinforcement learning to play arcade games. On the DeepMind side, deep reinforcement learning was used to beat the best in the world at the game of Go, and in 2017 came the big, fascinating breakthrough achieved by that team with AlphaGo Zero: training an agent through self-play, playing against itself, not on expert games, truly from scratch, learning to beat the best in the world, including the previous iteration of AlphaGo. We'll explore what aspects of the stack of intelligent robotic systems, intelligent agents, can be learned in this way.

Deep learning, the supervised-learning memorization approach, covers the sensor data, feature extraction, and representation learning aspects of this stack: taking the sensor data from cameras, lidar, or audio, extracting the features, and forming higher-order representations. Those representations are then used to accomplish some kind of classification or regression task, figuring out from the representation what is going on in the raw sensory data, and then combining that knowledge to reason about it. And finally, in the robotic domains, as with humanoid robotics, industrial robotics, and autonomous vehicles, taking it all together and actually acting in this world with effectors. The open question is: how much of this AI stack can be learned? That's something for us to discuss and think about, and Ilya will touch on it. With deep reinforcement learning we can certainly learn representations and perform classification at the state of the art, better than humans on ImageNet classification and on segmentation tasks. What deep learning has shown is that the part highlighted there in the red box can be done end to end: raw sensory data in, through to knowledge, to the output, to the classification. Can we begin to reason? That is the open question. With the knowledge-based programming that Stephen Wolfram will talk about, can we begin to take these automatically generated higher-order representations and combine them to form knowledge bases, aggregate graphs of ideas, that can then be used to reason? And can we then combine them to act in the world, whether in simulation with arcade games, in simulations of autonomous vehicles or robotic systems, or actually in the physical world with robots moving about? Can that end-to-end path, from raw sensory data to action, be learned? That's the open question for artificial general intelligence and for this class: can this entire process be end to end? Can we build systems, and how do we do it, that achieve this process end to end, the same way humans do: born into a raw sensory environment, taking in very little instruction, and learning to operate successfully under arbitrary constraints, toward arbitrary goals?
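As a minimal sketch of the plumbing of such a stack, with an architecture and shapes that are assumptions rather than any system from the lecture, here is one differentiable network from raw pixels to action scores.

```python
# A minimal sketch, under assumed shapes, of an end-to-end stack: one
# differentiable network from raw pixels to action scores. Only the plumbing;
# whether it can be trained to act robustly is the open question.
import torch
import torch.nn as nn

class PixelsToAction(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.encoder = nn.Sequential(               # sensing -> features
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten())
        self.policy = nn.Sequential(                # features -> action scores
            nn.Linear(32 * 13 * 13, 128), nn.ReLU(),
            nn.Linear(128, n_actions))

    def forward(self, frames):                      # frames: (batch, 3, 64, 64)
        return self.policy(self.encoder(frames))

net = PixelsToAction()
scores = net(torch.randn(1, 3, 64, 64))
print(scores.shape)                                 # (1, 4) action scores
```

The plumbing is easy; whether gradients flowing through it can produce robust real-world behavior is exactly the question just posed.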
To do all this, we have lectures, we have three projects, and we have guest speakers from various disciplines. I hope that all these voices will be heard and will feed a conversation about artificial intelligence, its positive and its concerning effects on society, and how we move forward from an engineering approach. The topics will be deep learning, deep reinforcement learning, cognitive modeling, computational cognitive science, emotion creation, knowledge-based programming, AI safety with autonomous weapon systems, and personal robotics with human-centered artificial intelligence. That's the first two weeks of this class; that's the part where, if you're a registered student, you need to submit the projects, and when we all meet here every night with the incredible speakers. But this will continue: we already have several speakers scheduled over the next couple of months, yet to be announced, and they're incredible; we have conversations on video; we have new projects. I hope this continues throughout 2018, on the topics of AI ethics and bias, where there's a lot of incredible work; we have a speaker coming on the topic of how we create artificial intelligence systems that do not discriminate, that do not form the kinds of biases that we humans do in this world, systems that operate under social norms but reason beyond the flawed aspects of those norms and biases. Creativity, as with the DreamVision project and beyond: there's so much exciting work in using machine learning methods to create beautiful art and music. Brain simulation, neuroscience, computational neuroscience: shockingly, in the first two weeks we don't have a computational neuroscience speaker, which is a fascinating perspective. Brain simulation, and computational neuroscience in general, is a fascinating approach that starts from how the actual brain works, to get the perspective of how our brain works and how we can create something that mimics, that resembles, the fundamentals of what makes our brain intelligent. And finally, the Turing test: the traditional test of intelligence defined by Alan Turing was grounded in natural language processing, creating chatbots that impress us and trick us into thinking they're human. We will have a project and a speaker on natural language processing in March.

With that, I'd like to thank you for coming today, and I look forward to seeing your submissions for the three projects. Thank you very much.