Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I get a lot of messages from you Fellow Scholars saying that you would like to get started in machine learning and are looking for materials. Words fail to describe how great it feels that the series inspires many of you to start your […]
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A neural network is a very loose model of the human brain that we can program in a computer. Or, it is perhaps more appropriate to say that it is inspired by our knowledge of the inner workings of the human brain. Now, let's note that […]
Hey everyone, welcome to this week’s Shreptech. I am unbelievably excited about today’s guest. We have Joelle here, who leads our Montreal lab, a large part of our Facebook AI Research lab. She’s also a professor at McGill University, and she’s going to talk to us about AI. Joelle, welcome to Shreptech. Thank you. […]
Algorithms have been helping people type faster for years. For instance, if you type “best restaurants” into a search engine, you’ll see automatic suggestions like “near me.” These suggestions are usually based on top search trends or popular phrases. Now, modern AI systems help you complete entire sentences that are specific to your conversation as […]
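The trend-based suggestions described above can be sketched very simply: count how often each full query appears in a log and complete the user's prefix with the most frequent matches. This is a minimal illustration with made-up query data, not the mechanism any real search engine uses.

```python
from collections import Counter

# Hypothetical query log (illustrative data only).
query_log = [
    "best restaurants near me",
    "best restaurants near me",
    "best restaurants in paris",
    "best restaurants open now",
    "best recipes for dinner",
]

def suggest(prefix, log, k=2):
    """Rank completions of `prefix` by how often they appear in the log."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(suggest("best restaurants", query_log))
# The most frequent matching query comes first.
```

Sentence-level completion, as the excerpt notes, goes further: instead of a frequency table over whole queries, a language model conditions on the conversation so far.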
Tell me about deep learning. Maybe it gets slightly mathy, but we can just go through an example to see what the pieces are. One typical example is if someone wants to estimate automatically what the price of a property is. So there are certain variables that are important for a property, you know, […]
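The property-price example above is the classic regression setup: predict a price from input variables. As a minimal sketch (with made-up numbers and a single variable, floor area), ordinary least squares fits a line to the data:

```python
# Toy property data (illustrative only): price grows with floor area.
areas  = [50.0, 70.0, 90.0, 110.0]     # square meters
prices = [150.0, 210.0, 270.0, 330.0]  # in thousands

n = len(areas)
mean_x = sum(areas) / n
mean_y = sum(prices) / n

# Ordinary least squares for one feature: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, prices)) \
        / sum((x - mean_x) ** 2 for x in areas)
intercept = mean_y - slope * mean_x

# Predict the price of an 80 m^2 property.
print(slope * 80 + intercept)
```

A deep learning model replaces the single straight line with a stack of learned nonlinear transformations, but the idea of fitting parameters to example data is the same.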
We’re going to go through the backpropagation example, which I went through very briefly in the last lecture, talk about nearest neighbors, which I covered within one minute, and also we’re going to talk about scikit-learn, which is this really useful tool for doing machine learning and might be useful for your final […]
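Since the lecture pairs nearest neighbors with scikit-learn, here is a minimal sketch (toy data made up for illustration, not the lecture's code) of a 1-nearest-neighbor classifier using scikit-learn's `KNeighborsClassifier`:

```python
from sklearn.neighbors import KNeighborsClassifier

# Four 2-D training points in two well-separated classes (toy data).
X = [[0, 0], [0, 1], [4, 4], [5, 4]]
y = [0, 0, 1, 1]

# With n_neighbors=1, each query is labeled by its single closest training point.
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)

print(clf.predict([[1, 0], [4, 5]]))
```

The first query point sits near the class-0 cluster and the second near the class-1 cluster, so the nearest training point decides each label.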
Okay, welcome back everyone. This is the second lecture on machine learning. So just before we get started, a couple of announcements. Homework 1, Foundations, is due tomorrow at 11 p.m. Note that it’s 11 p.m., not 11:59. And please, I would recommend everyone try to do a test submission early, right? It would […]
Stanford CS221 AI Lecture 2: Machine Learning 1 – Linear Classifiers, SGD (Stochastic Gradient Descent)
Okay, so let’s get started with the actual technical content. So remember from last time, we gave an overview of the class. We talked about different types of models that we’re going to explore. Reflex models, state-based models, variable-based models, and logic models, which we’ll see throughout the course. But underlying all of this is […]
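The lecture's title names linear classifiers trained with stochastic gradient descent. As a minimal sketch of that idea (not the lecture's actual code; toy data and learning rate are made up), SGD on the hinge loss picks one example at a time and nudges the weights whenever the margin is too small:

```python
import random

# Toy data: label +1 when x1 + x2 > 0, else -1 (linearly separable).
data = [([1.0, 2.0], 1), ([2.0, 0.5], 1),
        ([-1.0, -1.5], -1), ([-2.0, -0.5], -1)]

w = [0.0, 0.0]   # weight vector of the linear classifier
eta = 0.1        # step size (arbitrary choice for this sketch)
random.seed(0)

for _ in range(100):
    x, y = random.choice(data)   # "stochastic": one random example per step
    margin = y * (w[0] * x[0] + w[1] * x[1])
    if margin < 1:               # hinge loss is active: take a gradient step
        w[0] += eta * y * x[0]
        w[1] += eta * y * x[1]

print(w)  # weights end up pointing roughly along (1, 1)
```

After training, the sign of the dot product w·x classifies each point, which is the "reflex model" view: a single fixed computation maps input to prediction.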
All right, let’s get started. Please try to have a seat if you can find a seat. Let’s get the show on the road. Welcome everyone to CS221. This is Artificial Intelligence. If you’re new to Stanford, welcome to Stanford. First, let’s do some introductions. I’m Percy. I’m going to be one of your instructors […]
Welcome, everyone, to 2019. It’s really good to see everybody here; you made it in the cold. This is 6.S094: Deep Learning for Self-Driving Cars. It is part of a series of courses on deep learning that we’re running throughout this month. The website where you can get all the content, the videos, the lectures, and […]
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. Okay, welcome back. You know, it’s that time of term when […]
So today we have Nate Derbinski. He’s a professor at Northeastern University working on various aspects of computational agents that exhibit human-level intelligence. Please give Nate a warm welcome. [Applause] Thanks a lot, and thanks for having me here. So the title that was on the page was cognitive modeling. I’ll kind of get […]
We have Lisa Feldman Barrett with us. She is a University Distinguished Professor of Psychology at Northeastern University, Director of the Interdisciplinary Affective Science Laboratory, and author of the new, amazing book How Emotions Are Made: The Secret Life of the Brain. She studies human emotion from social, psychological, cognitive science, and neuroscience perspectives.
Today we have Stephen Wolfram. Wow, of course, I didn’t even get started and you’re already clapping. In his book A New Kind of Science, he has explored and revealed the power, beauty, and complexity of cellular automata, simple computational systems from which incredible complexity can emerge.
Today we have Ray Kurzweil. He is one of the world’s leading inventors, thinkers, and futurists, with a 30-year track record of accurate predictions, called “the restless genius” by The Wall Street Journal and “the ultimate thinking machine” by Forbes magazine.
Today we have Joshua Tenenbaum. He’s a professor here at MIT, leading the Computational Cognitive Science group. Among many other topics in cognition and intelligence, he is fascinated with the question of how human beings learn so much from so little, and how these insights can help build AI systems that learn much more efficiently from data. So please give Joshua a warm welcome.
Course 6.S099 will explore the nature of intelligence from an engineering perspective as much as possible. My voice will be that of an engineer. Our mission is to engineer intelligence.