Deep Learning
This is going to be the first of a few lectures that will be on Zoom. Today we'll be talking about what neural networks learn. We'll talk about how neural networks are universal approximators.
The first lecture on GANs was the first lecture of the semester on generative models. So far we have seen discriminative models, which model the conditional distribution of the labels given the data. A discriminative model aims to find a decision boundary that separates one set of data from another. In generative models, by contrast, the aim is to model the distribution of the data itself, not just to find the boundary.
Now we're speaking of how to use neural networks as generative models to model the distribution of any data so that we can draw samples from it.
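The contrast above can be made concrete with a toy example. The following NumPy sketch (all class means, variances, and sizes are illustrative choices, not values from the lecture) shows the discriminative view, which only needs a boundary between two classes, next to the generative view, which fits the distribution of a class and then draws new samples from it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: two classes drawn from different Gaussians
# (the means, scales, and sample sizes here are made up for illustration).
x0 = rng.normal(loc=-2.0, scale=1.0, size=500)   # class 0
x1 = rng.normal(loc=+2.0, scale=1.0, size=500)   # class 1

# Discriminative view: all we need is a decision boundary.
# For two equal-variance Gaussians the optimal boundary is the
# midpoint of the class means.
boundary = (x0.mean() + x1.mean()) / 2.0
accuracy = ((x1 > boundary).mean() + (x0 <= boundary).mean()) / 2.0

# Generative view: model the full distribution of a class,
# then draw brand-new samples from that model.
mu1, sigma1 = x1.mean(), x1.std()
new_samples = rng.normal(loc=mu1, scale=sigma1, size=10)
```

The discriminative half never learns what either class looks like, only where they separate; the generative half can produce new data that resembles class 1.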
Today we're going to be talking about what neural networks learn. So what we've seen so far is that neural networks are universal approximators. They can model any Boolean, categorical, or real-valued function.
We're going to start our new sequence of lectures on neural networks for modeling distributions. So what we've seen so far is that neural networks are universal approximators. They can model Boolean functions, classification functions.
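As a small illustration of modeling a Boolean function, here is a minimal sketch (not taken from the lecture) of a one-hidden-layer ReLU network whose weights are chosen by hand to compute XOR, a function no single linear boundary can represent:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hand-picked weights for a one-hidden-layer network computing XOR.
# The hidden units compute relu(x1 + x2) and relu(x1 + x2 - 1);
# the output w2 @ h equals 1 exactly when the two inputs differ.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])

def xor_net(x):
    return w2 @ relu(W1 @ x + b1)

inputs = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
outputs = [float(xor_net(x)) for x in inputs]  # 0, 1, 1, 0
```

In practice these weights would be learned from data; fixing them by hand just makes it easy to verify that even a tiny network can represent this Boolean function.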
This is the first part of the transformers recitation, where we will code from scratch. This one will be more about the basics of transformers and how to code them, so that in future recitations we can just go over what researchers in the community have done and what the architectures are.
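As a preview of the kind of building block the recitation codes up, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of a transformer (the shapes and random inputs are illustrative assumptions, not taken from the recitation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights

# Illustrative shapes: 4 query positions, 6 key/value positions, dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each output row is a weighted average of the value vectors, with weights given by how well the query matches each key; multi-head attention repeats this in several learned subspaces.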