This is going to be the first of a few lectures that will be on Zoom. Today we'll be talking about what neural networks learn. We'll talk about how neural networks are universal approximators.
The first lecture on GANs was the first lecture of the semester on generative models. We have seen discriminative models, which model the conditional distribution: a discriminative model aims to find a decision boundary that separates one set of data from another. In generative models, by contrast, the aim is to model the distribution of the data itself, not just to find a boundary.
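To make the contrast concrete, here is a minimal toy sketch (not from the lecture; the data and the Gaussian-per-class model are illustrative assumptions): a discriminative approach only needs a boundary between two classes, while a generative approach models each class's distribution, which is what lets us draw brand-new samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D toy dataset: two classes drawn from different Gaussians.
x0 = rng.normal(-2.0, 1.0, 500)   # class 0
x1 = rng.normal(+2.0, 1.0, 500)   # class 1

# Discriminative view: all we need is a decision boundary between the classes.
boundary = (x0.mean() + x1.mean()) / 2.0   # midpoint threshold near 0

# Generative view: model the distribution of each class itself
# (here, just fit a Gaussian per class).
mu0, sd0 = x0.mean(), x0.std()
mu1, sd1 = x1.mean(), x1.std()

# Because we modeled the distribution, we can draw new class-0 samples,
# something a decision boundary alone cannot do.
new_samples = rng.normal(mu0, sd0, 10)
```

The boundary answers only "which class?", while the fitted distributions answer "what does the data look like?", which is the generative question.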
Now we turn to how to use neural networks as generative models, so that they model the distribution of any data and we can draw samples from it.
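The core idea of a neural generator can be sketched as follows (a toy illustration, not the lecture's code): the network is a deterministic function that maps simple noise to samples from a more interesting distribution. Here the "network" is just an affine map, a minimal stand-in for a trained model, with `W` and `b` chosen arbitrarily for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# The "generator": a function that transforms latent noise z ~ N(0, I)
# into samples from the target distribution. A trained network would
# implement a much richer transform; an affine map is the simplest case.
W = np.array([[2.0, 0.0],
              [0.0, 0.5]])
b = np.array([5.0, -1.0])

def generate(n):
    z = rng.standard_normal((n, 2))   # draw latent noise
    return z @ W.T + b                # push it through the "network"

samples = generate(10000)
# For this linear generator the output is Gaussian with mean b,
# and per-dimension standard deviations given by the rows of W.
```

Sampling from the model is then just: draw noise, apply the network. Training (e.g., with a GAN objective) is what makes the transform match the real data distribution.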
Today we're going to be talking about what neural networks learn, starting a new sequence of lectures on neural networks for modeling distributions. What we've seen so far is that neural networks are universal approximators: they can model any Boolean, categorical, or real-valued function.
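As a small reminder of the Boolean case, here is a classic hand-wired example (illustrative, not from the lecture): a two-layer network of threshold units computing XOR, a function no single threshold unit can represent. The weights are set by hand rather than learned.

```python
def step(x):
    # Threshold (perceptron-style) activation.
    return 1.0 if x > 0 else 0.0

def xor_net(x1, x2):
    # Hidden layer: one unit computes OR, the other computes NAND.
    h1 = step(x1 + x2 - 0.5)      # OR(x1, x2)
    h2 = step(-x1 - x2 + 1.5)     # NAND(x1, x2)
    # Output unit: AND of the two hidden units gives XOR.
    return step(h1 + h2 - 1.5)
```

With enough hidden units, the same construction extends to any Boolean function, which is one face of the universal-approximation claim.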
GPT-3 was announced nearly two years ago, in May 2020, roughly two years after the original GPT paper was published. OpenAI CEO Sam Altman stated a few months ago that GPT-4 is on the way.
Rafik and Christian from the Rafik & Adoles studio have joined us to discuss PyTorch. Suraj and Justin are both developers at PyTorch.