# 11-785 Spring 2023 Recitation 0B: Fundamentals of NumPy (Part 8/8)

We’re going to talk about some commonly used math operations in NumPy. We’ll be covering broadcasting, element-wise operations, reduction operations, vector and matrix operations, and tensordot.

First, broadcasting. Suppose we have a NumPy array of shape (1, 3) and another NumPy array of shape (4, 1). When we add these two arrays, we get an output array of shape (4, 3). You might be wondering how this happens. When operating on two arrays, NumPy compares their shapes element-wise, starting with the rightmost dimension and working its way left. Two dimensions are compatible when they are equal or when one of them is 1. When one of the compared dimensions is 1, the other dimension is used; in other words, the dimension of size 1 is stretched, or copied, to match the other. In this case, NumPy first compares the rightmost dimensions, 3 and 1, and since one of them is 1, they are compatible: the single column of the (4, 1) array is copied twice over to produce a (4, 3) matrix. NumPy then moves to the next dimension, 1 versus 4, which are also compatible, so the single row of the (1, 3) array is copied three times over to produce another (4, 3) matrix. These two (4, 3) arrays can now be added element-wise to give our final output of shape (4, 3), and the same rule applies to multiplication as well.

Next, element-wise operations. Suppose we have two arrays of shape (2, 3). If we add these arrays, the elements are added element-wise and we get a third array of the same shape.
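The broadcasting steps above can be sketched as follows; the specific values are illustrative assumptions, not from the recitation:

```python
import numpy as np

a = np.array([[1, 2, 3]])               # shape (1, 3)
b = np.array([[10], [20], [30], [40]])  # shape (4, 1)

# NumPy compares shapes right to left: 3 vs. 1 (compatible, stretch b's
# single column), then 1 vs. 4 (compatible, stretch a's single row).
c = a + b
print(c.shape)  # (4, 3)
print(c)
# [[11 12 13]
#  [21 22 23]
#  [31 32 33]
#  [41 42 43]]
```

The same shapes broadcast under `a * b` as well.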
So we can see that the element at index (0, 0) of the first array and the element at (0, 0) of the second array are added to produce the element at (0, 0) of the output array, and the same rule applies to every element. The same element-wise behavior can also be used to multiply the elements of two arrays. Now suppose we multiply an array by -10, so that all of its elements become negative. We can then use the `np.abs` function, which returns the absolute value of each element in the array and makes all the elements positive again. We also have the `np.sqrt` function, which returns the square root of every element in an array.

Now let’s talk about the reduction operations we can perform on a NumPy array. Suppose we have an array of shape (2, 3). We can use the `max` or `min` functions to find the maximum or minimum value in this array, and the `sum` function to find the sum of all its elements. We can also use the `argmin` or `argmax` functions to find the index of the minimum or maximum element in an array, optionally specifying the axis along which to search. And finally, we can use the mean, standard deviation, and norm functions to find the mean, standard deviation, and norm of a NumPy array.

Now let’s talk about the vector and matrix operations we can use on NumPy arrays. We can use the `np.matmul` function to multiply two arrays, and the `@` operator does the same thing. This operation can be used to multiply a matrix by a vector, as well as two matrices. We can also use the `np.dot` function to find the dot product of two arrays, and if the two arrays are both 2-dimensional, the `@` operator gives the same result. Now let’s quickly talk about the tensordot operation.
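Before moving on, the element-wise, reduction, and matrix operations above can be sketched together; the array values here are illustrative assumptions:

```python
import numpy as np

x = np.array([[1., 2., 3.], [4., 5., 6.]])    # shape (2, 3)
y = np.array([[7., 8., 9.], [10., 11., 12.]])

# Element-wise operations: same-shaped arrays combine entry by entry.
s = x + y        # s[i, j] = x[i, j] + y[i, j]
p = x * y        # element-wise product, not matrix multiplication

# np.abs and np.sqrt also apply element-wise.
neg = -10 * x
pos = np.abs(neg)       # all entries positive again
roots = np.sqrt(x)      # square root of every element

# Reductions collapse an array to a scalar, or along a given axis.
print(x.max(), x.min(), x.sum())     # 6.0 1.0 21.0
print(x.argmax(axis=1))              # index of the max along each row
print(x.mean(), x.std(), np.linalg.norm(x))

# Matrix products: np.matmul / the @ operator, and np.dot.
m = np.array([[1., 0.], [0., 1.], [1., 1.]])  # shape (3, 2)
print(x @ m)             # (2, 3) @ (3, 2) -> shape (2, 2)
v = np.array([1., 1., 1.])
print(np.matmul(x, v))   # matrix-vector product -> shape (2,)
print(np.dot(v, v))      # dot product of two 1-D arrays -> 3.0
```

For 2-D inputs, `np.dot(x, m)` and `x @ m` produce the same result.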
This operation is very important, as we will be using it in our homeworks where we will be working with convolutional neural networks. Suppose we have a NumPy array A of shape (3, 4, 5) and another NumPy array B of shape (4, 3, 2). When using the tensordot operation, we need to specify the axes along which we want to perform the sum reductions. For array A, we specify the sum-reduction axes as (1, 0), which have sizes 4 and 3; for array B, we specify the axes (0, 1), which also have sizes 4 and 3. What happens when we perform the tensordot operation is that the axes along which we sum-reduce collapse. We ultimately get a new array of shape (5, 2), as the paired dimensions of sizes 4 and 3 have collapsed: these are the dimensions along which we performed our sum reductions. The tensordot operation can also be written out as a four-level nested for loop, and if we compare the output of tensordot with the output of the loop, we can see that they are identical. So it is highly recommended to use tensordot instead of for loops whenever you can, because the tensordot operation is a lot more efficient and a lot faster as well.
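As a minimal sketch of the tensordot example above (the random seed is an assumption for reproducibility, not from the recitation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 4, 5))
B = rng.random((4, 3, 2))

# Sum-reduce over A's axes (1, 0) (sizes 4 and 3), paired with B's
# axes (0, 1) (also sizes 4 and 3). Those dimensions collapse, leaving
# the remaining axes of A and B: shape (5, 2).
out = np.tensordot(A, B, axes=((1, 0), (0, 1)))
print(out.shape)  # (5, 2)

# The equivalent four-level nested for loop (much slower):
ref = np.zeros((5, 2))
for i in range(5):          # A's remaining axis, size 5
    for j in range(2):      # B's remaining axis, size 2
        for k in range(4):  # A axis 1 paired with B axis 0
            for l in range(3):  # A axis 0 paired with B axis 1
                ref[i, j] += A[l, k, i] * B[k, l, j]

print(np.allclose(out, ref))  # True
```

The loop makes the pairing explicit: each output entry is a sum over the two matched dimensions, which is why they disappear from the result.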