So, the final thing left to do is to use the principal components to actually project the original data and reduce its dimensionality. And this is actually super simple. So, I'm talking about the final step here, number three.

The way we do this is as follows: say that I have 2D data, so I have two input attributes, X1 and X2. And this is my data set: I have here X11, X12, that's my first observation; then I have X21, X22, that's my second observation; then X31, X32, and so on. And because this input data set is two-dimensional, I will eventually end up with two principal components, E1 and E2.

So, what I can do now is just grab the first observation, take its transpose, and multiply it by the first principal component. And what will this give me? This will be X11 times the first element of E1 plus X12 times the second element of E1, and this will be some number, right? And I can do the same for the second observation: just take the transpose, multiply it by the first principal component, and get another number. And I can do the same for the third observation, and so on and so on. And at the end I will have a new data set that has just one attribute, F1. I have thus reduced the dimensions from two to one, right? You can see a minimal sketch of this projection in the code at the end of this section.

And if I have more dimensions in the original data set, if I have three, four, five dimensions, then I will end up with three, four, five principal components. And I can repeat the same procedure not just for the first principal component but also for the second, the third, and end up with a data set that has not just one but two or three attributes. As long as the number of attributes in the new data set is lower than the dimensionality of the original data set, I am doing dimensionality reduction. Let's see this in the code.

Okay. So, we are back in the notebook, and just to remind you: in the first subplot we have the original data set; in the second subplot I have the first principal component, E1; and in the third subplot I have the first and the second principal components, E1 and E2. And now, if I want to project my data onto the new components, what I do is just take the dot product of the original data set with the first and the second principal components. And I am naming my resulting new features F1 and F2.

So, if I want to do dimensionality reduction, I would just take F1 only, and this will essentially be the transformation of my original data set, which has two input attributes, into a new data set that retains most of the variance and has only a single input attribute. Or I can retain both and get a new projection of my data using the principal components and print it.

And I do this because I also want to do a kind of sanity check: use the principal component analysis implementation from scikit-learn, see what it does with my original data set, and check whether the results from our manual computation match what the out-of-the-box PCA implementation in scikit-learn gives. So, I will just call the decomposition.PCA method and request the first two principal components. Then I will transform my original data based on those components and print the results. And you'll see that they are actually an exact match, as in the second code sketch below. So, our implementation works as expected.

So, this was PCA, and now you have a very good understanding of how principal component analysis is carried out, how the components are calculated behind the scenes, and how you can actually use them to reduce the dimensionality of your data.
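Here is a minimal sketch of the manual projection described above. The data values are hypothetical stand-ins for the notebook's data set, and it assumes, as in the earlier steps of the lesson, that the principal components come from the eigenvectors of the covariance matrix of the centered data:

```python
import numpy as np

# Toy 2-D data set (hypothetical values), one observation per row.
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

# Center the data, then take the eigenvectors of the covariance matrix.
X_centered = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X_centered, rowvar=False))

# eigh returns eigenvalues in ascending order, so the last column
# is the first principal component E1 (direction of largest variance).
E1 = eigvecs[:, -1]

# Project every observation onto E1: one dot product per row.
# The result F1 is the new one-attribute data set.
F1 = X_centered @ E1
print(F1)
```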
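And a self-contained sketch of the sanity check against scikit-learn's decomposition.PCA, again on hypothetical toy data. One caveat worth hedging: an eigenvector is only defined up to a factor of -1, so individual columns can come out with flipped signs relative to scikit-learn; comparing absolute values sidesteps that, while the notebook's run happened to match exactly:

```python
import numpy as np
from sklearn import decomposition

# Same toy data set as in the previous sketch (hypothetical values).
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
Xc = X - X.mean(axis=0)

# Manual route: eigenvectors of the covariance matrix, reordered so the
# first column is E1 (largest variance) and the second is E2.
_, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
F_manual = Xc @ eigvecs[:, ::-1]          # columns are F1 and F2

# Library route: request the first two principal components and
# transform the original data (PCA centers the data itself).
pca = decomposition.PCA(n_components=2)
F_sklearn = pca.fit_transform(X)

# Compare magnitudes, since each component may differ only in sign.
print(np.allclose(np.abs(F_manual), np.abs(F_sklearn)))   # True
```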