Right. So, now that we have the covariance matrix, we need to know how to calculate the eigenvalues and the eigenvectors. We start by finding the eigenvalues, and we do this by solving this equation: we take the determinant of the covariance matrix minus Lambda times I, where Lambda denotes the eigenvalues and I is the identity matrix, and set it equal to 0. Solving this produces the Lambdas, the eigenvalues, for us. If you think about this operation here, Lambda times the identity matrix: in our case we have a two-by-two covariance matrix, so we want the identity matrix to have the same dimensions. So we have Lambda times the identity matrix, which produces another matrix, and when we subtract that matrix from Sigma, we get something like this: if s_1,1 is the first element of the covariance matrix, we take the determinant of the matrix with entries s_1,1 minus Lambda, s_1,2, s_2,1, and s_2,2 minus Lambda, and set it equal to 0. Now, for the determinant, we multiply the diagonal entries and subtract the product of the off-diagonal ones. So we get (s_1,1 minus Lambda) times (s_2,2 minus Lambda) minus s_2,1 times s_1,2 equals 0. You immediately spot that this is a quadratic equation, because when you multiply these two terms here you get Lambda squared, and once you solve it you end up with two roots. In our case, these two roots will be the first eigenvalue and the second eigenvalue, right? You can expand this and solve the equation directly, but there is a shortcut formula which I will use in the code, and this formula is that Lambda is the trace of the covariance matrix, plus or minus the square root of the trace of Sigma squared minus 4 times the determinant of Sigma.
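The shortcut formula above can be sketched in a few lines of NumPy. This is just an illustration: the matrix S here is a made-up two-by-two covariance matrix, not the one from the lecture's example.

```python
import numpy as np

# A made-up 2x2 covariance matrix, purely to illustrate the formula;
# it is not the matrix from the lecture example.
S = np.array([[4.0, 1.2],
              [1.2, 0.5]])

tr = np.trace(S)          # s_11 + s_22
det = np.linalg.det(S)    # s_11*s_22 - s_12*s_21

# Roots of the characteristic equation lambda^2 - tr*lambda + det = 0.
lam1 = (tr + np.sqrt(tr**2 - 4 * det)) / 2
lam2 = (tr - np.sqrt(tr**2 - 4 * det)) / 2
print(lam1, lam2)
```

Because a covariance matrix is symmetric, the discriminant under the square root is never negative, so both roots are real.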
Okay, and this all has to be divided by two. If you're interested, you can go on Wikipedia, search for the eigenvalue algorithm, and you'll see all the details. I don't want to spend much time on this now, but the bottom line is that this formula will produce the two values for us: Lambda 1 and Lambda 2. Once we've got the eigenvalues, we move on to finding the eigenvectors. We find the eigenvectors by solving this: the covariance matrix times E equals Lambda times E, where E is a vector. So, essentially what I'm saying here is that I'm looking for a vector that, when multiplied by the covariance matrix, stays in the same direction and only changes in magnitude; that change is this scalar here, Lambda, which is my eigenvalue, right? That's the intuition behind this equation: we want to find the vector that, when multiplied by the covariance matrix, doesn't turn any more, because it has already reached the direction of greatest variance. It only grows. We can solve this using Lambda 1 and then using Lambda 2, and we will get, respectively, the first and the second eigenvector. Then we will see which value is bigger, Lambda 1 or Lambda 2, that is, which one explains most of the variance; that one will be our first principal component, and the other will be our second principal component. So, let's go back to Watson Studio and see this implemented in code. Right, so here we are back in Watson Studio, and instead of rotating a random vector to find the direction of greatest variance, what I will do now is use the formulas I've just shown you to analytically compute the eigenvalues and the eigenvectors. Again, here is the link to Wikipedia, to the eigenvalue algorithm; I really encourage you, if you're interested, to go and read all about it and see where all these transformations come from and how we actually compute the eigenvalues, but it's the same idea.
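For a two-by-two matrix, the equation (Sigma minus Lambda I) E = 0 can be solved by hand from its first row, which gives E = [s_1,2, Lambda minus s_1,1] up to scale. Here is a small sketch of that step, again using a made-up covariance matrix S rather than the lecture's data:

```python
import numpy as np

# Made-up 2x2 covariance matrix for illustration only.
S = np.array([[4.0, 1.2],
              [1.2, 0.5]])

tr, det = np.trace(S), np.linalg.det(S)
lam1 = (tr + np.sqrt(tr**2 - 4 * det)) / 2   # the larger eigenvalue

# First row of (S - lam*I) e = 0 reads (s11 - lam)*e_x + s12*e_y = 0,
# so e = [s12, lam - s11] is a solution (up to scale).
e1 = np.array([S[0, 1], lam1 - S[0, 0]])
e1 = e1 / np.linalg.norm(e1)                 # normalize to unit length

# Check the defining property: multiplying by S only rescales e1 by lam1.
print(np.allclose(S @ e1, lam1 * e1))
```

The final check is exactly the intuition from the lecture: the covariance matrix does not turn this vector, it only stretches it by Lambda.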
So here we have the first eigenvalue, which is the trace of the covariance matrix plus the square root of the trace to the power of 2 minus 4 times the determinant of the covariance matrix, all divided by 2. Then we find the second root of the quadratic equation by just replacing the plus with a minus here. So, the eigenvalues for our example would be these two: 4.2662 and so on, and the second eigenvalue is 0.2883 and so on. Now, for computing the eigenvectors, I exploit something called the Cayley-Hamilton theorem; again, you can go on Wikipedia and read all about it. It has to do with this multiplication here: when Lambda 1 and Lambda 2 are the eigenvalues, the factors annihilate each other's columns, and the columns you're left with are eigenvectors. So what I do here is compute the eigenvectors, and I also normalize them and print them. What we get here is E_1, the first eigenvector, and E_2, the second eigenvector. Then I just stack them together to have a matrix of eigenvectors, just to keep everything neat and tidy, and this is my matrix. As a sanity check, I will also compute the eigenvalues and eigenvectors with NumPy. So, I'm just using this function, and I will print the results so we can compare them to our manually computed values. So, let's do this and see. Here, first, the eigenvalues: 0.2883 and 4.2662, and if we go up, we will see that they are an exact match: 4.2662 and 0.2883. Right, and then the same for the eigenvectors. You have here minus 0.9901, that's this guy here, and minus 0.14, so that's this guy here; and then the other one is 0.140 and 0.990, so these guys here. A perfect match. What we can do now is plot the original data, then plot the first eigenvector, and then the first and the second eigenvectors together, and see what the direction of greatest variance is. So let's do this, and here we see the original data, the first eigenvector, and the second one, which, as we expect, is perpendicular to the first.
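The Cayley-Hamilton trick and the NumPy sanity check described above can be sketched as follows. For a 2x2 matrix, (S - lam1*I)(S - lam2*I) = 0, so any nonzero column of (S - lam2*I) is an eigenvector for lam1, and vice versa. The matrix S is again a made-up example, not the lecture's data, so the printed numbers will differ from the 4.2662 and 0.2883 shown in the notebook.

```python
import numpy as np

# Made-up 2x2 covariance matrix for illustration only.
S = np.array([[4.0, 1.2],
              [1.2, 0.5]])

tr, det = np.trace(S), np.linalg.det(S)
disc = np.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

I = np.eye(2)
# Cayley-Hamilton: (S - lam1*I) @ (S - lam2*I) = 0, so the columns of
# (S - lam2*I) lie in the null space of (S - lam1*I), i.e. they are
# eigenvectors for lam1 (and vice versa for lam2).
v1 = (S - lam2 * I)[:, 0]
v2 = (S - lam1 * I)[:, 0]
e1 = v1 / np.linalg.norm(v1)   # normalized first eigenvector
e2 = v2 / np.linalg.norm(v2)   # normalized second eigenvector

# Sanity check against NumPy's built-in eigendecomposition.
vals, vecs = np.linalg.eig(S)
print(lam1, lam2)
print(vals)
```

Note that np.linalg.eig returns the eigenvalues in no particular order and with an arbitrary sign on each eigenvector, which is why the lecture compares the values by eye rather than element by element.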