Good morning, fellow computational neuroscientists, and a happy Valentine's Day. Because it's Valentine's Day, we are going to talk about representing vectors and matrices in a different basis. So we're going to talk about the mathematics used to go from one representation of a vector or matrix to another. And today, everything we do will be in two dimensions. But not to worry, all of the math and equations we use will generalize to vectors of arbitrarily many dimensions. This is a common strategy when figuring something out in linear algebra: start in two dimensions, where you can draw pictures, and work your way up from there.

So let's draw a picture. Let's start with our standard coordinate frame, or standard basis as they say, and we'll label our axes x1 and x2. And let's throw a vector in here, v, v for vector. There are two dimensions, so our vector has two numbers, v1 and v2. What that means is that our vector v goes over v1 in the x1 direction and over v2 in the x2 direction. So that is a fine and dandy representation of our vector v: we can write down v1 and v2, those are its components.

However, there may come a time in life when it becomes useful to represent our vector v not in our x1, x2 coordinate frame but in a rotated coordinate frame. So let's relabel the axes of our old coordinate frame x1 old and x2 old, and in our new coordinate frame, we'll label the axes x1 new and x2 new. From the perspective of our new coordinate frame, v will be described by different numbers. How can we find the representation of v in our new coordinate frame? Well, this is not too complicated. What we want to do is find out how much v goes over in the x1 new direction and how much v goes over in the x2 new direction. And how can we do that? We're going to project v onto x1 new hat, the unit vector pointing in the direction of x1 new.
And the projection of v onto x1 new hat, the unit vector pointing in the x1 new direction, will give us the amount that v lines up with the x1 new direction, so v1 new. In the same way, the projection of v onto x2 new hat, the unit vector pointing in the x2 new direction, will give us the amount that v points in the x2 new direction, so v2 new.

So how do we actually write out the math? If we were to write v1 new, what does that equal? That just equals the projection of v onto the unit vector pointing in our x1 new direction. And as we learned in earlier weeks, that's just the dot product between v and x1 new hat. It's important that x1 new hat is a unit vector so that v1 new isn't scaled by an unnecessary amount. And likewise, v2 new = v · x2 new hat.

So how can we write this out as a matrix equation? Well, dot products are very easy things to stick into matrix equations. We can write v new, the vector with components v1 new and v2 new, as some matrix X times our v old. And this matrix X is our change of basis matrix. In each row of our change of basis matrix we have a row vector corresponding to one of the new basis vectors. So the top row of our change of basis matrix is the row vector for x1 new hat, and the bottom row is the row vector for x2 new hat. We write a little T to indicate the transpose, meaning that we're taking a column vector to a row vector. So in this case X is a 2 x 2 matrix.

So let's go through an example. We'll draw out our original x1 old as one axis and x2 old as the other, and let's draw the vector: v old will be, how about, 1, 4. So it goes over 1 and up 4. Not perfectly proportioned, but that's okay. How would we represent this vector in a coordinate frame that has been rotated 45 degrees? Well, the first thing we need is to make our change of basis matrix. So we need to find the unit vectors x1 new hat and x2 new hat that point in the directions of our new axes.
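To make the dot-product picture concrete, here is a small NumPy sketch. The 30-degree rotation angle and the vector (2, 3) are made-up illustrations, not from the lecture; the point is that stacking the new unit vectors as rows gives the change of basis matrix, and multiplying by it computes exactly those dot products.

```python
import numpy as np

theta = np.pi / 6  # a hypothetical 30-degree rotation of the axes

# Unit vectors pointing along the new axes.
x1_new_hat = np.array([np.cos(theta), np.sin(theta)])
x2_new_hat = np.array([-np.sin(theta), np.cos(theta)])

# Change of basis matrix X: one new basis vector per row.
X = np.vstack([x1_new_hat, x2_new_hat])

v_old = np.array([2.0, 3.0])  # an arbitrary example vector

# Each component of v_new is a dot product with a new basis vector...
v_new_dots = np.array([x1_new_hat @ v_old, x2_new_hat @ v_old])

# ...which is exactly the matrix-vector product X @ v_old.
v_new = X @ v_old
print(np.allclose(v_new, v_new_dots))  # True
```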
Well, this is just a trigonometry problem, so we can write that x1 new hat is equal to (1 over the square root of 2, 1 over the square root of 2). Because if it's at 45 degrees, it has to go up as much as it goes over, and the square root of 2 comes in because the magnitude of this vector has to be equal to 1. Similarly, x2 new hat is equal to: we go over minus 1 over the square root of 2 and up 1 over the square root of 2.

This allows us to write our change of basis matrix X. How did we do that? We put x1 new hat in the top row, so we have 1 over square root of 2, 1 over square root of 2, and we put x2 new hat in the bottom row, so that's minus 1 over square root of 2, 1 over square root of 2.

So then how do we write v new? From our equation, that was equal to X times v old. And remember, when you multiply a matrix by a column vector, what you're doing is taking the dot product of each row of the matrix with the column vector. So this equals (1 over root 2, 1 over root 2; minus 1 over root 2, 1 over root 2) times (1, 4). Doing this matrix multiplication gives us 1 over square root of 2 plus 4 over square root of 2 in the top, so that's 5 over square root of 2; and minus 1 over square root of 2 plus 4 over square root of 2 in the bottom, so that's 3 over square root of 2. And that equals about 3.5 and 2.1. So in our new coordinate frame we go over 3.5 and up 2.1, and that's all she wrote.

All right, so here's some intuition. If you have a vector v, it's really the same vector, in kind of a deeper way, regardless of what basis you represent it in. I could represent v in the old basis, or the new basis, or maybe even some newer basis still. But regardless of which basis I choose, v itself doesn't actually change. The only thing that changes is the numbers we use to describe v. And so the change of basis formula tells you how to find the numbers that describe v in one basis, given the numbers that describe v in another basis.
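Assuming NumPy is available, the 45-degree worked example can be checked numerically, reproducing the 5 over root 2 and 3 over root 2 result:

```python
import numpy as np

s = 1 / np.sqrt(2)
X = np.array([[ s, s],    # top row: x1 new hat
              [-s, s]])   # bottom row: x2 new hat

v_old = np.array([1.0, 4.0])
v_new = X @ v_old

# Components come out to 5/sqrt(2) and 3/sqrt(2), about 3.5 and 2.1.
print(v_new)  # [3.53553391 2.12132034]
```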
As well as your change of basis matrix, which relates the old basis and the new basis. But v, as kind of an abstract entity, is still basically the same thing regardless of your representation. In kind of the same way that a certain object is the same object regardless of what angle you see it from. So hopefully the idea of representing a vector in a different basis doesn't seem all that crazy.

However, what does it mean to represent a matrix in a new basis? So let's say we had a matrix A old = (a11 old, a12 old; a21 old, a22 old). What does it mean to change the basis of the matrix? Well, unfortunately it's not quite as simple as representing each of the columns of A in the new basis. And even if that were simpler, it wouldn't be very useful.

So what is a matrix? Or what does it do? What a matrix does is take one vector, so a matrix takes the vector v old, and maps it to a new vector. This new vector is A old times v old, and that's all it does. A matrix takes one vector and spits out a new vector. And if it's a square matrix, then it'll spit out the new vector in the same vector space as the old vector. So that's kind of nice. And very often in neuroscience we will be working with square matrices, so that makes life easy.

Okay, so what would it mean to write A new, to represent A in a new basis? Well, we saw that v was kind of this abstract entity that stayed the same in a lot of ways regardless of which basis you were in. So if we change our representation of v old to v new, using our change of basis formula to get the old vector represented in the new basis, we want A new to have the same action on v new as A old had on v old. So we want to choose A new such that it does the same thing to v new as A old did to v old. We should find the representation of the matrix that preserves its action. So A old times v old, that's just a vector, right? So we can represent it in our new basis by multiplying it by X, right?
So X times A old times v old is the new representation of A old v old. What do we want that to be equal to? Well, we want the representation of A old v old in the new basis to be equal to A in the new basis multiplied by v in the new basis: A new times v new = X times A old times v old. All this says is that A new times v new is A old times v old, but represented in the new basis.

But what equation do we have for v new? We know that v new is equal to X times v old. So we can write A new times X v old is equal to X times A old v old, and then if we multiply both sides of this equation on the left by X inverse, we get X inverse times A new times X times v old is equal to A old times v old. And so what this means is that A old = X inverse times A new times X. So if our criterion is that A new should have the same action on v new as A old had on v old, that is, A new times v new should just be A old times v old but written in the new basis, then we get this equation.

So let's move to a new slide and write down what we've learned. We had A old = X inverse times A new times X, and this statement is equivalent to A new = X times A old times X inverse. So this is the solution to our problem. This is the representation of A old in the new basis: this is how you write A new such that it has the exact same action A old had, but in the new basis. So this is how we change the basis of a matrix.

And how did we change the basis of the vector? Well, we had that v new = X times v old, so that's the representation of v old in the new basis. So, to represent a vector in the new basis, you multiply that vector by your change of basis matrix. And to represent a matrix in the new basis, you multiply the matrix on the left by your change of basis matrix, and on the right by the inverse of your change of basis matrix. And just to recall, X was composed such that each row was a unit vector, a vector with length one pointing in the direction of an axis in the new basis. And so we did all of this in two dimensions.
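We can check the matrix change of basis formula numerically. In this sketch, the matrix A old and the vector (1, 4) are arbitrary made-up examples, and X is the 45-degree change of basis matrix from earlier; the check is that A new = X A old X inverse really does have the same action in the new basis that A old has in the old one.

```python
import numpy as np

s = 1 / np.sqrt(2)
X = np.array([[ s, s],
              [-s, s]])               # 45-degree change of basis matrix

A_old = np.array([[2.0, 1.0],
                  [0.0, 3.0]])        # an arbitrary example matrix
v_old = np.array([1.0, 4.0])

# Change of basis for the matrix and the vector.
A_new = X @ A_old @ np.linalg.inv(X)
v_new = X @ v_old

# A_new acting on v_new should equal A_old acting on v_old,
# with that result then expressed in the new basis.
lhs = A_new @ v_new
rhs = X @ (A_old @ v_old)
print(np.allclose(lhs, rhs))  # True
```

One side note: because X here is a rotation, its rows are orthonormal, so its inverse is just its transpose; `np.linalg.inv(X)` works for any invertible change of basis matrix, orthonormal or not.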
But these equations hold no matter if you have three dimensions, four dimensions, or even an infinite number of dimensions. So that is how you change basis.

Just one example of when changing basis might be useful, and we actually talked about this earlier in the course when we mentioned principal components analysis. In principal components analysis, the basic idea is that you have a bunch of vectors that have been drawn from some random distribution. And depending on the distribution they're drawn from, they might be aligned along some other axes that are rotated compared to the coordinate frame you started with. So in PCA, what we do is change the basis of all of the vectors from our random distribution, such that they line up mostly along one or two or a small number of the new axes. And what this does is help decorrelate the components of the vectors. So in our old basis, x1 and x2 were very correlated: when x1 went up, x2 went up, and when x2 went down, x1 went down. But in our new basis, the x1 new and x2 new components of the vectors are uncorrelated, and that makes things a lot easier when you're working with probability distributions.

And there are plenty of other examples in which changing basis really helps you out, such as solving a system of coupled differential equations or dealing with the dynamics around a fixed point. These will come up as the lectures continue, but that's all for right now, so stay tuned.
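The PCA idea can be sketched in a few lines of NumPy. This is a minimal illustration with synthetic data (the correlation strength and sample size are made up): the principal axes are the eigenvectors of the covariance matrix, and stacking them as rows gives a change of basis matrix, exactly as above, in which the components become uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated 2-D data: x2 tends to go up when x1 goes up.
x1 = rng.standard_normal(1000)
x2 = 0.8 * x1 + 0.3 * rng.standard_normal(1000)
data = np.vstack([x1, x2])            # shape (2, n_samples)

# The principal axes are the eigenvectors of the covariance matrix.
cov_old = np.cov(data)
eigvals, eigvecs = np.linalg.eigh(cov_old)

# Change of basis matrix: one principal axis (unit vector) per row.
X = eigvecs.T
data_new = X @ data

# In the new basis the off-diagonal covariance is (numerically) zero:
# the components have been decorrelated.
cov_new = np.cov(data_new)
print(np.round(cov_new, 6))
```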