0:06

So let's cover Davenport's q-method. It's a classic method. You don't see it flown much anymore, but many formulations are built on the answers that come from it, so it's a good one to be aware of. We'll talk about the benefits, the magic math that happens, but also the challenges of implementing this. QUEST is the one that's probably flown the most out there, but there are all kinds of other modifications these days that people have made even to that. So this research continues; people still look at this stuff.

So with the q-method, this comes from the quaternion. Our book, at least, uses betas, and the ordering, like I said, depends on which source you look at: the scalar element might be first or it might be last, so it just depends.

So with that cost function, we needed the norm of this squared. The norm of a vector squared is the same thing as v dotted with v, all right? That gives you the norm of v squared, or in matrix form, v transpose v is equivalent to the dot product. So I'm just taking that residual vector, transposing it with itself, and that gives me the norm squared of the residuals, and I'm summing them up.

Now with this matrix math, if you carry it out, there will be a v in the B frame transposed with v in the B frame. Well, these are all unit vectors, and a unit vector representation dotted with itself, the transpose with itself, always gives you 1; it has to, for a unit vector. Same thing here: if you do this transpose with this, you have BN transpose BN, and because of the orthogonality, BN transpose BN just gives you identity. You end up with v in the N frame transposed with v in the N frame again, and a unit vector norm squared is just 1.

So, 1 plus 1 is 2, minus these cross terms: this one transposed with this, and this one transposed with this. They look like opposites, but the answer is a scalar, and that's a convenient trick in matrix math. If the answer is a scalar, you can always transpose that term, which of course reverses the matrix order, and every element gets transposed again, all right? a times b, transposed, is b transpose times a transpose. That's the identity you use, because now you can group them together and you get minus 2 times that one term; they're actually identical. If you do that, the factor of 2 cancels the one-half out front, and you end up with this.

So this is exactly the cost function; I've just done some matrix math to manipulate it a little bit. But it will turn out this is a pretty convenient form to have.
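Written out, the manipulation just described goes like this (using unit vectors with hats and scalar weights, consistent with the cost function stated earlier in the lecture):

```latex
\begin{align*}
J([BN]) &= \tfrac{1}{2}\sum_{k} w_k \left(\hat{\mathbf v}_k^{B} - [BN]\,\hat{\mathbf v}_k^{N}\right)^{T}\left(\hat{\mathbf v}_k^{B} - [BN]\,\hat{\mathbf v}_k^{N}\right) \\
        &= \tfrac{1}{2}\sum_{k} w_k \Big( \underbrace{\hat{\mathbf v}_k^{B\,T}\hat{\mathbf v}_k^{B}}_{=\,1}
          + \underbrace{\hat{\mathbf v}_k^{N\,T}[BN]^{T}[BN]\,\hat{\mathbf v}_k^{N}}_{=\,1}
          - 2\,\hat{\mathbf v}_k^{B\,T}[BN]\,\hat{\mathbf v}_k^{N} \Big) \\
        &= \sum_{k} w_k \left( 1 - \hat{\mathbf v}_k^{B\,T}[BN]\,\hat{\mathbf v}_k^{N} \right)
\end{align*}
```

The middle line shows where the two 1's and the minus-2 cross term come from, and the one-half cancels the factor of 2 in the last step.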

Â 2:57

But this cost function has two terms. The first term, the weights times 1 summed up, is just the sum of the weights. You pick the weights, so nothing is going to happen with that. The second term is the part you adjust: we're finding the correct BN matrix such that this is minimized. So minimizing J is equivalent to maximizing this g function, where g is that second term, the weights times this other stuff, and because it's minus g, that's what's in there. So if you look back here, these weights times this is g, and so this is the sum of the weights minus g. If you want to make J as small as possible, since the sum of the weights is fixed, you have to make g as big as possible. So we've replaced a minimization of J with an equivalent maximization of g. Okay, so keep that in mind: we want to make g as big as possible. That will be the best fit of the attitude measure.
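In symbols, the split just described is:

```latex
g([BN]) = \sum_{k} w_k\, \hat{\mathbf v}_k^{B\,T}[BN]\,\hat{\mathbf v}_k^{N},
\qquad
J = \sum_{k} w_k - g([BN])
```

The first sum is fixed once the weights are chosen, so minimizing J is exactly maximizing g.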

Now how do we do that? Well, Davenport found a way to do it with quaternions. So we're going to take this BN matrix and write it in terms of quaternions. You've seen it element by element; this is a nice, compact matrix way to write that thing. So this is the BN matrix in terms of the quaternion, where epsilon is the vectorial part and beta nought is the scalar part, right?
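One common compact form of that matrix, assuming the scalar-first convention $\beta = (\beta_0, \boldsymbol\varepsilon)$ used in the lecture, is:

```latex
[BN](\beta) = \left(\beta_0^{2} - \boldsymbol{\varepsilon}^{T}\boldsymbol{\varepsilon}\right)[I_{3\times 3}]
            + 2\,\boldsymbol{\varepsilon}\,\boldsymbol{\varepsilon}^{T}
            - 2\beta_0\,[\tilde{\boldsymbol{\varepsilon}}]
```

where $[\tilde{\boldsymbol\varepsilon}]$ is the skew-symmetric cross-product matrix of the vectorial part.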

Now this is where the magic happens. We're not doing the proof in class, but Davenport proved that this quantity, written out in terms of quaternions, this g function, can be rewritten into this beautiful, elegant quadratic form.

Â 4:38

Quadratic functions are amazing for optimization; they make life much easier. There are whole fields on convex optimization. So we do this, and there's a K matrix, which is a 4 by 4 because this beta set is a 4 by 1, and the answer still has to be a scalar. We want to find the beta set, the quaternion set, that pre- and post-multiplied on the K matrix makes this g function as big as possible. Right, this is now a maximization of g to minimize the cost function J.

Now, how is this K defined? The first thing you do is take the observations, the v hats in the B frame, and do an outer vector product with the same observations that you know. You know your environment; you know where they're supposed to be pointing in the N frame, right? That's all the stuff that's given; this is the part that we measure. And you multiply by the weights, so the weights are embedded inside the B matrix. This vector is a 3 by 1, this one transposed is a 1 by 3, and the answer gives you a 3 by 3. So with the vector outer product, every term you're summing is a 3 by 3, and the answer B is a 3 by 3.

Â 5:47

Now we use that as a stepping stone. You can see the K matrix is decomposed like this. Sigma is a scalar, Z here is a 3 by 1, S is a 3 by 3, and of course this identity operator is also a 3 by 3, so there's some partitioning of how you assemble this. The S matrix is simply B plus B transpose, which makes it a symmetric matrix. Sigma, which appears here and here, is simply the trace of the B matrix, so that's just the sum of the three diagonal terms of this B matrix. And the Z is defined as differences of the off-diagonal terms. That's kind of how the math works out.
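As a numerical sketch of that assembly (a minimal version with NumPy; the function name `davenport_K` and the array layout are my own choices, and the scalar-first quaternion ordering from the lecture is assumed):

```python
import numpy as np

def davenport_K(vB, vN, w):
    """Assemble Davenport's 4x4 K matrix from weighted vector observations.

    vB : (n, 3) unit measurement vectors in the body frame
    vN : (n, 3) the same unit vectors, known in the inertial frame
    w  : (n,)  positive weights
    """
    # B is the weighted sum of outer products: B = sum_k w_k * vB_k vN_k^T
    B = sum(wk * np.outer(b, n) for wk, b, n in zip(w, vB, vN))
    S = B + B.T                      # symmetric 3x3 block
    sigma = np.trace(B)              # scalar: sum of B's diagonal terms
    # z collects the differences of B's off-diagonal terms
    z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    K = np.empty((4, 4))
    K[0, 0] = sigma                  # scalar-part corner
    K[0, 1:] = z                     # 1x3 partition
    K[1:, 0] = z                     # 3x1 partition
    K[1:, 1:] = S - sigma * np.eye(3)
    return K
```

As a sanity check: for the identity attitude (vB equal to vN), K times the scalar-first identity quaternion (1, 0, 0, 0) gives the sum of the weights times that quaternion, which is the largest value g can reach.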

So Davenport proved this. I even asked Landis, who worked at Goddard, where he was a colleague of Davenport's at some point: well, how did you ever come up with this? And even Landis said, I don't know. [LAUGH] He quietly went away, worked on something, showed up, and said, I think I've got something interesting. Holy shit, that's cool. Because now, if you can do this, let's look at the magic math that happens. How do we get here? If it seems mystical, it is. It's even worse than the Euler parameter properties. There's all this math to prove it, but once you find it, it's like, wow, this is very powerful.

So fundamentally we're taking g and we want to maximize it. We have to find the attitude measure: what is the right quaternion set that makes g as big as possible? So let's look at this further.

Â 7:17

Let's look. Say I have a function y(x), and you look at this function y(x) and it does stuff. If you want to find the extreme points of this function y, what's the classic operation you have to do? How do we find these points? Differentiate, right, with respect to x. So you take the partial of y with respect to x. And that derivative has to be what for an extreme point? Zero, right; that finds all the flat spots.

Â 7:53

That could happen. Now, you don't know if they're maximums or minimums; you have to look at local curvature or different numerical techniques to find that kind of stuff. But to find the extremums, it's just taking a cost function, taking the derivatives with respect to the things you're estimating, and then looking for all the places where those could be zero, right? Now, that's fine if it's an unconstrained cost function. Here, it's like how the MRPs are limited: I cannot just have any MRP I want. Or here, if you want to maximize this, the answer would be to make beta infinity, infinity, infinity, infinity.

Â 8:30

I challenge anybody to come up with a bigger number than that, infinity squared summed up. Why is that not the correct answer? Well, that's cheating, because we know the betas live on this four-dimensional sphere, right? And it's a unit sphere. So this is not just an unconstrained optimization problem; this is in fact a constrained optimization. Now, I'm just going to show you how to do this; not everyone in this class knows how to do this stuff. Some of you may have seen Lagrange multipliers, some of you may not have. But essentially, this is what's called an augmented cost function. You take the original cost function, for which we want to find an extremum, and you add, or you can add or subtract, really.

Â 9:11

The constraint is written in a form where, if it's satisfied, that constraint function is zero. That's always how these things are formulated. That's why you see basically the sum of the beta squared terms minus 1 has to be equal to zero, and then times lambda. Lambda is your Lagrange multiplier that you have to find.
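In symbols, the augmented function being extremized is (writing the unit-length constraint exactly as stated, so it vanishes when satisfied):

```latex
g'(\hat{\boldsymbol\beta}) = \hat{\boldsymbol\beta}^{T}[K]\,\hat{\boldsymbol\beta}
  - \lambda\left(\hat{\boldsymbol\beta}^{T}\hat{\boldsymbol\beta} - 1\right)
```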

So if you've seen this before, hopefully it makes sense. If you haven't seen it, it's not a big deal; it's not important for the class otherwise. But this is how you can do it. If you find a set of betas subject to this constraint being satisfied, then this extra term doesn't actually matter in the end, right? But if you pick infinity, infinity, infinity, it's going to greatly impact your answer, and mathematically we can't find an extremum. So good, we want to find the extremum, but it's a constrained extremum problem.

So now we take the derivative of this. For this quadratic part with the matrix, I can find the derivative. And yeah, this is kind of like x squared: the derivative with respect to x ends up being 2 times x. Instead of just a scalar coefficient, I have a matrix here, and this is kind of like your x squared, so the derivative just gives you 2 times the matrix times x. If you want, you can carry it out as a simple math problem in component form and prove this to yourself if you haven't seen it. Great; same thing here: beta transpose beta is really just x squared, 1 times x squared, so that will just give you 2 times x, and I have the lambda scalar in front of it. The derivative of the constant term, the lambda times 1, vanishes in this case.
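Carrying out those derivatives (using the fact that K is symmetric, so the quadratic term differentiates to 2Kβ):

```latex
\frac{\partial g'}{\partial \hat{\boldsymbol\beta}}
  = 2[K]\hat{\boldsymbol\beta} - 2\lambda\hat{\boldsymbol\beta} = 0
\quad\Longrightarrow\quad
[K]\hat{\boldsymbol\beta} = \lambda\hat{\boldsymbol\beta}
```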

Â 10:44

So we've taken derivatives with respect to beta; the 1 and the lambda don't depend on beta. This is what you end up with. We set this partial equal to 0, like y prime equal to 0; that's what gives us the flat spots. The 2s we can cancel out, and you end up with this relationship: K times the estimated beta has to be equal to lambda times the estimated beta.

Â 11:13

Yeah, this is an eigenvector/eigenvalue problem. But instead of on a 3 by 3, as we did earlier when we related it to the e hats, where the principal rotation axis is an eigenvector of the DCM with a plus 1 eigenvalue, this is now a 4 by 4 matrix. So we can show that maximizing this, not maximizing, I should say, the extremums, these could be maximums or minimums, the answers are going to be this. Now, a 4 by 4 matrix like this will have four possible answers: four possible lambda values and four possible eigenvectors. Remember, eigenvectors are not unique: if one-zero-zero is an eigenvector, then so is two times one-zero-zero, right? We're always going to pick the ones that are normalized, because we know those are the answers we're looking for.

Â 12:23

But I don't feel lucky, so I want some math to prove which one to use. If we go back to the original cost function, it was basically beta transpose times K times beta that we had, right? That was the quadratic measure. And I know this is the condition for an extremum, so the K times beta I can plug in as lambda times beta.

Â 12:43

The Lagrange multiplier is an eigenvalue of the K matrix. So we plug that in: beta transpose times lambda times beta. The lambda is a scalar, so you can move it anywhere in that matrix math; I'm moving it up front. Now I have beta transpose beta, and I know that has to be one because it's a unit quaternion set. So if I apply this condition to my original cost function, it turns out this cost function will always just be lambda.
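At an extremum, then, the value of g collapses to the eigenvalue itself:

```latex
g = \hat{\boldsymbol\beta}^{T}[K]\hat{\boldsymbol\beta}
  = \hat{\boldsymbol\beta}^{T}\left(\lambda\hat{\boldsymbol\beta}\right)
  = \lambda\,\hat{\boldsymbol\beta}^{T}\hat{\boldsymbol\beta}
  = \lambda
```

so making g as big as possible means picking the largest eigenvalue of K.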

It's a 4 by 4 matrix, so I will have four eigenvalues. Which lambda do I pick now of those four?

>> The biggest.

>> The biggest, right? Because the goal was to maximize g and minimize my cost function J, right? So it's a maximization. So, out of all the possible answers, the answer is the eigenvector corresponding to the maximum eigenvalue of that K matrix. And that's it. Yes?

>> Why don't we just differentiate it again to find the [INAUDIBLE] multiplication?

Â 13:53

>> Because you're basically done at this point. Numerically, I mean, I could do that, but then I come up with curvatures; you need to figure out what's a minimum, what's a maximum, and so forth, if the second-order information is precise enough. But in this case, you could even do the higher-order test and then apply it for every one of the four possible answers. I have to find the four possible answers first, and for that you must solve the eigenvalue/eigenvector problem anyway. And just knowing which eigenvalue is the largest tells me that's it, because I want to make g as big as possible. So you could do that, but in this case you would just slow down your algorithm. Once you have this insight...

Â 14:38

So these are basically the steps, and we'll go through the mathematics once. You set up your observations; in the homework, you do the same. But then you do those outer products to get the B matrix, and from the B matrix, the trace and the transpose. You assemble the K matrix, the 4 by 4. Once you have it, you find the eigenvalues and the eigenvectors. Don't do this by hand, please; that'd be a complete waste at this point. Use a computer, use MATLAB, Mathematica, something, and ask it to solve the eigenvector/eigenvalue problem. It'll be very happy to do this for you, and it'll spit out four answers, all right? Then you find the largest eigenvalue and the associated eigenvector. Remember, eigenvalues and eigenvectors come in pairs. If MATLAB spits out minus 1, plus 1, 0.5, minus 0.5, it was the second element that was the largest. Pick the second vector MATLAB gave you; that's the one.
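Those steps can be sketched end to end in code. This is a hedged sketch, not the flight implementation: the function name `q_method` and the test rotation are my own, and the scalar-first quaternion ordering plus the B, sigma, z, S definitions from the lecture are assumed.

```python
import numpy as np

def q_method(vB, vN, w):
    """Davenport q-method: estimate the attitude quaternion (scalar first)
    from weighted unit-vector observations vB (body) and vN (inertial)."""
    # Step 1: weighted outer products summed up give the 3x3 B matrix
    B = sum(wk * np.outer(b, n) for wk, b, n in zip(w, vB, vN))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    # Step 2: assemble the symmetric 4x4 K matrix
    K = np.empty((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = z
    K[1:, 0] = z
    K[1:, 1:] = S - sigma * np.eye(3)
    # Step 3: eigh sorts eigenvalues ascending, so the last column of the
    # eigenvector matrix belongs to the largest eigenvalue
    _, vecs = np.linalg.eigh(K)
    beta = vecs[:, -1]
    # Step 4: resolve the +/- sign ambiguity by choosing the positive scalar part
    if beta[0] < 0:
        beta = -beta
    return beta
```

For example, with two observations rotated 90 degrees about the third axis, this recovers the quaternion (cos 45°, 0, 0, sin 45°), and the sign flip at the end is exactly the "pick the positive scalar" choice discussed below.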

Â 16:26

>> The positive-

>> Yeah, the positive scalar, right? So MATLAB may give you one with a positive first term, or not; it has no idea. It has some internal algorithm that essentially flips a coin and says, of the two possible vectors, this is the one I'm giving you. Right, it's still you as the user who gets to decide, this is the one I want to use. They're both perfectly valid; it's just that one might be giving you 359 degrees and the other one tells you it's minus 1 degree, all right? So you do get two possible answers, as you would expect with the unit norm constraint.

Â