0:46

>> Very specific.

>> Chuck.

>> Determinant of plus one.

>> Determinant, right. Every orthogonal matrix has

a determinant of plus or minus one; we proved that quickly.

But a proper orthogonal matrix has a determinant of plus one, which means it follows the right-hand rule.

Mirroring operations in mathematics also give you orthogonal matrices,

but they have a determinant of minus one, and so

we need one with a plus-one determinant.

That's what we need here.

I is simply the identity, and this is for n-by-n matrices,

not just three-by-three.

And then Q is the skew-symmetric matrix, which we're very familiar with;

skew-symmetric just means Q transposed is minus Q, right?

That's its defining mathematical property.

Then we define this equation.

1:42

the DCM, or an orthogonal matrix.

For three dimensional rotations, this is our attitude description.

That is the DCM matrix.

But this actually also works for n dimensional spaces.
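A minimal numerical sketch of that forward mapping (NumPy assumed; the three q values below are illustrative, not taken from the slide): build a skew-symmetric tilde matrix, apply the Cayley transform, and check that the result is proper orthogonal.

```python
import numpy as np

def tilde(x):
    """Skew-symmetric (tilde) matrix of a 3-vector: tilde(x).T == -tilde(x)."""
    return np.array([[0.0,  -x[2],  x[1]],
                     [x[2],  0.0,  -x[0]],
                     [-x[1], x[0],  0.0]])

def cayley(Q):
    """Cayley transform (I - Q)(I + Q)^-1 for an n-by-n matrix Q."""
    I = np.eye(Q.shape[0])
    return (I - Q) @ np.linalg.inv(I + Q)

Q = tilde([0.1, -0.2, 0.3])   # illustrative skew-symmetric matrix
C = cayley(Q)

# C is proper orthogonal: C^T C = I and det C = +1
print(np.allclose(C.T @ C, np.eye(3)))      # True
print(np.isclose(np.linalg.det(C), 1.0))    # True
```

The same function works unchanged for any n-by-n skew-symmetric input, which is the n-dimensional generalization mentioned above.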

Now where does this appear?

Structures is a big field where we often do eigendecompositions.

You end up with these symmetric positive definite matrices and

you can decompose them.

There's this huge n-by-n matrix you have to invert.

There are whole theories on how to actually do that better, extract modes, and

the eigenvectors of a symmetric positive definite matrix form an orthogonal matrix.

You might have a 100-by-100 orthogonal matrix.

100 times 100, that's 100 squared elements you have to track and invert.

So with attitude, instead of a three-by-three, which is nine numbers,

we know we can get away with three coordinates.

That's the minimal description.

It turns out for an n-by-n matrix it's always n times n minus one over two.

So for three-by-three it's three times two, just six over two, which is three.

That means three degrees of freedom for attitude. If it's 100-by-100,

it's 100 times 99 divided by two, which is way less than 100 squared.
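That counting rule is easy to write down directly; a tiny plain-Python sketch:

```python
def orthogonal_dof(n):
    """Independent parameters of an n-by-n proper orthogonal matrix:
    the entries above the diagonal of its skew-symmetric generator,
    n*(n-1)/2 of them."""
    return n * (n - 1) // 2

print(orthogonal_dof(3))    # 3: the familiar attitude degrees of freedom
print(orthogonal_dof(100))  # 4950, far fewer than 100**2 = 10000 entries
```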

So that's why people are interested in these things.

And in different ways to decompose it. So we can use attitude descriptions

anytime we have to describe the time behavior of an orthogonal matrix.

So we're just going to spend a few minutes on this to kind of highlight that

the concepts you're learning actually expand to much higher dimensional

spaces as well.

And there's ongoing cool research in this area.

Now this equation has some interesting properties.

The skew-symmetry had to be defined the way we did earlier, with that x-tilde matrix.

You know, there were zeros on the diagonal, with minus x three, x two in the first row, and

so forth.

Then this works.

But look at the other side.

I have identity minus Q times identity plus Q inverted, or we switch the order.

3:34

With matrix math, A times B, is that the same thing as B times A?

>> Not generally, no.

>> No.

That's a big no-no, right?

You can't just flip the order of matrix multiplication.

You'd break all kinds of rules right there.

Here it turns out this Cayley transform doesn't care.

So this particular version allows you to switch.

I can compute this and this, and one times the other agrees with the other times the first.

It's just kind of cool.

So this is a math thing that you can do, and

you can easily compute this in MATLAB: plug in numbers and you will see.

Wow, I do get back the same matrix either way, and see where that goes.
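Sketching exactly that experiment (NumPy assumed; arbitrary illustrative numbers):

```python
import numpy as np

# Illustrative skew-symmetric matrix (arbitrary numbers, not from the lecture).
Q = np.array([[ 0.0, -0.3,  0.2],
              [ 0.3,  0.0, -0.1],
              [-0.2,  0.1,  0.0]])
I = np.eye(3)

C1 = (I - Q) @ np.linalg.inv(I + Q)   # (I - Q)(I + Q)^-1
C2 = np.linalg.inv(I + Q) @ (I - Q)   # (I + Q)^-1 (I - Q)

# For the Cayley transform the two orders agree, even though
# matrix products do not commute in general.
print(np.allclose(C1, C2))   # True
```

The underlying reason: (I - Q) and (I + Q) are both polynomials in Q, so they commute with each other and with each other's inverses.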

Now, the inverse mapping: if you have an orthogonal matrix

and you're trying to decompose it back into a skew-symmetric matrix,

this is the math you have to do.

I take identity minus that matrix, identity plus the matrix, and

invert that one.

The order, again doesn't matter.

4:56

>> This is not an eye test.

>> [LAUGH] No, there's nothing else different, right?

So this is kind of curious.

How often does that happen?

Not only can we interchange the matrix order in this particular mapping; in fact,

you can write one subroutine to do the forward mapping

and the inverse mapping. You just have to give it an n-by-n matrix.

And this matrix needs to either be an orthogonal matrix in which case the output

will be a skew-symmetric matrix.

Or you give it a skew-symmetric matrix do the same math and

out comes an orthogonal matrix.

How cool is that?

You don't see that very often.
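A sketch of that single subroutine (NumPy assumed): the same function is its own inverse, so it decomposes in both directions.

```python
import numpy as np

def cayley(M):
    """One routine for both directions: skew-symmetric in -> orthogonal out,
    orthogonal in -> skew-symmetric out. Computes (I - M)(I + M)^-1;
    the order of the two factors is irrelevant for this mapping."""
    I = np.eye(M.shape[0])
    return (I - M) @ np.linalg.inv(I + M)

Q = np.array([[ 0.0, -0.3,  0.2],
              [ 0.3,  0.0, -0.1],
              [-0.2,  0.1,  0.0]])    # skew-symmetric, illustrative numbers
C = cayley(Q)                         # forward: a proper orthogonal matrix
Q_back = cayley(C)                    # same routine undoes the mapping

print(np.allclose(C.T @ C, np.eye(3)))  # True: C is orthogonal
print(np.allclose(Q_back, Q))           # True: the original Q comes back
```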

5:29

We'll see the Cayley transform again after we do attitude estimation; that'll be one

application where this form is used to decompose things.

We came up with an estimation method here a few years ago as well.

So I'll show you that, but that's it: forwards and

backwards it's the same math. It's a really cool property.

Only our profession gets so excited about this stuff, but it is kind of neat.

Here's a simple example now: for 3D space you plug it in, and

this is where this definition of skew-symmetry is important.

These q one, two, and three are the CRPs.

So this is another way to get to the CRPs besides quaternions or

the stereographic mapping from them geometrically.

You can also say: from this Cayley math, I put in the DCM, I do this math, and

out comes this skew-symmetric matrix whose components, with

the right signs, are my classical Rodrigues parameters.

But this also works more generally: it allows us to generalize the idea of classical Rodrigues

parameters to n-dimensional manifolds, not just three-dimensional spaces.

6:30

And so these are just some numbers; you can plug them in, do something similar

on a computer coming up, and that will give you this.

If you took these numbers and

plugged them into our earlier formulas on how to go from CRP to DCM.

You would get back this one exactly, or

you can put it into the Cayley transform function again,

and you get back the exact same matrix, the exact same values.

So it's kind of an elegant mapping that you have.
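That round trip can be sketched numerically (NumPy assumed; the CRP values are illustrative, and the CRP-to-DCM formula below is the standard closed form from earlier in the course):

```python
import numpy as np

def cayley(M):
    """(I - M)(I + M)^-1: the same routine maps a DCM to its tilde matrix
    and a tilde matrix back to the DCM."""
    I = np.eye(M.shape[0])
    return (I - M) @ np.linalg.inv(I + M)

def crp_to_dcm(q):
    """Classical Rodrigues parameters -> DCM, standard closed form."""
    q = np.asarray(q, dtype=float)
    qt = np.array([[0.0,  -q[2],  q[1]],
                   [q[2],  0.0,  -q[0]],
                   [-q[1], q[0],  0.0]])
    s = q @ q
    return ((1.0 - s) * np.eye(3) + 2.0 * np.outer(q, q) - 2.0 * qt) / (1.0 + s)

q = np.array([0.1, 0.2, 0.3])      # illustrative CRP set
C = crp_to_dcm(q)                  # DCM from the earlier CRP formula
Qt = cayley(C)                     # Cayley-decompose the DCM
q_back = np.array([Qt[2, 1], Qt[0, 2], Qt[1, 0]])  # read CRPs off the tilde matrix

print(np.allclose(q_back, q))      # True: the same CRPs come back
```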

Higher dimensional: I'm just showing you one. Here's an orthogonal matrix,

and you decompose it, and you can see instead of tracking 16 numbers now in your

code you could have it decomposed into six numbers.

Somewhere around a factor of two better as far as compactness, right?

And just by knowing these six elements you can represent these 16 elements;

there's a mapping to and from them.

We still need differential equations on how to integrate those.

So what's that differential equation?

And this one actually, if you remember,

we proved this differential equation had to hold for general orthogonal matrices.

This wasn't just good for three-dimensional orthogonal matrices.

You could prove this formula works regardless of the dimension.

And you can use this, I'm not going to do this in the class.

But you can use it to actually derive Q dot, right?

That's like your CRP rates but in a matrix form, and

how this relates to the omegas; or if you have CRP rates you can go the other way.
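For reference, the closed-form result of that derivation in the three-dimensional case is the standard CRP kinematic differential equation (quoted from the standard CRP literature, not from the slide; ω is the body angular velocity):

```latex
\dot{\mathbf{q}} = \frac{1}{2}\left( I_{3\times 3} + [\tilde{\mathbf{q}}] + \mathbf{q}\,\mathbf{q}^{T} \right)\boldsymbol{\omega}
```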

So people have used these kinds of things to integrate these differential

equations, to come up with the evolution of this

eigenvector matrix, versus dealing with the eigenvector matrix directly.

8:04

So different things that can be done there.

So I'm not going to spend much time on this; this is just

where else you could go,

what else is happening in this kind of research, and how we describe it.

But this is a physical example.

If you do mechanics, you will see a system mass matrix of order n times

the accelerations equal to some forcing function.

This can be decomposed, as these mass matrices are symmetric positive definite.

The eigenvector matrices are orthogonal

and satisfy these properties, so you can replace these V and V transpose with

a subset of coordinates, and that's basically how this is to be applied.

8:52

Okay, that's classical Rodrigues parameters.

What I want you to remember is where they're singular: they move the singularity as

far as you can go without wrapping back onto yourself.

So 180 degrees, which is much better than Euler angles, but they're still singular.

There are some nice properties mapping back and forth, and they linearize nicely;

those are all cool things, but that's kind of the concept.