0:40

>> I'm going to say no.

>> [LAUGH] Yeah, the way I'm asking the question, that's a good 50-50.

Right, but if you have the state space form,

we're going to get to this quickly here.

It's typically written this way, right?

They always write these systems as first-order equations,

I'm going to show that in a moment.

But the states that go into the system, a rigid body, Casey,

how many degrees of freedom is it now?

>> Three or six.

>> Right, if you do translation and rotation.

If you only do attitude, it's three.

So the number of degrees of freedom,

that's the number of states required to perfectly describe the dynamical system.

So the quick way to remember that is just if you know the degrees of freedom on

the system, that's how many states you must have as a minimum representation.

Sometimes we use more, and we've seen that.

I could use quaternions, and all of a sudden it's four.

That didn't make it a four-dimensional problem.

I could use DCMs; that didn't make it a nine-dimensional problem because I have all

these constraints I have to satisfy, right?

So this formulation is going to be, well, for the most part, unconstrained.

But that's just the number of states that we're throwing in.

That's what we've chosen to do.
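A minimal sketch of this parameter counting (the example quaternion is an assumed illustrative value, not from the lecture): the quaternion and DCM descriptions use extra parameters, but their constraints bring the independent count back down to three attitude degrees of freedom.

```python
import numpy as np

# A quaternion attitude description carries 4 parameters...
beta = np.array([0.5, 0.5, 0.5, 0.5])   # an example unit quaternion (assumed)
print(np.isclose(np.linalg.norm(beta), 1.0))  # True: the unit-norm constraint

# ...so 4 parameters minus 1 constraint leaves 3 attitude degrees of freedom.
print(beta.size - 1)   # 3
# A DCM carries 9 parameters with 6 orthogonality constraints: also 3.
print(9 - 6)           # 3
```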

1:44

And you can have up to N.

Typically, N has to be at least the size of

the number of degrees of freedom of your system.

Then we have these differential equations where autonomous means

it doesn't explicitly depend on time.

Yes, the state x varies.

Take my spring-mass system, x goes up and down, up and down.

That's fine, but there's no driving force that says

u is equal to 3 times t just because, you know?

So the time isn't explicitly in there.

If it is explicitly in there,

it's a non-autonomous system and then the proofs get a lot harder, actually.

Then you have to talk about uniform stability and is it stable for today,

great, but what about tomorrow?

And this is a classic problem in orbits because you want to intercept

an asteroid that's about to hit the Earth.

And yes, today you could be stably getting there, but

tomorrow it's not enough fuel, it's not going to work.

Now your position relative to that asteroid impacts your stability argument

and that makes it a non-autonomous system.

So we're doing primarily time-independent systems, kind of what we have.

Our masses are not deploying slowly with time, we have a rigid body,

all this kind of a thing.

So we can rewrite this.

This is just our natural dynamics, always x dot = f(x).

But that's a first order form, so let's look at that.

So if we have theta double dot + omega_n^2 sin(theta) = 0,

that's my natural dynamics.

How would I write this in first order form?

4:19

Anyway, no excuse, I was wrong.

This would be basically what we just did.

It would be theta and theta dot.

Those are the two things.

If you have N degrees of freedom, we have three attitudes,

three positions, it's a second order differential equation.

I also have to keep track of their velocities.

So the N here would be 2 times the number of degrees

of freedom, yes.

That would make more sense, good.

So we've got that and that's basically so if we go here,

if we just wrap this up and you save this, then you're saying,

okay, x dot is going to be theta dot and theta double dot.

Let's call these x1 and x2.

Then x1 dot is going to be nothing but x2.

And x2 dot is going to be -omega_n^2 sin(x1), right?

So you'd just introduce these forms.
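The substitution just described can be sketched as follows (omega_n^2 = 1 is an assumed value): the second-order pendulum equation becomes a first-order system x dot = f(x) by stacking the angle and its rate as states.

```python
import numpy as np

# First-order form of the pendulum theta_ddot + wn2 * sin(theta) = 0,
# with x1 = theta and x2 = theta_dot (wn2 = 1 is an assumed value).
def f(x, wn2=1.0):
    return np.array([x[1],                   # x1_dot = x2
                     -wn2 * np.sin(x[0])])   # x2_dot = -wn2 * sin(x1)

print(f(np.array([np.pi / 2, 2.0])))  # [ 2. -1.]
```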

So you could always take a set of second, third, fourth order differential equations

and introduce, as you're talking about, these other derivatives.

That's what we're doing, right?

So, this can always be done, so this form here is very general.

And we have our first order differential equations.

Now we want to talk about control.

So we don't just have x double dot plus something x equal to 0,

but it's equal to some control effort.

There's some servo, some thruster, something you're controlling, right?

So you have an authority over this.

How much thrust is produced?

How much torque are you creating with wheels?

And this has some function, so depending on the states that happen here,

there's some function that could be going in here.

This function is written as u = g(x), so it could be non-linear.

But you can also, as you will see in several problems we do,

we have some elegant controls where we could just throw in -k times sigma,

sigma being my attitude error.

And it's still going to be stable, but

we can also throw in non-linear feedback controls.

We'll see some of them. So very general.

It's just that our control formulation is non-linear, which encompasses the linear case.

And then when you apply this into the dynamics,

you get what's called the Closed-Loop System.

So that's when you had the natural dynamics, which were unstable,

you apply some control, hey, I'm going to take this attitude angle and

feedbacks on it, and that's going to stabilize it.

And then once you apply that control, that's the Closed-Loop System.

So you just plug in that u into it, and

that's the stability that we're going to be looking at.

Not the original system, but

we really care about the new system that we've created this way.
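A sketch of that idea for the pendulum example (the feedback u = -k1*theta - k2*theta_dot and every numerical value here are illustrative assumptions, not from the lecture): plug the control into the dynamics, and the result is the closed-loop system whose stability we actually study.

```python
import numpy as np

# Closed-loop pendulum: theta_ddot = -wn2*sin(theta) + u, with the
# linear feedback u = -k1*theta - k2*theta_dot (all gains assumed).
def closed_loop(x, wn2=1.0, k1=2.0, k2=1.0):
    theta, theta_dot = x
    u = -k1 * theta - k2 * theta_dot              # control effort
    return np.array([theta_dot, -wn2 * np.sin(theta) + u])

# Crude Euler integration: the controlled state decays toward zero.
x = np.array([1.0, 0.0])
for _ in range(2000):                             # 20 s at dt = 0.01
    x = x + 0.01 * closed_loop(x)
print(np.linalg.norm(x) < 1e-2)  # True -> the closed loop is stabilized
```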

Equilibrium states, we discussed this.

We've seen this with the rigid body spinners,

we've seen it with dual spinners.

It's when, if you always have this form, x dot = f(x), for

what set of states x is this derivative across all the states going to go to zero?

That means in the example we had earlier with theta double dot + omega_n^2 sin(theta),

that vector was theta and theta dot.

So I have to figure out for what set of states is that all going to vanish?

And typically, you also need theta dot to be zero.

You're really just looking for the states in the end.

And when you find that, that's your equilibrium point.
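For the pendulum example, that search can be sketched directly (omega_n^2 = 1 is an assumed value): set theta_dot = 0 and require sin(theta) = 0, which puts the equilibria at theta = n*pi.

```python
import numpy as np

# Pendulum dynamics theta_ddot + sin(theta) = 0 in first-order form
# (natural frequency squared taken as 1, an assumed value).
def f(x):
    return np.array([x[1], -np.sin(x[0])])

# Equilibria need every state derivative to vanish:
# theta_dot = 0 and sin(theta) = 0, i.e. theta = n*pi.
for theta in [0.0, np.pi, 2 * np.pi]:
    x_eq = np.array([theta, 0.0])
    print(np.allclose(f(x_eq), 0.0))  # True for each
```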

7:28

This does not imply stability.

This just means if I put a spin perfectly around that axis, I can't do it.

It's too hard.

It always flips. But if you did a perfect spin,

that would be an equilibrium.

It's just not a stable equilibrium, right?

The other ones are much easier to do.

Those will be the stable equilibria.

So that's what we're going to be defining.

That's what nature gives you.

But what you're going to find too for the tracking problem,

there's some mathematical tricks.

And the next thing we do looks very, very similar.

And essentially what we're doing is if this is what nature gives you,

I can make other points in space equilibria with feedback.

I could make something hover and fight gravity and

that becomes a stabilized point, if I have enough fuel and thrust.

That's how your quad rotors work, right?

The little UAVs with cameras flying around.

There's no natural equilibrium that has it hovering over your house at 15 meters.

But your control made that work, and all of a sudden that became

an equilibrium, it hovers, you know?
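A sketch of that hover idea (the mass, gains, and altitude are all assumed illustrative numbers): gravity alone gives no equilibrium at 15 m, but a thrust law that cancels gravity plus feedback makes that point an equilibrium of the closed-loop system.

```python
import numpy as np

m, g = 1.0, 9.81      # mass [kg] and gravity [m/s^2] (assumed values)
z_ref = 15.0          # commanded hover altitude [m] (assumed)
k1, k2 = 4.0, 4.0     # feedback gains (assumed)

# x = [z, z_dot]; the thrust cancels gravity and adds PD feedback,
# so the closed loop gains an equilibrium at the commanded hover point.
def closed_loop(x):
    z, z_dot = x
    u = m * g - k1 * (z - z_ref) - k2 * z_dot   # thrust command
    return np.array([z_dot, u / m - g])

print(np.allclose(closed_loop(np.array([z_ref, 0.0])), 0.0))  # True
```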