0:04

Before I jump into this, just time-wise, let me do one example by hand where I want to lay out how to actually implement a control. In the last homework you had to simulate stuff; you've been working on how to simulate the kinematic differential equations. Now you're adding the kinetic differential equations. And in control, you still need the full set, you know, the attitude and the rates. Six states, not six degrees of freedom. Rotational motion, but we have to add control.

So, any simulation, let's start this out. We're gonna have, yeah, why not, we'll do it in green. It's not St. Patrick's Day, but I'm sure the Irish are happy. OK. So we're gonna say sigma, and I'm gonna be explicit: sigma_B/N and omega_B/N.

If you're coding a tracking problem, I highly recommend your variables aren't just sigma in your code, because you'll get confused. I know I get confused, even after all these years: which sigma is this? So: sigma of B relative to N, sigma of B relative to R, omega of R relative to N. You know, be explicit. So here, I'm just gonna say these are the states. This is our typical sigma we've had before.

And therefore, our differential equations for this are gonna be sigma_dot_B/N = 1/4 [B(sigma_B/N)] omega_B/N, right? The B matrix times omega_B/N. And the other one is omega_dot_B/N = [I]^-1 (-[omega tilde][I] omega + u + L): the inertia inverted times minus omega tilde I omega, plus the control torque u, plus L if you add disturbances to it, right? Those are the differential equations you have to integrate. There are six of them now that you have to deal with.
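As a minimal sketch, those two differential equations might look like this in NumPy. The function names here are mine, not from the lecture, and the test inertia is made up:

```python
import numpy as np

def tilde(w):
    """Skew-symmetric cross-product matrix [w~]."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def B_mat(s):
    """MRP kinematics matrix [B(sigma)]."""
    s = np.asarray(s, dtype=float)
    return (1.0 - s @ s) * np.eye(3) + 2.0 * tilde(s) + 2.0 * np.outer(s, s)

def eom(x, u, I_b, L=np.zeros(3)):
    """xdot for state x = [sigma_B/N, omega_B/N], control u,
    body inertia I_b, and optional disturbance torque L."""
    sigma, omega = x[:3], x[3:]
    sigma_dot = 0.25 * B_mat(sigma) @ omega           # MRP kinematics
    omega_dot = np.linalg.solve(I_b, -tilde(omega) @ (I_b @ omega) + u + L)
    return np.concatenate((sigma_dot, omega_dot))
```

This is exactly the F function the integrator will call: states in, state rates out, with the control torque passed in as an input.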

So let's see how we're gonna simulate this in our setup. In the code, at some point, you have to say: what are my initial conditions? Which should be pretty straightforward. You put in your initial attitude relative to inertial, your initial rate relative to inertial. So good. Then we start a time loop.

I'm just gonna do a first-order integration. If you want to do Runge-Kutta, it's a few extra steps. It's easy to implement, it's just more stuff to write. So, right now, before I integrate, I actually need to compute my control solution, because this integration needs as an input the control torque that you're applying.

All right? To get the control -- the control we're gonna do is one of these functions. What was it: u = -K sigma_B/R - P omega_B/R, and then all the other terms, right? So I need my attitude relative to the reference, and I need my attitude rate relative to the reference, before I can compute the control.

So you're gonna have to find the control, and to find the control, we have to find the reference. So if you're doing reference tracking, what you're gonna have to do first is find R. That means you need the attitude of R relative to N, you need omega of R relative to N, and you need the angular acceleration of R relative to N, right?

So, however you generated the reference trajectory -- especially in project two, you're defining a frame -- you can maybe numerically differentiate it to get these rates. Maybe differentiate it twice, with enough history, and you get the feedforward. Now, in the project, you don't need the feedforward acceleration; it's just a PD control without feedforward, so that would do. But in your other code, if this is your elliptic orbit, you need to know your orbit rates and your orbit accelerations. That's what feeds forward into the control.
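One hedged way to sketch that "differentiate the reference history numerically" idea, using a made-up scalar reference angle history and NumPy's finite-difference `gradient` (a real tracking problem would do this per attitude coordinate):

```python
import numpy as np

# Hypothetical stored reference history: theta_r(t) = 0.1 * t^2.
# Central differences recover the rate and, applied twice, the
# feedforward acceleration -- exact here because theta_r is quadratic.
t = np.linspace(0.0, 10.0, 1001)
h = t[1] - t[0]
theta_r = 0.1 * t**2
omega_r = np.gradient(theta_r, h)        # reference rate, ~ 0.2 * t
omega_r_dot = np.gradient(omega_r, h)    # reference acceleration, ~ 0.2
```

The edge samples use one-sided differences and are less accurate, which is why you need "enough history" around the current time to differentiate twice cleanly.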

So you have some subroutine, probably, that at this time says: what is my reference state? Right, where should I be pointing, in some sense? Good. Once we have these, then you compute the control, right?

And make that maybe a subroutine, so you can go, "Hey, this is the control, and I'm doing integral feedback, I'm doing PD, I'm doing this nonlinear control" -- whatever control you have. That way, if you have a different control you want to apply, all you do is change one line and say, "Hey, no, use this control. Use this function." The reference generation is the same. The equations of motion are the same. This gives it a nice modular architecture.
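That modular control subroutine can be as small as this sketch -- the PD law from above with illustrative gains, and the feedforward and gyroscopic terms left out for brevity:

```python
import numpy as np

def pd_control(sigma_BR, omega_BR, K=5.0, P=10.0):
    """PD tracking control u = -K * sigma_B/R - P * omega_B/R.
    Gains K, P are made-up defaults; feedforward terms omitted."""
    return -K * np.asarray(sigma_BR) - P * np.asarray(omega_BR)
```

Swapping in a different control law then really is a one-line change at the call site: the reference generation and equations of motion don't care which function produced u.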

Now that we have this, then we can compute -- ugh, I can't write today -- there we go. In the Runge-Kutta scheme, the first derivative is just called K_1, I believe. Right, so I need to compute this. This is this F function, which depends on the states, but also on my control. So here you compute this one with the current states, the current control.

And if something is time-explicit in there -- let's say your dynamics model included time-dependent atmospheric drag -- you may have to throw in a time variable so it knows how to resolve that, and so forth. In our current problem, we don't have anything that's time-explicit, but if you want to make it general, you do this. So good. If we do this, though, this integrates it. If you do Runge-Kutta... let's do Runge-Kutta.

So then you do K_2, which is equal to this F function again, evaluated at X plus K_1 times h/2, I believe, and t plus h/2. It's the same control.

In your dynamics, we don't typically update the control as we're doing a Runge-Kutta time step. Because really, your control gets implemented digitally at a discrete frequency. You may be simulating your dynamics at a thousand hertz, one-millisecond time steps, but your control only gets updated at one hertz. So you can actually put logic in your code that says, "Hey, only every thousandth time step, update the control u." Right? So then you're really holding your control piecewise constant. And even in the simplest case, if you have a thousand-hertz integration and you compute the control every time step, you're still holding your control piecewise constant within the step.

And that's quite practical too, because, let's see, to get this control, it needs these states. If you took an estimation class, you'd have a whole routine that computes them. To get the control, you'd have to have your estimated biases and rates and all of this, and that doesn't happen at the sub-control intervals. That only gets updated once, right? So that's why you just hold your control u constant. That's all you have to do.
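A toy scalar sketch of that hold-it-constant logic, with made-up numbers (dynamics integrated at 1000 Hz, control recomputed only every 10th step, i.e. at 100 Hz):

```python
# Toy system x_dot = u with proportional feedback u = -2*x.
# The control is only recomputed at the update instants; in between
# it is held piecewise constant, exactly like a digital controller.
h = 0.001                    # 1 ms integration step
x, u = 1.0, 0.0
for n in range(2000):        # 2 seconds of simulated time
    if n % 10 == 0:          # control update instants (100 Hz)
        u = -2.0 * x
    x = x + h * u            # Euler step with u held constant
```

Each 10-step block multiplies x by exactly (1 - 10*h*2) = 0.98, so the state decays geometrically even though u is stale for 9 of every 10 integration steps.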

It's just an input to the routine, and then you do the rest of the dynamics like normal. So you do this: K_2, K_3, K_4. And then the next state, X at n plus 1, is equal to the current state plus all these K's -- you know, they go in here -- if you're doing a Runge-Kutta. If you're doing an Euler step, it's just the one step. So it's kind of easy, alright. And then you do that. And then that's it.

If you're doing MRPs, you still have to check: if the norm of the sigma part of X is greater than one, then switch to the shadow set, right, the other set. That should be happening outside of this integration.
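That after-the-step check can be a small helper like this sketch (the function name is mine; the mapping is the standard MRP shadow set sigma_S = -sigma / |sigma|^2):

```python
import numpy as np

def mrp_shadow_if_needed(x):
    """After an integration step: if |sigma| > 1, map the MRP part of
    the state vector x = [sigma, omega] to the shadow set."""
    x = np.array(x, dtype=float)
    s = x[:3]
    n2 = s @ s                 # |sigma|^2
    if n2 > 1.0:
        x[:3] = -s / n2        # shadow set, same physical attitude
    return x
```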

So up to now, if you've been running your code -- well, you had the kinematics first, integrating this; then we added the kinetics, still basically the same integrator. If you're doing attitude, you've probably already implemented this logic somewhere to switch the MRPs to the shadow set.

So to do these control homeworks, if you have nice code, it's gonna be really simple. All you have to do, if it's regulation: you don't even need this reference part. You just compute u, and then integrate forward and apply that. You just have to make sure your equations of motion include that u.

And if you have a disturbance -- if you're doing an integral-feedback kind of problem, an L, an unmodeled torque or some external disturbance -- you can throw it in as well. So it becomes really, really easy at this stage to implement that kind of a control.

If you're doing the current homework, where you deal with a spring-mass-damper with lots of stuff too, it's the same logic. At some point you have your reference X_R for that spring-mass-damper reference system, minus the actual state; that gives you the tracking errors. Then you compute the control, hold it piecewise constant, integrate forward a time step, and repeat. And that's it.
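A minimal end-to-end sketch of that loop for a scalar spring-mass-damper. All the numbers here (mass, stiffness, damping, gains, the constant reference) are made up for illustration, not taken from the homework:

```python
# Plant: m * x_ddot = u - k*x - c*v.  PD tracking control with a
# spring feedforward term, held constant over each Euler step.
m, k, c = 1.0, 1.0, 0.2                 # illustrative plant parameters
Kp, Kd = 4.0, 3.0                       # illustrative PD gains
h = 0.001                               # integration time step
x, v = 0.0, 0.0                         # actual state
for n in range(10000):                  # 10 seconds of simulated time
    x_r, v_r = 1.0, 0.0                 # reference state (constant here)
    e, e_dot = x - x_r, v - v_r         # tracking errors
    u = -Kp * e - Kd * e_dot + k * x_r  # control + spring feedforward
    a = (u - k * x - c * v) / m         # plant acceleration
    x, v = x + h * v, v + h * a         # Euler step, u held constant
```

With these gains the closed loop is well damped, so after ten seconds the state has settled onto the reference.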

That's how we would typically implement a numerical simulation of a dynamic system with a feedback control applied. And again, there's no estimation here; otherwise that would happen here somewhere. You'd gather the measurements, run them through a QUEST algorithm to take this stuff and figure out what's my heading, and maybe a Kalman filter to figure out what the rest of the stuff is, and that's what feeds into the control evaluation, you know. So...
