We begin our inquiry into mechanical systems with a reminder of Newton's laws. The time-honored introduction to Newton's laws is Feynman's Lectures on Physics, an amazing series of lectures, and I refer you to that book; it's now online, it's in hard copy, and you can get it in many different shapes and forms. We're remembering that Newton said that forces cause accelerations, and that forces are proportional to accelerations through the constant of proportionality called mass. To measure anything, we need a frame of reference, so we're going to put a frame of reference in this figure. Once we have a frame of reference, we can measure position, chi in this picture, and we can measure velocity, which I'll label v. Of course, the velocity is the time derivative of the position, and the acceleration, which I'm labeling alpha, is the time derivative of velocity. You remember this from calculus: if you have a constant acceleration, the horizontal line in this figure, then your velocity is a constant-slope line, and the position integrates to a parabolic curve. Please review your calculus in case any of this is confusing or unfamiliar to you. Our notation will be: when masses or anything else moves, we'll write chi of t, and when we want to think about the variation in time, we'll write chi dot, that is, a little dot over the Greek variable chi, to denote the time derivative of the motion chi of t. So this equation says that v(t), the velocity at any instant of time t, is defined to be, that's what this funny symbol means, chi dot of t, which is the time derivative, d by dt, of the position chi of t. When masses move, they have velocity, and they have acceleration, and you remember that the acceleration alpha of t is the first derivative of v. Therefore it's the second derivative, d squared by dt squared, of chi, or chi double dot, at each instant t. 
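The lecture itself has no code, but the constant-acceleration picture is easy to check numerically. Here is a minimal Python sketch (the variable names, step size, and acceleration value are my own illustrative choices, not from the course): position is a parabola, and its finite-difference slope recovers the linearly growing velocity.

```python
# Numerical sketch of the calculus facts above: under constant acceleration,
# velocity grows linearly and position is a parabola. All values here are
# made up for illustration.
a = 2.0    # constant acceleration
dt = 1e-4  # time step for the finite-difference derivative

ts = [k * dt for k in range(100001)]           # 0 .. 10 seconds
chi = [0.5 * a * t**2 for t in ts]             # position: parabolic curve
v_fd = [(chi[k + 1] - chi[k]) / dt             # chi-dot by finite differences
        for k in range(len(ts) - 1)]

# the finite-difference velocity matches v(t) = a * t at t = 5 seconds
print(abs(v_fd[50000] - a * ts[50000]) < 1e-3)  # → True
```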
The calculus version of Newton's law now tells us not f equals ma, but f equals m times d squared chi by dt squared, or m chi double dot, balanced by the force f. Let's introduce to our moving particle an acceleration that's impeded by the force of friction. Friction is most commonly modeled as a viscous force that's proportional to the velocity and in opposition to the direction of motion. This equation, m chi double dot = -b chi dot, expresses all of those ideas in one neat form. It says that Newton's ma, the inertial force of acceleration, is exactly balanced by the viscous drag introduced by the friction that's slowing down the motion in proportion to its velocity. Now, if we take off our physicist's hat and put on our applied mathematician's hat, we realize that this is a differential equation. Namely, if we recast chi dot as v, and chi double dot as v dot, we realize that v dot = lambda v is an equivalent formulation of Newton's law with viscous friction, where lambda, which we'll come to think of as the time constant, is given by the negative ratio of the viscous damping coefficient to the mass, lambda = -b/m. I hope that you've had a little bit of an introduction to ordinary differential equations. If you have, you'll know that when faced with a linear time-invariant differential equation, so-called LTI, we can immediately deduce that our friend the exponential function of time must give a solution, as long as we scale time by the time constant lambda. So let's check: d by dt of e to the lambda t is d by dt of this infinite series. Apply the derivative operator to each term of the series: d by dt of the constant 1 is 0. Bring d by dt in under the summation sign and take d by dt of the kth term. You remember from calculus that k comes down, k minus 1 is left as the exponent, and k factorial is reduced to k minus 1 factorial by cancellation with k. 
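The term-by-term series manipulation above can be sanity-checked numerically. This is a small Python sketch of my own (the values of lambda, t, and the step size are arbitrary): a central finite difference of e to the lambda t agrees with lambda times e to the lambda t.

```python
import math

# Check of the series result above: d/dt e^(lambda t) = lambda * e^(lambda t).
# lam, t, and h are illustrative values, not from the lecture.
lam, t, h = -0.5, 1.3, 1e-6

def f(s):
    """The exponential solution e^(lambda s)."""
    return math.exp(lam * s)

deriv_fd = (f(t + h) - f(t - h)) / (2 * h)  # central-difference derivative
print(abs(deriv_fd - lam * f(t)) < 1e-6)    # → True
```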
And so we get lambda times that same infinite series, and we realize that d by dt of e to the lambda t is just lambda times that same function, e to the lambda t. We realize now that a solution of this ODE, for any initial condition, is given by v at time t from initial condition v0, defined to be e to the lambda t times the initial velocity condition v0. Again, I am assuming in this module that you've had a course in ordinary differential equations, perhaps just a linear ordinary differential equations course, and I'm hoping that you will take some time to review these ideas if they seem unfamiliar or foreign to you. Let's move ahead. Let's add to our acceleration balanced by viscous friction an acceleration balanced as well by a compliant force. This would be the spring force due to the stretching of a spring. You'll recall from your physics classes that the spring, or compliance, law asserts a force in opposition to the direction of motion, in proportion to the amount stretched. That says that the spring term is negative k times chi, and we have the differential equation m chi double dot equals negative b chi dot minus k chi. Now what we'd like to do is think about this as a first-order system in two dimensions rather than as a second-order system in one dimension. Let's say that again. Let's look at this new equation; it's a vector definition. I'm going to think about the two-dimensional vector, boldface x, defined to have two entries, x sub 1 and x sub 2. I'm defining x1 to be the position variable chi from this differential equation, and I'm defining x2 to be the velocity variable v from the differential equation. I can now look at the time derivative of each entry of the vector x, and I'm going to denote that by putting a dot over the vector quantity x. What that means is that we should be thinking about the vector of derivatives. 
When we write down the vector of derivatives, we realize that d by dt of x1 is just x2. How come? Because d by dt of chi is just v. When we write down the derivative of x2, we get back the differential equation. Namely, we have to go back to the differential equation to realize that chi double dot is the right-hand side of the original ODE divided by m, and that's what's been written here. So equation one shows that the second-order scalar differential equation can be replaced by first-order, two-dimensional, linear time-invariant, or LTI, dynamics. Now, I'm going to rewrite the right-hand side of that first-order two-dimensional equation in matrix-vector form. Again, I'm assuming you've had a bit of an introduction to linear algebra, so you'll know how to think about this right-hand side as the product of a constant matrix, capital A, which I've written out on the side here, times the original vector x whose entries are x1 and x2. This is how we're going to be thinking about our dynamical systems: not as second-order one-dimensional, but as first-order two-dimensional, or, as we begin to add degrees of freedom, first-order 2n-dimensional, where n is the number of degrees of freedom. Don't worry if degrees of freedom doesn't mean anything to you yet; we're going to be reviewing these ideas carefully in weeks two and three. Let's move ahead. Now we have a two-dimensional LTI system, and we have to think about solving it. How will we solve it? We'll use diagonalization. Here I refer you to the excellent textbook by Hirsch, Smale, and Devaney, which has been published in a few editions, the most recent in 2004. You'll remember from linear algebra that matrices can generally be diagonalized; even when they can't be diagonalized directly, there will be a block-diagonal form. I'm going to ignore those niceties and pretend that any two-by-two matrix can be diagonalized by this change of basis through the eigenvector matrix E. 
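The state-space rewrite above can be sketched in a few lines of Python. This is my own illustration, not course code; the parameter values m, b, k are arbitrary. The matrix A encodes the two facts from the slide: the derivative of x1 is x2, and the derivative of x2 is the original ODE's right-hand side divided by m.

```python
# A minimal sketch of the rewrite: m*chi'' = -b*chi' - k*chi becomes
# x-dot = A x, with state x = (x1, x2) = (position chi, velocity v).
# Parameter values are illustrative.
m, b, k = 1.0, 0.4, 1.0

A = [[0.0,    1.0],    # d/dt x1 = x2
     [-k/m, -b/m]]     # d/dt x2 = (-k*x1 - b*x2) / m

def xdot(x):
    """The matrix-vector product A @ x, written out by hand."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x0 = [1.0, 0.0]   # initial position 1, initial velocity 0
print(xdot(x0))   # → [0.0, -1.0]: velocity is 0, acceleration is -k/m
```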
Once again, you should review these ideas in your linear algebra text, or they are reviewed very carefully in the Hirsch, Smale, and Devaney textbook that I'm citing here on this page. Carrying on, in the new coordinate system, if we think about exponentiating a matrix, we realize, by linear algebra and calculus, that the time derivative of the matrix exponential e to the At is A times e to the At, just as in the scalar case. Please check this algebra; I will not check it for you in live real time, but there will be some exercises for you to work through this series of manipulations, to make sure that you understand how to show that the solution of the vector linear time-invariant system has the same form as the solution of the scalar linear time-invariant system that we talked about on the previous slide. Namely, through each initial condition, the vector x0. The vector x0 means: choose your favorite initial position and initial velocity, and think about the future trajectory over time on the plane of positions and velocities. What you will see from this differential equation solution is that you have a matrix exponential, e to the At, multiplying that initial condition, and you get a family of trajectories. Here is the crucial shift of attention from ODE theory into what's called dynamical systems theory. Let's look at these different panels. In each of these panels I plot for you, first, on the left-hand side, the time trajectory of this solution, broken out as a position trajectory over time on the top and a velocity trajectory over time on the bottom. You can read off the initial condition at time zero, and you can see the trajectory as two traces over time. Let's instead plot those two traces on two-dimensional graph paper, as I'm doing on the right, where you start with the initial condition in position at one and the initial condition in velocity at zero. And here's the initial condition on this phase portrait, as we've come to call it. 
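The matrix-exponential solution x(t) = e to the At times x0 can also be computed directly from the power series, mirroring the scalar series on the earlier slide. This is a hedged sketch of my own in plain Python (a truncated series; in practice you would use a library routine, and the parameter values are again illustrative).

```python
# Sketch of the solution x(t) = e^(A t) x0 for the damped oscillator,
# using a truncated power series I + M + M^2/2! + ... for the matrix
# exponential. Parameters are illustrative, not from the lecture.
m, b, k = 1.0, 0.4, 1.0
A = [[0.0, 1.0], [-k/m, -b/m]]

def expm2(M, terms=60):
    """Truncated exponential series for a 2x2 matrix M."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at identity
    term = [[1.0, 0.0], [0.0, 1.0]]     # current series term M^n / n!
    for n in range(1, terms):
        term = [[sum(term[i][r] * M[r][j] for r in range(2)) / n
                 for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

t = 1.0
E = expm2([[A[i][j] * t for j in range(2)] for i in range(2)])  # e^(A t)
x0 = [1.0, 0.0]                    # initial position 1, initial velocity 0
x_t = [E[0][0] * x0[0] + E[0][1] * x0[1],
       E[1][0] * x0[0] + E[1][1] * x0[1]]
print(x_t)   # position and velocity one second into the damped oscillation
```

Repeating this for a grid of times t traces out exactly the phase-plane curves shown in the panels.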
And over time, these motions converge down to zero. They both converge down to zero, and you can see what the curve looks like as time goes by on the (x, x-dot) plane. Each of these four panels has a different initial condition, and those different initial conditions give rise to time trajectories, which we want to think of as spatial curves in the phase plane of x and x-dot. So much so that we want to temporarily ignore the time aspects of the trajectory completely, and ask ourselves only what happens asymptotically. And you can see that when I merge all of these phase portrait plots from the right-hand sides of the four panels, I get this characteristic logarithmic spiral that shows solutions from every different initial condition spiraling down into the origin. The differential equations point of view is to think about solutions to initial condition problems; that is, think about the time trajectory through initial conditions. The dynamical systems point of view is to think about orbits as spatial curves, and orbits over time as transformations of space. Let's follow these ideas further. Once I have a notion of phase space, I can begin to think about dynamics in geometric terms. In this slide, I'm going to try to interpret total mechanical energy as a geometric norm. Once we have the phase space, we can begin to think about reinterpreting mechanics and dynamics in geometric terms, which we will do as follows. You'll remember total energy from your physics class. Total energy is the sum of the kinetic energy, which for a single point mass is just one-half m times the square of its velocity, plus the potential energy. In the example we've been using, the only potential energy comes from the spring, and you'll recall that the potential energy stored in the spring is quadratic in its extension, so that the potential energy, which we'll label phi sub S of chi, is just one-half k chi squared. 
This is the amount of energy stored in the spring as it gets stretched from the origin to a distance chi. Let's take the sum of these two functions: kappa of x2, and remember what x2 is, x2 is d by dt of chi, which is v, plus phi sub S of x1, and you remember what x1 is, x1 is just our old friend, the position chi. I sum these up and I get a scalar-valued function, which I'll label eta in this equation. Eta sub HO means the total energy of the harmonic oscillator, which is a function of x1 and x2, where x2 shows up as the kinetic energy variable and x1 shows up as the potential energy variable. Let's look at this sum in more geometric terms. I'm plotting the level curves of eta sub HO, the total energy of the harmonic oscillator, where bluer and darker are lower energy levels, and oranger and brighter are higher energy levels. What you'll see is that if I select the parameters to be appropriately balanced, namely if the square root of m and the square root of k have the same magnitude, then the level curves of this total energy function are concentric circles. Let me write that down algebraically: you can see that I've got this squared sum, and if I think about this squared sum as a squared geometric norm, I realize that I should think about total energy as a kind of norm. When I think about total energy in these geometric ways, it becomes very interesting to consider, in the geometric phase space, what the relationship is between the time trajectories and these level curves. Below, I've plotted in bright red the trajectories in forward time through the initial conditions. What you can see visually in this picture is that, from any initial condition, if you follow the arrows, energy is decreasing, getting bluer, and bluer, and bluer. That is, the total energy norm, as we're interpreting this norm, decreases over time. What we would like to realize is that this is a general idea. 
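The picture of energy decreasing along trajectories can be reproduced with a crude simulation. Here is a small Python sketch of my own (Euler integration with made-up parameter values, not course code): the total energy eta sub HO, evaluated along the simulated trajectory, drops over time.

```python
# Integrate the damped oscillator x1' = x2, x2' = -(b/m)x2 - (k/m)x1 with
# a crude Euler step, and watch the total energy
# eta_HO = (1/2) m x2^2 + (1/2) k x1^2 decrease along the motion.
# Parameter values are made up for illustration.
m, b, k = 1.0, 0.4, 1.0
dt = 1e-3
x1, x2 = 1.0, 0.0           # initial position and velocity

def eta(x1, x2):
    """Total energy: kinetic in x2 plus spring potential in x1."""
    return 0.5 * m * x2**2 + 0.5 * k * x1**2

energies = [eta(x1, x2)]
for _ in range(5000):       # five seconds of motion
    x1, x2 = (x1 + dt * x2,
              x2 + dt * (-(b / m) * x2 - (k / m) * x1))
    energies.append(eta(x1, x2))

print(energies[0] > energies[-1])   # → True: the norm shrinks over time
```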
This general idea was recognized many times in the past, but it was articulated most clearly by Lord Kelvin in the 1880s, who observed that when a mechanical system is faced with a dissipative force, the total energy must decrease. We are now going to take Lord Kelvin's idea and look at the rate of change of energy along the motions. And we're going to realize that, geometrically, because the total energy serves as a norm, we can interpret Lord Kelvin's idea as defining a basin where particles are collected at the low-energy state. That's the idea being presented in this complicated figure over here, which tracks the energy along the motions over time. What am I plotting? I'm plotting the composition of the total energy eta sub HO of the harmonic oscillator with the position at any instant of time through some initial condition x0, where all the different initial conditions are seen as the initial points of these red curves on the plane. If we look at the energy of the points along the curves, we get the blue shadows up on the energy basin, and the blue arrows show that Newton's laws are piercing from higher energy into lower energy states. Hence, any particle that started at any of these higher energy states will be forced to lose its energy and asymptotically wind up at the origin. This can be checked algebraically using calculus, and there will be some questions for you to check this, to make sure that you can do the computation that's being shown on the slide, where I use the chain rule from calculus. Let's just talk that through quickly, and you'll check it on your own. d by dt of the total energy is d by dt of the time function, which is the composition of the space function eta with the time function x of t. The chain rule reminds me that d by dt of a composition is the spatial derivative D of eta, evaluated at x, times d by dt of x of t. 
Of course, I'm not using one-dimensional calculus here, I'm using multivariable calculus, and what I mean by this big capital D is all the possible partial derivatives of the scalar-valued function. Namely, it's got the derivative with respect to x1, and the derivative with respect to x1 of eta only sees phi sub S; the derivative with respect to x2 of eta only sees kappa, because kappa depends not at all on x1, and phi depends not at all on x2. Please go back and check, when you do the exercises, to make sure that you can follow this computation. Let's also remember that the trajectory through the initial condition has to satisfy the original differential equation, so that d by dt at any point along this trajectory x of t has to be equal to A times x of t. Now, let me evaluate the spatial derivatives of the total energy on the left against the matrix product on the right. Simplifying with linear algebra, what I get is -b times x2 squared at all times t. That says that d by dt of eta, the variation of the total energy, can never be positive. This is the algebraic proof of Lord Kelvin's observation that total energy in the face of viscous damping must always decrease. From our point of view, we're going to interpret that geometrically as defining a basin. The time derivative eta dot is in itself an important physical concept: it's the power, for the rate of change of energy is called power in physics. The fact that we've shown it is never positive gives rise to this basin. This nonpositive function of space specifies the rate of energy loss to the damper, and it creates the energy basin that we're drawing in the top figure, originally with blue shadows of the phase plane, those red curves. When we take the shadows away, we just look at the field vectors interacting with the surface of the energy, and we see that we don't even need to solve the differential equation to understand that energy must be decreasing. 
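For reference, the slide's chain-rule computation can be written out in full. Using the spatial derivative of the total energy and the differential equation x-dot equals A x, the calculation goes:

```latex
\frac{d}{dt}\,\eta_{HO}(x(t))
  = D\eta_{HO}(x)\,\dot{x}
  = \begin{bmatrix} k\,x_1 & m\,x_2 \end{bmatrix}
    \begin{bmatrix} x_2 \\ -\tfrac{k}{m}\,x_1 - \tfrac{b}{m}\,x_2 \end{bmatrix}
  = k\,x_1 x_2 - k\,x_1 x_2 - b\,x_2^2
  = -\,b\,x_2^2 \;\le\; 0 .
```

The spring and inertial cross terms cancel exactly, leaving only the dissipation term, which is never positive.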
This is a key moment: it's the transformation from what you learned in ODE class into the dynamical systems point of view, which is going to be crucial for us when we talk about nonlinear versions of these ideas. The motion of this system has decayed down this basin to zero energy because the power is never positive. And we can see that the power is never positive by taking the gradient of the total energy and pairing it with the vector field, producing these piercing blue arrows shown in this picture. Please make sure you understand these ideas and work through some of the exercises before going on.