We discussed simple iteration in its Jacobi form; now I'm going to discuss an alternative known as Seidel's method, sometimes called the Gauss-Seidel method. Again, we're looking at the same linear system Ax = b, where A is the matrix on the left-hand side, x is a vector, and b is a vector too. For Seidel's iteration, what we do is split our matrix A into three parts: first we take the lower triangular part and call it L, then the main diagonal, call it D, and what's left is the upper triangular part, U. Then we rewrite our system identically in terms of these L, D, and U matrices. This is very similar to what we had for the Jacobi method.

Now let's see. In the Jacobi iteration, the diagonal part acts on the current iterate, while the off-diagonal parts are applied to the previous iterate. In Seidel's method, you make a different choice: both the diagonal and the lower triangular parts act on the current iterate, and only the upper triangular part of your matrix talks to the previous iterate. You certainly can do this; whether it's useful is a different story. But it's certainly a valid splitting, and it gives a possible iterative scheme.

Let's see how it works. In coordinates it looks like this. Here is my original matrix A: the upper triangular part is here, this is my lower triangular part, and then there is the diagonal. If I write it out in components, one iteration looks like this. I take x1, and the first equation gives it to me directly, because everything else there comes from the previous iteration. Then on the next step, I feed this value of x1 from the current iteration into the second equation and directly get x2; then I take this x2, together with x1, and feed them into the third equation, and so forth. In the third equation I will only have x3 left to solve for, and so on. So in general, each iteration sweeps down the matrix, and at each step I use the values of the earlier components of my x vector from the current iteration. Notice the difference from the Jacobi method: there I was using the previous iterate for all components, whereas here I reuse the components already updated at the current step. That's basically the main difference.

What about convergence? This is an iteration; can we prove something about its convergence? To make progress, we rewrite the original system in the same form that was convenient for simple iteration, again introducing a matrix B; in this case B is again the off-diagonal part of A multiplied by the inverse of the diagonal part. Then you can formulate a theorem which looks like this: if the 1-norm or the infinity norm of this B matrix is less than one, then Seidel iterations converge at the rate of a geometric series, and the common ratio of this series is smaller than the norm of B. This theorem is not very hard to prove, even though the proof is a bit tedious, so I'll skip it, and instead formulate something closer to the sufficient condition for convergence that we had for the Jacobi method.

Here we define two matrices, which I'll call BL and BU; they are just the lower and upper triangular parts of the original matrix scaled by the inverse of the diagonal. If the sum of their norms is less than one, then Seidel iterations converge, and we even get error bounds in terms of a quantity q, which is a certain combination of the norms of these BL and BU matrices.
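To make the sweep concrete, here is a minimal Python/NumPy sketch of the Seidel update described above; the function name, the toy matrix, and the tolerance are my own illustrative choices, not part of the lecture.

```python
import numpy as np

def seidel_solve(A, b, x0, n_iter=100, tol=1e-10):
    """Seidel (Gauss-Seidel) iteration: sweep down the rows, reusing the
    components of x that were already updated within the current sweep."""
    n = len(b)
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x_prev = x.copy()
        for i in range(n):
            s_lower = A[i, :i] @ x[:i]            # components from the current sweep
            s_upper = A[i, i+1:] @ x_prev[i+1:]   # components from the previous iteration
            x[i] = (b[i] - s_lower - s_upper) / A[i, i]
        if np.linalg.norm(x - x_prev, ord=np.inf) < tol:
            break
    return x

# Toy diagonally dominant system (illustrative values only).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = seidel_solve(A, b, x0=np.zeros(3))
print(x, A @ x - b)   # the residual should be close to zero
```

Replacing x[:i] with x_prev[:i] in the lower sum would turn this back into the Jacobi iteration, which is exactly the difference discussed above.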
This theorem is really a slightly weaker version of the previous one. Notice that on the previous slide the condition was on B, which is essentially the sum of BL and BU. So what that theorem requires is that the norm of the sum is smaller than one, and obviously if the sum of the norms is smaller than one, then the norm of the sum is certainly less than one; the converse is not true. So this theorem rests on a slightly stronger assumption and is therefore a slightly weaker result, but it's not too difficult to prove, and it actually makes a good self-test exercise. The proof of this theorem S2 is very similar to what we did for the Jacobi method under a similar condition, so it would be a nice exercise at this stage.

So that's a reasonably general sufficient condition for the convergence of Seidel iterations. You can make more progress if you assume something about your system. For example, if your original matrix is symmetric and positive definite, then you can prove that Seidel's iterations converge, and they converge at the rate of a geometric series, without requiring anything about the norm. So essentially Seidel's method is very good and very performant for symmetric positive definite matrices, and that's a case you find reasonably often in practice.

These are sufficient conditions; now what about a necessary and sufficient condition? Again, I will only state the result, I will not prove it. The way to go is to rewrite your iteration a little bit: this time you write Seidel's iteration as a simple iteration, with an iteration matrix which is again a combination of D, L, and U. For simple iterations we know that they converge if and only if the spectral radius of the iteration matrix is smaller than one. So that result just applies here; no need to prove anything really, we've already proven it.

So we have a sufficient condition, we have a necessary and sufficient condition, and we have a very nice result for symmetric positive definite matrices. The only thing which would be helpful at this stage is some intuition. If I just summarize, the choice between the Jacobi and Seidel methods boils down to the structure of your original matrix. Loosely speaking, if you have a system whose matrix is diagonally dominant or in some sense close to diagonal, then you should choose the Jacobi method. If your matrix is close to triangular, or symmetric, or close to symmetric, then Seidel's iteration would be a good choice.
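As a numerical illustration of the necessary and sufficient condition, here is a small sketch (again my own, with an illustrative symmetric positive definite matrix) that builds the Jacobi and Seidel iteration matrices from D, L, and U and compares their spectral radii; each iteration converges precisely when its spectral radius is below one.

```python
import numpy as np

def iteration_spectral_radii(A):
    """Spectral radii of the Jacobi and Seidel iteration matrices for A = L + D + U."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    B_jacobi = -np.linalg.solve(D, L + U)   # -D^{-1} (L + U)
    B_seidel = -np.linalg.solve(D + L, U)   # -(D + L)^{-1} U
    rho = lambda M: max(abs(np.linalg.eigvals(M)))
    return rho(B_jacobi), rho(B_seidel)

# Symmetric positive definite example (illustrative): Seidel is guaranteed to converge.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])
print(iteration_spectral_radii(A))   # both values come out below one here
```

For matrices like this, the Seidel radius often turns out smaller than the Jacobi one, which matches the intuition above that Seidel's method is a good choice for symmetric positive definite systems.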