Hello again. This is the last segment of a rather long week, I have to admit, but it has hopefully been an informative one. At the risk of stating the obvious, improved restoration results can be obtained when the restoration filter takes the local content of the image into account. The filter, in other words, is not content agnostic anymore. This should make sense conceptually, but it is also strongly supported by experimental evidence. So, based for example on the local spatial activity, a different restoration filter is applied to each and every pixel.

In this segment, we describe one way to introduce this spatial adaptivity of the restoration filter with the use of weighted norms. We discuss one specific way to adjust these weights based on properties of the human visual system. We derive spatially adaptive restoration algorithms in terms of their iterative implementations, since direct solutions, for example in the frequency domain, are not feasible anymore, but more importantly, because the spatial activity of the image needs to be computed from the available data. Since such data are noisy and blurred, it makes sense conceptually to estimate the spatial activity at each iteration step based on the partially restored image. We also include in this segment an example of the effectiveness of the positivity constraint in restoring an impulsive signal. And we address the problem of the so-called ringing artifacts, which appear in restored images, especially with certain types of blurs, such as the blur due to motion between the camera and the scene.

Developing image processing algorithms which are not content agnostic, but instead take the content of the image into account, and therefore perform a different filtering operation per pixel, is of the utmost importance in image processing. It has been shown that such filters outperform non-spatially-adaptive filters in all applications in which they have been considered. We actually showed a spatially adaptive noise smoothing filter when we talked about enhancement, and we would like to do the same here when we are talking about image restoration.

So, one way to introduce spatial adaptivity for the constrained least-squares filter is to consider the function Φ(x), whose roots we want to find, equal to what is shown here. I have the fidelity-to-the-data term again and the smoothness constraint that we talked about, but now these norms are weighted by a W1 weight and a W2 weight. So let us see first how the weights should be interpreted. If I have a weighted norm ||x||_W^2, this is simply x^T W^T W x, and it can also be written as the sum over the, let us say, N elements of the vector, sum_{i=1}^{N} w_i^2 x_i^2. Here W is a diagonal matrix with entries w_1, w_2, ..., w_N, in this particular example. So we see that the weights assign different importance to the elements of x. Now when I compute the norm, the contribution of each element of x is not the same; it is weighted by the value of w_i, and the w_i's are all non-negative.

With this definition, if you wish, of the weighted norm, we should be able to find the gradient of each of the terms inside the parentheses; it is something we did before, without the weights, a couple of times already. If we do so, the successive-approximations-based iteration is shown here. So this is the iterative algorithm I am now performing.
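To make this concrete, here is a minimal sketch in Python of the weighted norm and of one step of the weighted CLS iteration. The function names, the dense-matrix formulation, and the step size beta are illustrative assumptions; a practical implementation would replace the matrix products with convolutions.

```python
import numpy as np

def weighted_norm_sq(x, w):
    # ||x||_W^2 = x^T W^T W x with W = diag(w), i.e. sum_i w_i^2 * x_i^2
    return np.sum((w * x) ** 2)

def adaptive_cls_step(f, y, H, C, W1, W2, alpha, beta):
    # One successive-approximations step for the weighted functional
    #   ||y - H f||_{W1}^2 + alpha * ||C f||_{W2}^2
    grad_fidelity = H.T @ (W1.T @ (W1 @ (y - H @ f)))
    grad_smoothness = C.T @ (W2.T @ (W2 @ (C @ f)))
    return f + beta * (grad_fidelity - alpha * grad_smoothness)
```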
If W1 = W2 = I, the identity, then this is exactly the constrained least-squares iteration that we have talked about. I should comment here that this iteration cannot be taken to the discrete frequency domain, even when H and C are block circulant, simply because the weight matrices W1 and W2 are not block circulant. So, in implementing this iteration, I could either implement it entirely in the spatial domain, or alternate between the spatial and frequency domains: the convolutions are performed in the frequency domain, and then the result is taken back to the spatial domain to apply the weights, and we go back and forth that way.

Regarding choices of these weights, one possible choice is shown here, drawing from experiments that were done to demonstrate the masking properties of the human visual system. According to it, high-frequency information in the image masks high-frequency noise. Put differently, the noise is not as visible at the edges as in the flat regions of an image. Therefore, when performing restoration, we can allow noise to go through at the edges. The way to accomplish this is by setting the entries of the weight matrix W2 very small, close to 0, at the edge locations, and allowing them to be larger in the flat regions. If w is small, then we disable the smoothness constraint, and that is what we want to do at the edges. This visibility matrix is calculated according to a formula like this: it is made inversely proportional to the local variance of the image, sigma squared of f. So at an edge the local variance is high, therefore the visibility V is small, therefore w is small, therefore the smoothness term C f is disabled. We can then choose W1 = 1 - W2, because we want that term to do the reverse: we want to enforce the deconvolution at the edges, and not enforce it as much in the flat regions. Actually, if I have a perfectly flat part of the image, then blurring it will not change anything; it will be exactly the same signal.

This is just one choice of W1 and W2. People have experimented with all kinds of other choices, but the concept, when it comes to the deconvolution we are discussing here, is again to not perform smoothing at the edges. Sharp edges will therefore be restored, and they will be noisy, but that is fine; the noise will not be visible there, while we smooth out the noise in the flat regions.

Now, the question is how we calculate V here, that is, the local activity, when all we have available is y, a noisy and blurred image. We can work with y, try to remove the noise, and then find the local activity. Or, alternatively, we can update these matrices as the iteration progresses. In other words, we can make these matrices functions of k, so I could put a k index on all the weights. All this means is that at each iteration I update my estimate of the local variance from the partially restored image, then the visibility function, and then W1 and W2, whatever the formula is.
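As a sketch of how such weights might be computed: the lecture only describes the visibility as inversely proportional to the local variance, so the 1/(1 + theta * variance) form, the window size, and the parameter theta below are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def visibility_weights(f, theta=1.0, win=5):
    # Local variance via moving averages: var = E[f^2] - (E[f])^2
    mean = uniform_filter(f, size=win)
    var = np.maximum(uniform_filter(f * f, size=win) - mean ** 2, 0.0)
    w2 = 1.0 / (1.0 + theta * var)  # small at edges: smoothness disabled there
    w1 = 1.0 - w2                   # large at edges: deconvolution enforced there
    return w1, w2
```

In the adaptive iteration, these weights would be recomputed from the partially restored image at each step.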
So let us see now how this filter performs in restoring an image. Here is one implementation of the spatially adaptive CLS filter that I just described. This is the noisy blurred image we have been working with, this is the adaptively restored image, and this is the CLS image; both restorations are implemented in an iterative fashion. By comparing the two, we see that there is a fine control of the tradeoff between noise amplification and sharpness in the adaptively restored image: the edges are considerably well restored, while the noise has been suppressed. In the CLS image, we do not have this control; the edges are sharp, but there is a lot of noise amplification.

Here we show the visibility function, computed by finding the local variance in the image. It acts like an edge detector. Black denotes a small value, so at the edges the weight for the smoothness term is small and the smoothness term is disabled, while it takes high values, white values, in the flat regions. And here, finally, we show the absolute difference between the two restorations, the spatially adaptive and the non-spatially adaptive. We see that they differ at the edges, which is also something we can confirm visually. But there is also a lot of noise in the difference, which is the noise we see amplified in the CLS restoration. Both the visibility function and this difference are mapped linearly to the 32-255 range for visualization purposes.

So the spatially adaptive filter is another powerful framework, you might say, but its success depends on the appropriate choice of those W1, W2 matrices. This can be done once at the beginning, or they can be updated as the iteration progresses. Of course, there is the issue of proving the convergence of the algorithm when the weights are updated, but there is work in that direction as well: one linearizes the resulting non-linear function and optimizes, and some convergence results exist.

We want to demonstrate here, with a simple example, that the positivity constraint we mentioned earlier can be a powerful constraint when it comes to restoration. Here is a toy one-dimensional signal; it consists of three impulses. It is blurred by the 1D motion blur over 8 samples that we have been using, and this is the observation. Clearly, since these two impulses are close together, they are not distinguishable anymore after blurring. Also, the values here are small, because the motion blur is normalized. If I run the iterative least-squares algorithm with the positivity constraint, which means that at each iteration step the negative values of the restored signal are set to zero (I clip them), then I am able to obtain a perfect restoration, a perfect recovery of the three impulses, as shown here. If I run the iterative least squares without the positivity constraint, then this is the result we obtain: the three impulses are still picked out, but their heights are not correct, and clearly there is a lot of activity in the rest of the signal, which should have been equal to zero. It should also be clear that applying the positivity constraint at the end, after the algorithm converges to a result like this, is not the same as applying it at each iteration step: clearly, if I set the negative values of this signal equal to zero, I am not going to obtain the perfectly restored signal.
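Here is a minimal sketch of this toy experiment. The signal length, impulse positions, and heights are assumptions (the lecture does not specify them); the blur is the normalized 8-sample motion blur, implemented circulantly for simplicity.

```python
import numpy as np

n = 64
f = np.zeros(n)
f[[20, 24, 40]] = [1.0, 0.8, 0.6]   # three impulses; positions and heights assumed

h = np.zeros(n)
h[:8] = 1.0 / 8.0                   # normalized 1D motion blur over 8 samples
H = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])  # circulant
y = H @ f                           # blurred observation

x = np.zeros(n)
for _ in range(2000):
    x = x + H.T @ (y - H @ x)       # iterative least-squares (Landweber) step
    x = np.maximum(x, 0.0)          # positivity: clip negative values each iteration
```

Dropping the clipping line gives the unconstrained iteration, which, as in the lecture's example, leaves spurious activity where the signal should be zero.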
You recall that in a number of the restoration examples we showed, these so-called ringing artifacts are present. A sharp edge here propagates, as shown here for example, so you see a couple of artificial, non-existing edges. If we look at these images and measure the distance between these replications of the edge, it is equal to eight pixels, which is the length of the degradation, the motion blur. So, as expected, the ringing artifacts are a function of the impulse response of the degradation system. Here is a simple analysis of what takes place.

h(i, j) is the impulse response of the degradation system and r(i, j) the impulse response of the restoration filter. Their convolution we call s_all(i, j), and in essence s_all is convolved with the original image to give us the restored image. If things were ideal, then r would be the convolutional inverse of h, and therefore s_all would be just a delta. If we take this to the frequency domain, then S_all(u, v) should equal 1. So the ringing artifacts are due to the deviations of s_all from the delta, as we will see in a simple example next.

For the degradation I have been using throughout this presentation, the 1D motion blur over 8 pixels, we show here the frequency response of the overall system when direct inversion is used. I match in the restoration filter exactly the inverse of the frequency response of the degradation at all frequencies, except at the locations of its zeros, since the inverse filter is set to zero at those locations as well; in other words, I perform a generalized inversion. So this frequency response is exactly 1 at all frequencies except the exact zeros. If I take this signal now to the spatial domain, I obtain this: a delta, almost equal to 1, here, but we also see this train of smaller impulses spaced 8 pixels apart. As already mentioned, it is this impulse response that will be convolved with the original image to give us an estimate of the original image. So if I am 8 pixels away from an edge, I will find the exact value of the intensity of the image at that location, but I will also have a replica of the edge due to this little impulse. If I move 16 pixels away, I will have a second replica due to the second impulse, and so on. The good news, you might say, is that these impulses are small, and therefore the ringing artifacts will not be very pronounced. The bad news, of course, is that such a direct inversion amplifies the noise.

If we look now at the frequency response of the overall system when the least-squares filter is implemented iteratively, here is what we see. It is exactly zero at the zeros of the degradation's frequency response. However, close to those frequencies, the values are not exactly equal to one, because, as we mentioned, the frequencies close to the zeros are not fully restored; they do not match the direct inverse, since the rate of convergence towards the inverse is low at those frequency locations. If I take this back to the spatial domain, I see this picture: the main impulse is not exactly at 1 now, and the train of impulses I had before tapers off, but the impulses close to the big one are now larger. Again, the spacing is 8 pixels. So if I convolve this impulse response with the original image, I do see that 8 pixels away from an edge I will have a pronounced replica of the edge due to this impulse, 16 pixels away I will have another replica, and after two or three replicas the ringing artifacts fade out. This is exactly what we observed in the images shown in the previous slide. So with the iterative least-squares filter, we deviate from the delta here, or from one here, and this gives rise to the ringing artifacts, but at the same time we can control the noise amplification this way. There is, therefore, an important engineering tradeoff that takes place when I compare the various filters.
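The generalized-inverse part of this analysis is easy to reproduce numerically; a small sketch follows, with the signal length being an assumption.

```python
import numpy as np

n = 256
h = np.zeros(n)
h[:8] = 1.0 / 8.0                     # 1D motion blur over 8 pixels
Hf = np.fft.fft(h)                    # frequency response of the degradation

R = np.zeros_like(Hf)                 # restoration filter (generalized inverse)
nonzero = np.abs(Hf) > 1e-10
R[nonzero] = 1.0 / Hf[nonzero]        # invert everywhere except the exact zeros

s_all = np.real(np.fft.ifft(Hf * R))  # overall impulse response h * r
# s_all[0] is close to, but not exactly, 1, and small impulses appear
# every 8 samples: these replicate edges and produce the ringing.
```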
So we have reached the midpoint of the course; you have just completed 50% of the material. My warmest congratulations to all of you.

During this sixth week, we learned that the recovery problems encountered in practice, which come under different names such as restoration, super-resolution, and pansharpening, are all inverse problems. We actually discussed another inverse problem in detail, that of motion estimation, earlier in the class, during week four. We established the notation for representing images as vectors and the degradation equation in matrix-vector form. This representation allows us to model and provide restoration solutions to a larger class of problems than the restoration-deconvolution problem. It also provides us with the flexibility of utilizing a number of optimization techniques from the literature.

We then discussed three widely used approaches for solving the image restoration problem. The simplest possible one is the least-squares approach, also referred to as inverse filtering. The extension of least squares is constrained least squares, with which prior knowledge about the solution is incorporated into the solution process; this is also an application of the theory of regularization. Finally, a general framework was presented, the set-theoretic estimation approach, which is applicable to solving various other recovery problems in addition to the restoration problem. This is of course also the case with the other methods we discussed, such as least squares and constrained least squares. A specific synthetic image example was used throughout the presentations so as to easily demonstrate the relative advantages and shortcomings of each method.

As you saw, the topic of recovery is rather mathematical, drawing material from linear algebra, estimation theory, optimization, and so on. Even if not all the derivations and algorithmic details are crystal clear to you, you can still have a good basic understanding of the approaches and be able to make immediate use of them in your work and study environment. Also, equipped with the knowledge acquired this week, you should be able to provide solutions to other inverse problems. We will continue next week on the same topic, since it is a rich and important one. So, see you next week.