Image filters have become as ubiquitous as social media.

Most of the images you see are edited versions of the originals, with several filters applied.

This is the reason the no-filter hashtag has grown into prominence on social media.

The reality, though, is that there is hardly any image that truly qualifies for the no-filter hashtag.

Why? Because image filtering is

embedded within the imaging system itself.

For instance, in Course One we studied the use of the Bayer filter in creating an RGB image from the image mosaic.

Every image filter has

a very specific and dedicated functionality.

To name a few of them,

we can filter images in order to add soft blur,

sharpen details, accentuate edges or remove noise.

A vertical edge detection filter will detect the vertical edges in the image.

Sounds simple and easy, right?

The real challenge here is,

how do we come up with these filters?

You will have to design

appropriate filters that can deliver

the functionality you want

in a robust and efficient manner.

To understand spatial image filtering,

let us look at this toy example where we

use a three by three moving average box filter.
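Here is a minimal NumPy sketch of such a moving-average box filter (the function name, the edge-replicated padding, and the toy step image are our own illustrative choices, not part of the lecture):

```python
import numpy as np

def box_filter(img, k=3):
    """k-by-k moving-average box filter with edge-replicated padding."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    # Sum the k*k shifted copies of the image, then divide by k*k.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

# A sharp step edge becomes a gradual ramp after averaging.
step = np.zeros((5, 6)); step[:, 3:] = 9.0
blurred = box_filter(step, 3)
```

Increasing `k` widens the ramp, which is exactly the growing blur described above.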

Now let us play around with

the size of the averaging box filter.

As we keep increasing the size,

the image becomes more and more blurred.

But along with the blur,

something strange happens in the resultant image.

Do you notice these artifacts?

Earlier there was only one edge here,

whereas now we have two.

There is a strong reason for this,

and we will study this problem in detail while

we discuss the image frequency domain analysis.

But for now, let us look at

more efficient averaging filters out there.

We see that the moving average filter assigns

the same weight to

all the pixels that we are working with.

Let us check what effect we observe if we give more weightage to the center pixels.

The standard averaging filter in the field of image processing has its weights derived from the Gaussian function.

Do not be overwhelmed by the equation of the Gaussian function.

The two parameters that you need to look out for are the mean and the variance.

The mean value defines where the Gaussian is centered, and the variance defines the spread of the function.

Now, let us sample

the continuous Gaussian function

into a three by three discrete filter.
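A minimal sketch of that sampling step in NumPy (the sigma value here is an arbitrary illustration):

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Sample a 2-D Gaussian centred on the kernel and normalise to sum 1."""
    ax = np.arange(size) - size // 2          # e.g. [-1, 0, 1] for size 3
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()                        # weights must sum to 1

k = gaussian_kernel(3, sigma=1.0)
# The centre pixel gets the largest weight,
# and the weights fall off symmetrically around it.
```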

Let us look at the result of Gaussian filtering

on the same example picture we used earlier.

Notice that even if we increase the size of the Gaussian filter, we do not see any of the artifacts that we saw earlier when we were using the box averaging filter.

The averaging filters we've discussed so far can be categorized as linear image filters, where the output pixel value is a linear combination of the input pixel values.

A simple example of a non-linear filter is the median filter.

Like the name suggests,

we find the median value of the intensities within

a given neighborhood and

replace the output pixel with that value.

The median filter is known for its effectiveness in removing salt and pepper noise in an image.

Let us look at the effect of applying median filter

on an image with salt and pepper noise,

and compare it with the result of an averaging filter.

Which one looks better?

Clearly the median filter, right?
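This comparison is easy to reproduce in a few lines of NumPy (a hand-rolled 3-by-3 median filter and a single "salt" pixel, sketched here purely for illustration):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge-replicated padding."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    windows = [p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    return np.median(np.stack(windows), axis=0)

# A flat grey image with one "salt" pixel.
img = np.full((7, 7), 100.0)
img[3, 3] = 255.0

cleaned = median_filter3(img)
# The outlier is removed entirely, because 255 is never the middle value
# of any 3x3 neighbourhood; a box average would instead smear it around.
```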

Thanks to the median filter, the images taken on your smartphone look great over the years, even as thousands of pixels on the image sensor become dysfunctional or corrupt.

Let us dive deep into

the concepts of linear image processing.

Now, let us look at the two ways in which we can apply

a given filter kernel on a given image.

I have used the word kernel for the first time.

A filter kernel is simply the matrix that holds the values of the image filter.

The first formula you see is called correlation, and the second one you see is called convolution.

Both these concepts are borrowed

from the field of signal processing.

Instead of delving deeper

into their definitions at this time,

let us empirically learn what

these formulas do when we apply a filter on an image.

Often, convolution and correlation operations

are viewed as one and the same.

Convolution is often preferred over correlation because the response to an impulse is identical to the filter itself.

You may have already heard about

convolution neural networks abbreviated as CNNs,

which are the pillars of deep learning field.

Correlation flips the output by 180 degrees, giving a counter-intuitive result.

If the filter kernel is symmetrical,

both the convolution and

correlation give the same result.

Convolution is more famous than correlation.

Let us now look at how we detect edges in an image.

An edge can be defined as a set of

contiguous pixel positions where

the intensity change is high.

An edge can be formed in an image for multiple reasons.

Across an edge, the gradient value in one direction is very high, while the gradient value in the orthogonal direction is low.

Edges can be detected using

either first order or second order derivative filters.

Let us look at the intensity plot

of this image which has noise.

If we sample a row from this image and apply a first order derivative filter, it falsely detects edges at locations where there is noise.

To overcome this, we can smooth

the image before applying

a first order derivative filter.

We could use the associative property of convolution,

to reduce this two step process into a single step.

Applying a Gaussian smoothing filter and then a derivative filter will have the same result as applying a derivative-of-Gaussian filter on the given image.
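In 1-D, this associativity is easy to verify numerically (the sampled Gaussian, the central-difference kernel, and the step signal below are all illustrative choices):

```python
import numpy as np

x = np.arange(-4.0, 5.0)
g = np.exp(-x**2 / 2.0); g /= g.sum()        # sampled Gaussian, sigma = 1
d = np.array([1.0, 0.0, -1.0])               # central-difference derivative

signal = np.r_[np.zeros(10), np.ones(10)]    # a noiseless step edge

# Two-step: smooth first, then differentiate.
two_step = np.convolve(np.convolve(signal, g), d)

# One-step: pre-convolve the two kernels into a
# derivative-of-Gaussian filter, then filter once.
dog = np.convolve(g, d)
one_step = np.convolve(signal, dog)
# By associativity of convolution, the two results are identical.
```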

Here are the plots of

Gaussian and the derivative of Gaussian.

The digital approximation of the derivative of Gaussian in both x and y directions is widely known as the Sobel filter.

When the Sobel filter is applied, we get the magnitude of the edge strength in both x and y directions, as shown in the figure.
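A minimal sketch of Sobel filtering on a synthetic vertical edge (hand-rolled valid-mode correlation, for clarity):

```python
import numpy as np

sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])   # responds to vertical edges
sobel_y = sobel_x.T                   # responds to horizontal edges

def corr_valid(img, k):
    """Valid-mode 2-D correlation with a 3x3 kernel."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    return np.array([[np.sum(img[i:i + 3, j:j + 3] * k) for j in range(w)]
                     for i in range(h)])

img = np.zeros((6, 6)); img[:, 3:] = 1.0    # a vertical step edge
gx = corr_valid(img, sobel_x)
gy = corr_valid(img, sobel_y)
mag = np.hypot(gx, gy)                      # gradient magnitude
# Note: the response is non-zero over two columns, not one --
# this is the spread that non-maxima suppression later cleans up.
```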

If you observe closely,

the edge responses spread across several pixels.

Ideally, an edge should be one pixel wide.

To find a contiguous, pixel-wide edge outline, we can apply a technique called non-maxima suppression.

We can also detect edges

using second order derivative filters,

like a Laplacian filter.

How do you think we arrived at

this filter kernel that is

equivalent to a continuous Laplacian function?

Edges are detected at locations wherever we find

zero crossing in the output image

produced by a second order derivative filter.
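In 1-D the zero-crossing behaviour is easy to see (the ramp signal below is an illustrative stand-in for a row of an image):

```python
import numpy as np

# The common 3x3 discrete Laplacian kernel; its entries sum to zero,
# so it produces no response on flat regions of the image.
lap = np.array([[0.,  1., 0.],
                [1., -4., 1.],
                [0.,  1., 0.]])

# The second derivative of a ramp edge has a positive lobe followed by
# a negative lobe, with the zero crossing at the centre of the edge.
row = np.array([0., 0., 0., 1., 2., 2., 2.])
second = np.convolve(row, [1., -2., 1.], mode="valid")
# second == [0, 1, 0, -1, 0]: the sign change marks the edge location.
```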

For large filter kernels,

a more efficient implementation

exists if the filter is separable.

The associative property of the convolution operation,

can be used to implement separable filtering.
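A quick numerical check that a separable 2-D filter can be applied as two cheaper 1-D passes (using the 3x3 binomial blur kernel as an example; the random test image is illustrative):

```python
import numpy as np

g1 = np.array([1., 2., 1.]) / 4.0   # 1-D smoothing kernel
g2d = np.outer(g1, g1)              # separable 3x3 kernel (rank 1)

def corr_valid(img, k):
    """Valid-mode 2-D correlation."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    return np.array([[np.sum(img[i:i + kh, j:j + kw] * k) for j in range(w)]
                     for i in range(h)])

img = np.random.default_rng(0).random((8, 8))

# Direct 2-D filtering: k*k multiplies per pixel.
direct = corr_valid(img, g2d)
# Separable filtering: filter the rows, then the columns -- 2*k multiplies.
rows = np.array([np.convolve(r, g1, mode="valid") for r in img])
sep = np.array([np.convolve(c, g1, mode="valid") for c in rows.T]).T
# The two results agree to machine precision.
```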

The distributive property of the convolution operation can be applied to build steerable filters.

Let us see how this is done.
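As a rough sketch of the steering idea (the simple finite-difference kernels below are stand-ins for the derivative-of-Gaussian pair usually used): by distributivity, filtering with a kernel steered to angle theta gives the same result as combining the two fixed basis responses, so any orientation costs only two base convolutions.

```python
import numpy as np

def corr_valid(img, k):
    """Valid-mode 2-D correlation with a 3x3 kernel."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    return np.array([[np.sum(img[i:i + 3, j:j + 3] * k) for j in range(w)]
                     for i in range(h)])

# Simple x- and y-derivative basis kernels.
Gx = np.array([[0., 0., 0.], [-1., 0., 1.], [0., 0., 0.]])
Gy = Gx.T

theta = np.pi / 3                                    # an arbitrary angle
steered = np.cos(theta) * Gx + np.sin(theta) * Gy    # kernel steered to theta

img = np.random.default_rng(1).random((7, 7))
# Distributivity of convolution over addition:
# steering the kernel == steering the two basis responses.
resp_a = corr_valid(img, steered)
resp_b = np.cos(theta) * corr_valid(img, Gx) + np.sin(theta) * corr_valid(img, Gy)
```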

Well, we hope this lecture gave you a good introduction to digital image filter design, and to ways of improving efficiency using principles from the signal processing domain.

In the next lecture,

we will perform deeper analysis of

the image filters by studying them in frequency domain.