Convolutions are simple operations used to detect features in different areas of an image. In the previous video, I demonstrated how to apply a simple convolution to an image. Up next, I'll show you some ways to tweak those convolution operations. I'll review what padding and stride are and share some intuition behind the use of padding for convolutions.

So take the grayscale image from the previous example. When you apply that 3 x 3 convolutional filter to it, you start by computing the element-wise product of the filter with the first 3 x 3 section of the image, at the top-left corner. Then you move the filter one pixel to the right and do the same set of computations. After that, you move the filter another pixel to the right, and so on, then you move it down one pixel and start from the left again, until you reach the end. Moving one pixel to the right and one pixel down is called using a stride of one, so you're only moving the filter one pixel each time. The stride doesn't have to be the same value for going right as for going down, either. Your stride could be two to the right and four down, for example. The larger your strides, the less of your image your filter covers, but at the same time the computation gets faster, so there's definitely a trade-off here. A stride of two means that you take two steps to the right each time: your filter still starts in the top-left corner, but now it moves two pixels to the right each time, and because the stride is two both for right and for down, it also moves two pixels down, so you only do four computations. That's much faster. As you can see, the stride determines how many sections of the image your filter will visit. In this case, the result of the convolution is just a smaller 2 x 2 matrix, because you visited four of those 3 x 3 sections with the filter: two at the top and two at the bottom.

Padding is another tweak to the basic convolution. Take this 3 x 3 image and this 2 x 2 filter. The pixel values are not important here, so I've left them out. When you compute the convolution with a stride of one in this case, you visit four sections of the image: one in the top-left corner, one in the top-right corner, one in the bottom-left, and one in the bottom-right. In the end, the pixel in the center gets visited four times, while the pixels in the corners are only visited once and the other pixels are visited twice. This means that the information in the center typically gets more attention than the information on the edges. That's a problem if you have, let's say, features on the edges that you care about. For example, there could be an entire dog in this pixel over here, or part of a dog, like its nose, up here, and that's useful information, so you want to make sure your filter puts equal emphasis across the entire image. To address this, you can put a frame around the image so all of the original information ends up at the center of this new, larger picture, exactly like in a framed picture. This is called padding. The padding size can vary depending on the size of your filter and the size of the image. The value used for the padding can actually be anything, but it's often set to zero, which is called zero-padding. When you do this before computing the convolution, the filter scans the image along with its frame, but this time every pixel in the image is visited the same number of times.
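As a rough sketch of these mechanics (my own illustration, not code from the course), here's a tiny NumPy version of that sliding-window computation. The function name conv2d, the 5 x 5 image, and the averaging filter are just assumptions chosen so the output shapes match the example: stride 1 gives a 3 x 3 result, stride 2 visits only four sections and gives a 2 x 2 result.

```python
import numpy as np

def conv2d(image, kernel, stride=(1, 1), pad=0):
    """Slide `kernel` over `image`, taking the element-wise product and
    summing at each position. `stride` is (down, right); `pad` is the
    width of the zero frame added around the image (zero-padding)."""
    if pad > 0:
        image = np.pad(image, pad, mode="constant", constant_values=0)
    kh, kw = kernel.shape
    sh, sw = stride
    out_h = (image.shape[0] - kh) // sh + 1
    out_w = (image.shape[1] - kw) // sw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            section = image[i * sh : i * sh + kh, j * sw : j * sw + kw]
            out[i, j] = np.sum(section * kernel)
    return out

# Hypothetical 5x5 grayscale image and a 3x3 averaging filter.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0
print(conv2d(image, kernel, stride=(1, 1)).shape)         # (3, 3)
print(conv2d(image, kernel, stride=(2, 2)).shape)         # (2, 2) -- four computations
print(conv2d(image, kernel, stride=(1, 1), pad=1).shape)  # (5, 5) -- padded, same size as input
```

Note how the larger stride shrinks the output (fewer sections visited, less computation), while the zero frame keeps the output the same size as the input.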
It's really just the frame that's visited fewer times. That's because now, all the pixels of the image are located in the center, surrounded by that frame. To sum up, the stride tells the filter how to scan the image, and it influences the size of the output and how often pixels are visited. Padding is a frame put around an image in order to give the pixels at the edges of the image the same importance as the ones in the center.
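To make that visit-count argument concrete, here's a small sketch (again my own illustration, with a hypothetical helper name visit_counts) that counts how many filter positions cover each pixel of the 3 x 3 image with the 2 x 2 filter, with and without a one-pixel zero frame.

```python
import numpy as np

def visit_counts(img_h, img_w, kh, kw, pad=0):
    """Count how many filter positions cover each original pixel when a
    kh x kw filter scans the (optionally zero-padded) image with stride 1."""
    counts = np.zeros((img_h + 2 * pad, img_w + 2 * pad), dtype=int)
    for i in range(counts.shape[0] - kh + 1):
        for j in range(counts.shape[1] - kw + 1):
            counts[i : i + kh, j : j + kw] += 1
    # Drop the frame so we only look at the original image's pixels.
    return counts[pad : pad + img_h, pad : pad + img_w]

# 3x3 image, 2x2 filter, no padding: center visited 4 times,
# corners once, edge pixels twice.
print(visit_counts(3, 3, 2, 2, pad=0))
# With a one-pixel zero frame, every original pixel is visited 4 times.
print(visit_counts(3, 3, 2, 2, pad=1))
```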