Welcome to our next lecture. In this lecture we are going to talk about limiting usage on our Kubernetes cluster, and the first thing we will cover is limiting resources on pods. We know that pods are essentially containerized processes: Kubernetes does not use containers directly, it wraps containers in pods, and a pod is one or more containers wrapped together. A container is essentially a process that is isolated from the rest of the system. Since the process is already isolated, the question arises: can we restrict the process further in some way? Can we restrict, for example, the amount of memory that the process can use, or allocate some memory to the process? The answer is yes.

So how do we do that? Within Kubernetes we have two main concepts for pods: resource requests and resource limits. The way to think about these is that resource requests specify the least amount of resources that the pod requires; with resource requests we essentially tell the scheduler that this pod should not run on a node that does not have sufficient resources. Resource limits are the opposite: they limit the resources that the pod can use. With resource limits we tell Kubernetes not to let this pod consume more resources than we configure.

So let's take a look at a resource manifest. We have our YAML file, in this case the manifest of a deployment. Within the spec we have our containers, and within the first container object we have a resources section where we can configure requests and limits. In the requests we configure that this particular container requires at the very least 10 CPU millicores and 20 mebibytes of memory, and in the limits we configure that this container shall not exceed 80 millicores and 100 mebibytes. At this point you might wonder what this millicore unit is. For CPU, Kubernetes uses the unit of millicores, where 1 CPU = 1000m, and the CPU can be a virtual CPU or a physical CPU depending on the host.

So how do we set these resources? We can edit the YAML descriptor directly, or we can use an imperative kubectl command: the set command. What are we setting? Resources. The kubectl set resources command then expects the resource type, so we are setting the resources on a deployment, followed by the deployment name, which in our case could be hello-world-nginx. Then we simply configure the actual resources, for example --requests as per the YAML file above, and then the limits, as shown in the sketch below.
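To make this concrete, here is a minimal sketch of the deployment manifest described above, using the request and limit values from the lecture (10m/20Mi requests, 80m/100Mi limits). The deployment name hello-world-nginx is our running example; the container name and image are only placeholder assumptions for this sketch.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-nginx
  template:
    metadata:
      labels:
        app: hello-world-nginx
    spec:
      containers:
      - name: nginx              # container name assumed for this example
        image: nginx:1.25        # image assumed for this example
        resources:
          requests:
            cpu: 10m             # pod is only scheduled on a node that can satisfy these requests
            memory: 20Mi
          limits:
            cpu: 80m             # the container may not consume more than these values
            memory: 100Mi
```

The equivalent imperative command could look roughly like this:

```sh
kubectl set resources deployment hello-world-nginx \
  --requests=cpu=10m,memory=20Mi \
  --limits=cpu=80m,memory=100Mi
```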
So how do we view the requests and limits? We can of course view them per individual pod, but maybe a nicer way is to describe a node. Within Kubernetes, a node is a machine that runs the Kubernetes platform and hosts a number of pods, and when you describe a node you can see the CPU requests, memory requests and limits for each particular pod that the node is hosting, followed by the totals of those requests and limits. And if you have installed the Metrics API, you can also execute the kubectl top command and view these metrics via kubectl; but again, this requires additional components, such as the metrics-server, to be deployed on your Kubernetes cluster.
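For reference, the viewing commands discussed above could look something like this; <node-name> and <pod-name> are placeholders, and the kubectl top commands only work when the Metrics API (metrics-server) is available in the cluster.

```sh
# Requests/limits of every pod on the node, plus the allocated totals
kubectl describe node <node-name>

# Requests and limits of an individual pod
kubectl describe pod <pod-name>

# Live resource usage (requires the metrics-server / Metrics API)
kubectl top node
kubectl top pod
```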