So, let's touch on the virtualization concepts themselves. First of all, why are we even interested in virtualization? Fundamentally, if we look at what has happened in server technology in the past 20 years or so, driven by Moore's Law we've seen a transformation from a single core inside a CPU to a very large number of cores inside CPUs, and similarly from single-socket servers to multi-socket servers. In order to utilize those resources, the design of applications themselves, from the historical model where you had the hardware, the operating system, and then a collection of applications running on top, needs to transform so that those systems can more fully utilize the resources that are available.

So what we've done with virtualization, and this is not a new concept, goes back 50 years or so. From a server standpoint, in the high-volume servers that we look at today, it probably only goes back about 12 or 15 years. But computer scientists saw this on mainframes a very long time ago: they realized that in order to maximize the utilization of the physical resource, this concept we now call virtualization came into play. That is, we can have multiple instantiations of operating systems co-resident on a single hardware platform, and then on top of those operating system instantiations, interesting applications or interesting services.

Before virtualization came into place, we really had a single instantiation of the operating system. The hardware and the software were very tightly coupled; the application itself was specific to that operating system, which in turn was very specifically tied to that hardware. Running multiple applications with a great degree of variance in them, not just additional instantiations of the same application but quite divergent applications, often led to conflicts in the utilization of those physical resources. There were stranded resources as well. So we had conflicts and we had stranded resources. Then there was also some concern about security. If you think about multi-tenancy and the fact that you might have applications from two financial institutions running on the same set of servers, they may be concerned about those coming together.

So fast-forward to server virtualization: this is one of the ways that we can break the dependency I spoke about between the hardware, the operating system, and the application that's running on top of it. Yet we can still have a single platform from a management standpoint, as opposed to simply deploying multiple copies of all that hardware. It allows us a certain degree of independence from the underlying network, compute, and storage hardware that we spoke of in the previous section. It also gives us relatively strong isolation, and this is a security statement; I gave you the two-financial-institutions example before. The multiple operating systems then have a stronger security boundary on the hardware. While no system is perfectly secure, this gives us a greater degree of security. So, what are some of the key properties that we look for in these things that we call virtual machines?
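To make that co-residency concrete, here is a minimal sketch that lists the guest operating systems running on a single physical host. It assumes a Linux/KVM host with the libvirt Python bindings installed and read access to the local qemu:///system hypervisor connection; the domains it prints are simply whatever guests happen to be defined on that host.

```python
# List the operating-system instances (VM "domains") co-resident on one host.
# Assumes: a Linux/KVM host, the libvirt Python bindings (libvirt-python),
# and read access to the local qemu:///system hypervisor.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, _cpu_time = dom.info()
        running = "running" if dom.isActive() else "stopped"
        # Each domain is a full OS instance with its own slice of CPU and memory.
        print(f"{dom.name():<20} {running:<8} vCPUs={vcpus} mem={mem_kib // 1024} MiB")
finally:
    conn.close()
```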
What we mean by that is an application running in multiple instantiations on a single set of network, compute, and storage resources. First of all, there's partitioning, so that we can run multiple operating systems on one physical machine if that's necessary and desired. We can divide those resources and, in some cases, allocate and manage them separately for the workloads that are running on top of those different instantiations of the operating systems. Then there's the fault isolation I spoke of, from a memory-management, security, and fault standpoint. If, for whatever reason, a particular operating system hits a critical defect and its application has to restart, it doesn't take down the rest of the services running on that virtualized machine, because of that independence. Whereas if they're all running on top of a single operating system, it's conceivable that one application could cause that operating system to hit a segmentation fault, and then you'd lose the services of all the other applications running in that environment. Then there's the concept of encapsulation. Encapsulation gives us the ability to have a namespace, if you will, for the entire state of that virtual machine, and that state can be saved and preserved so that we can generate a replica of it. That leads us into migration, and it introduces some lifecycle-management capabilities for the application that we may not have had on a purpose-built system in the past.

So, let's take a look at that encapsulation concept and what it allows us to do. It allows us to suspend operations: we can actually stop that instantiation of the operating system, not just the application itself, but the operating system itself. Once it's stopped or suspended, we can do something with it and then restart it; we've made a change to it, let's restart it. We can also snapshot it: when it's in a suspended state, for example, we can go through and make an exact copy of it, using something like disk duplication, and then prepare it to be moved to another environment. The reason behind that is that I may want to make a copy of it and clone it to another machine, which is a pretty difficult thing to do when we think about the purpose-built systems of the past, where the operating system was a single instantiation and all the applications lived on it. And the reason for doing that might be lifecycle management: I want to upgrade the software, but I want to upgrade it on a trial basis and see that the replicated instance can be upgraded and will still provide the services that I want. If I like it, I keep it running; if not, I may shut it down and move my traffic back to the previous instance on the platform. There are many other things we can do as well, such as record-and-replay operations from a functional standpoint. And again, the driver behind this is to improve the utilization of the hardware as we continue to provide services of increasing capability, driven by increasing core counts, increasing memory, and increasing I/O capabilities. We can also, from that isolation standpoint, maintain some type of fault tolerance in our platform, independent of the application itself. So, those are some of the things that we get from that standpoint.
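As a rough illustration of those encapsulation operations, here is a minimal sketch, again assuming a Linux/KVM host with the libvirt Python bindings and an existing guest domain named "vm1" (a hypothetical name for this example). It suspends the guest, takes a snapshot that captures its state so it could seed a clone or a trial upgrade, and then resumes it.

```python
# Suspend a VM, snapshot its state, and resume it -- the encapsulation
# operations described above. Assumes a Linux/KVM host, the libvirt Python
# bindings, and an existing guest domain named "vm1" (hypothetical name).
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-upgrade</name>
  <description>State captured before a trial software upgrade</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("vm1")

    dom.suspend()                        # pause the whole OS instance, not just the app
    dom.snapshotCreateXML(SNAPSHOT_XML)  # capture its state; this can seed a clone
    dom.resume()                         # carry on serving traffic

    # If the trial upgrade on the clone goes badly, the snapshot lets us roll
    # back and keep traffic on the previous, known-good instance.
finally:
    conn.close()
```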